Security experts have raised significant concerns about the privacy, safety, and data management of new AI-powered browsers, including OpenAI’s newly launched ChatGPT Atlas, following recent research disclosures and new classes of attack targeting AI-enabled browsing features.
Researchers from JFrog recently disclosed CVE-2025-6515, a vulnerability found in the Oat++ implementation of Anthropic’s Model Context Protocol (MCP). This vulnerability centres around predictable session IDs, potentially allowing attackers to hijack ongoing AI agent sessions and insert malicious responses into active conversations. The flaw highlights the growing security complexities introduced by advanced conversational AI agents integrated into web workflows.
Ken Johnson, Chief Technology Officer at DryRun Security, commented on the broader implications of such flaws in the context of modern AI and web security:
This is exactly the kind of complex logic flaw, specifically insecure session management that pattern-matching scanners will never catch. The attack is deceptively simple but technically elegant: an adversary floods the server with session requests, detaches those supposedly unguessable identifiers from their original users, and waits until one of those values is reassigned to a legitimate session. From there, the impact can be devastating, giving access to confidential conversations, product roadmaps, API keys, or credentials, all without detection. What makes this so dangerous is its subtlety; the victim might never realise it happened. These flaws live in how we handle state, identity, and session lifecycle-not in the surface-level code.
The discovery of CVE-2025-6515 comes as adoption of AI-powered browsing grows. OpenAI’s recent unveiling of ChatGPT Atlas has accelerated dialogue regarding data privacy, user tracking, and the risks associated with embedding AI into fundamental web experiences. ChatGPT Atlas introduces features such as “browser memories” and “agent mode”, enabling the AI to recall past user activity and carry out tasks autonomously, but also sparking queries about the extent of data access such capabilities necessitate.
Privacy risks
Dray Agha, Senior Manager of Security Operations at Huntress, spotlighted a disparity between OpenAI’s privacy assurances and implementation, noting that model training on user data was enabled by default at launch, despite indications to the contrary. Agha warned that browser memories, which aggregate detailed browsing profiles, present ongoing privacy risks due to uncertainty around their storage, processing, and security. The agent mode, meanwhile, raises “new questions about user control and security”, particularly if entrusted with sensitive operations such as online shopping with stored credentials.
Eamonn Maguire, Director of Engineering, AI and ML at Proton, expressed concerns over the deep personal data integration that AI browsing enables. Maguire emphasised that:
Search has always been surveillance. AI search made it intimate surveillance. OpenAI’s new ChatGPT Atlas takes another step – total surveillance… It tracks everywhere you go, what you think, want, and feel. The result? Surveillance capitalism’s final form: AI so helpful, so conversational, so human-feeling that users willingly divulge intimate details while handing over browser-level access to their entire digital life. Google Search and Chrome gave one company too much power. Atlas is that power multiplied by a thousand.
Maguire also questioned data retention and access practices and stated that “until those questions are answered transparently, we should treat Atlas and any AI-powered browser as a surveillance tool first and a productivity tool second.”
Technical exploitation
The expanding threat surface of AI browsers has been outlined in newly published research from SquareX, which uncovered an “AI Sidebar Spoofing” attack. The attack exploits browser extensions to impersonate trusted AI sidebar interfaces, tricking users into performing dangerous actions. By mimicking legitimate AI-generated instructions, malicious sidebars can exfiltrate credentials, hijack devices, or redirect users to phishing sites. The researchers demonstrated that this approach threatens not only AI-specific browsers like Comet but consumer browsers integrating AI, such as Brave and Edge, as well.
Vivek Ramachandran, Founder and Chief Executive at SquareX, explained that the proliferation of AI interfaces has created a new dynamic, where “people blindly follow AI-generated instructions without the expertise to identify security risks.” SquareX detailed several potential attack scenarios, including the exfiltration of cryptocurrency credentials by substituting legitimate links with phishing sites in AI-generated responses. The threats extend to device hijacking and ransomware distribution, often going undetected due to the extension remaining dormant until it seizes on an opportunity based on user prompts.
SquareX’s research found that these attacks relied on common, seemingly benign browser extension permissions and urged organisations to adopt dynamic analysis and granular guardrails for browser-native threats. Attacks can be delivered even if AI browsers themselves are restricted, as the vulnerability lies in browser extensions that can be installed on any traditional browser.
Calls for safeguards
Charlotte Wilson, Head of Enterprise at Check Point Software, said that while agentic browsing and AI integration bring enhanced convenience, they also “introduce a critical hidden vulnerability: misplaced trust.” Wilson emphasised the risk of personal and sensitive data accumulation for profiling or exploitation and noted that “attacking AI systems no longer requires sophisticated code. Modern exploits now rely on natural language and social engineering, drastically lowering the bar for entry.”
Recent incidents also highlight the importance of robust session management and lifecycle security. The nature of attacks is evolving from traditional technical exploits to social engineering, prompt manipulation, and leveraging predictable behaviour patterns in both humans and AI.
Managing risk
Dr. Martin Kraemer, CISO Advisor at KnowBe4, stated that the release of OpenAI’s Atlas demonstrates the requirement for organisations to focus on the human-AI interaction and to strengthen training, governance, and secure use policies to mitigate downstream risks. Javvad Malik, Lead CISO Advisor at KnowBe4, pointed out the danger of data aggregation within AI browsers, warning that people will inevitably “overshare” and may blur the lines between work and personal use. Malik recommended using separate profiles and blocking access to sensitive files from consumer AI services to limit potential exposure.
Industry voices have called for AI developers and browser providers to default to opt-out privacy modes, provide clarity on data retention, and deliver transparent disclosures about how data is used, as well as subject their platforms to independent audit. There is consensus that guardrails and access controls are needed to avoid transforming breakthroughs in AI-enabled browsing into significant vectors for privacy loss and cyber attack.