S and V Design / iStock / Getty Images Plus
Follow ZDNET: Add us as a preferred source on Google.
ZDNET’s key takeaways AI browsers are powerful, but not necessarily secure.Experts warn of rising prompt injection and data theft risks.Use AI browsers carefully and protect your data.
This year has certainly been the year for artificial intelligence (AI) development.
With the sudden launch of OpenAI’s ChatGPT, businesses worldwide scrambled to implement the chatbot and its associated applications into their workflows; academics suddenly had to begin checking student submissions for AI plagiarism; and AI models appeared for everything from image and music generation to erotica.
Also: Is OpenAI’s Atlas browser the Chrome killer we’ve been waiting for? Try it for yourself
Billions of dollars have been poured into not only AI-powered chatbots, but also large language models (LLMs) and niche applications. AI agents and browsers are now the next evolution.
OpenAI’s Atlas makes its debut
Announced on Tuesday, OpenAI* has released ChatGPT Atlas, described as “the browser with ChatGPT built in.”
But under the hood, it’s far more than that. Joining the likes of Perplexity’s Comet, Dia, and Gemini-enabled Google Chrome, Atlas is available on Mac to begin with, with updates already promised to refine the new browser.
The OpenAI team has described Atlas as an AI browser built around ChatGPT. The chatbot integrates with each search query you submit and any open tabs and can use their content and data to answer queries or perform tasks.
Also:Â Perplexity will give you $20 for every friend you refer to Comet – how to get your cash
Based on early testing, such as when ZDNET editor Elyse Betters Picaro tasked Atlas with ordering groceries on her behalf from Walmart, the browser has promise. Uses include online ordering, email editing, conversation summarization, general queries, and even analyzing GitHub repos.
“With Atlas, ChatGPT can come with you anywhere across the web, helping you in the window right where you are, understanding what you’re trying to do, and completing tasks for you, all without copying and pasting or leaving the page,” OpenAI says. “Your ChatGPT memory is built in, so conversations can draw on past chats and details to help you get new things done.”
However, Atlas — alongside other AI-based browsers — raises security and privacy questions that need to be answered.
(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Prompt injections
Prompt injections have become an area of real concern to cybersecurity experts. A prompt injection attack occurs when a threat actor manipulates an LLM into acting in a harmful way. An attack designed to steal user data could be disguised as a genuine prompt that ignores existing security measures and overrides developer instructions.
There are two types of prompt injection: a direct injection based on user input or an indirect hijack made through payloads hidden in content that an LLM scrapes, such as on a web page.
Brave researchers previously disclosed indirect prompt injection issues in Comet, and following on from this research, have discovered and disclosed new prompt injection attacks not only in Comet but also in Fellou.
Also:Â Free AI-powered Dia browser now available to all Mac users – Windows users can join a waitlist
“Agentic browser assistants can be prompt-injected by untrusted webpage content, rendering protections such as the same-origin policy irrelevant because the assistant executes with the user’s authenticated privileges,” Brave commented. “This lets simple natural-language instructions on websites (or even just a Reddit comment) trigger cross-domain actions that reach banks, healthcare provider sites, corporate systems, email hosts, and cloud storage.”
Expert developer and co-creator of the Django Web Framework, Simon Willison, has been closely following movements in the AI browser world and remains “deeply skeptical” of the agentic and AI agent-based browser sector as a whole, noting that when you allow a browser to take actions on your behalf, even asking for a basic summary of a Reddit post could potentially lead to data exfiltration.
ZDNET asked OpenAI about the security measures implemented to prevent prompt injection and whether further improvements are in the pipeline. The team referred us to the help center, which outlines how users can set up granular controls, and to an X post penned by Dane Stuckey, OpenAI’s chief information security officer.
Stuckey says that OpenAI has “prioritized rapid response systems to help us quickly identify [and] block attack campaigns as we become aware of them,” and the company is investing “heavily” in security measures to prevent prompt injection attacks.
Sensitive data handling
Another significant security issue is trust, and whether or not you allow a browser — and LLM — to access and handle your personal data.
To allow an AI browser to perform specific tasks for you, you may be required to allow the browser access to account data, keychains, and credentials.
According to Stuckey, Atlas has an optional “logged-out mode” that does not give ChatGPT access to your credentials, and if an agent is working on a sensitive website, “Watch mode” requires users to keep the tab open to monitor the agent at work.
“[The] agent will pause if you move away from the tab with sensitive information,” the executive says. “This ensures you stay aware — and in control — of what actions the agent is performing.”
Also:Â This new Google Gemini model scrolls the internet just like you do – how it works
It’s an interesting idea, and perhaps the logged-out mode should be enabled by default. We are yet to see if this information and access can be safely handled, however, by any AI browser in the long term.
It’s also worth noting that in a new report released by Aikido, in a survey of 450 CISOs, security engineers, and developers across Europe and the US, four out of five respondent companies said they had experienced a cybersecurity incident tied to AI code. Powerful, new, and shiny tech doesn’t always mean secure.
Alex Lisle, the CTO of Reality Defender, told ZDNET that to trust the sum total of your browsing history and everything after to a browser “is a fool’s errand.”
“Not a week goes by without a new flaw or exploit on these browsers en masse, and while major/mainstream browsers are constantly hacked, they’re patched and better maintained than the patchwork that is the current AI browser ecosystem,” Lisle added.
Surveillance
Another emerging issue is surveillance. While we recommend you use a secure browser for your search queries so your activities aren’t logged or tracked, AI browsers, by design, add context to your search queries through follow-up questions, web page visit logs and analysis, prompts, and more.
Eamonn Maguire, director of engineering, AI and ML, at Proton, commented:
“Search has always been surveillance. AI browsers have simply made it personal. […] Users now share the kinds of details they’d never type into a search box, from health worries and finances to relationships and business plans. This isn’t just more data; it’s coherent, narrative data that reveals who you are, how you think, and what you’ll do next.”
Also:Â Opera agentic browser Neon starts rolling out to users – how to join the waitlist
Calling the convergence of search, browsing, and automation an “unprecedented” level of insight into user behavior, Maguire added that “unless transparency catches up with capability, AI browsing risks becoming surveillance capitalism’s most intimate form yet.”
“The solution is not to reject innovation, but to rethink it. AI assistance doesn’t have to come at the expense of privacy. We need clear answers to key questions: how long is data stored, who has access to it, and can aggregated activity still train models? Until there’s real transparency and control, users should treat AI browsers as potential surveillance tools first and productivity aids second.”
Should I use an AI browser?
As noted by Willison, in application security, “99% is a failing grade,” as “if there’s a way to get past the guardrails, no matter how obscure, a motivated adversarial attacker is going to figure that out.”
There are many “what ifs” surrounding AI browser usage right now, and for some security and programming experts like Willison, they won’t trust them until “a bunch of security researchers have given them a very thorough beating.”
Who knows — perhaps zero-day prompt injection fixes will become a standalone category in monthly patch cycles in the future.
Speaking to ZDNET, Brian Grinstead, senior principal engineer at Mozilla, said that the “fundamental security problem for the current crop of agentic browsers is that even the best LLMs today do not have the ability to separate trusted content coming from the user and untrusted content coming from web pages.”
Also:Â I use Edge as my default browser – but its new AI mode is unreliable and annoying
“Recent agentic browsing product launches have reported prompt injection attack success rates in the low double digits, which would be considered catastrophic in any traditional browser feature,” the executive commented. “We wouldn’t release a new JavaScript API that let a web page take control of the browser 10% of the time, even if the page asked politely.”
Grinstead recommends that if you want to check out an AI browser, you should avoid giving it access to your private data and avoid loading any untrusted content — and not just on suspicious or unsecure websites, but with the point in mind that untrusted data can appear on otherwise trustworthy websites, such as product reviews or Reddit posts.
In addition, the executive recommends that you review security settings, including what data any browser sends from your device, what it’s used for, and whether it’s stored.
Whether you choose to use an AI browser is up to you, although the stakes are high if you intend to allow new, relatively untested browsers access to your sensitive information.
Want more stories about AI? Check out AI Leaderboard, our weekly newsletter.