Presented and published at the 2025 USENIX Security Symposium, the paper shows how AI browser assistants including ChatGPT for Google, Sider, Monica, Merlin, MaxAI, Perplexity, HARPA, TinaMind, and Microsoft’s Copilot analyze, store, and recall user information. Researchers simulated real-world browsing scenarios in private and public spaces, ranging from reading the news and watching YouTube to accessing pornography and filling out tax documents. They probed privacy safeguards with targeted prompts to reveal what data was collected, finding that the extensions recorded images and written content including medical diagnoses, Social Security numbers, and dating-app preferences. The Merlin extension, for example, transmitted banking details and health records. Some, including Merlin and Sider, even recorded activity in private browsing mode.
Researchers decrypted network traffic to determine where data was sent and stored, revealing that several assistants transmitted webpage content to their own servers and to third-party trackers. Sider and TinaMind shared user prompts, along with identifying information such as IP addresses, with Google Analytics, allowing users to be tracked across sites. Several assistants, including Microsoft’s Copilot, stored chat histories in the browser’s background, meaning such logs could persist across sessions. Google, Copilot, Monica, ChatGPT, and Sider used browsing activity to profile users by age, gender, income, and interests in order to personalize responses across sessions.
Of the assistants analyzed, the researchers found Perplexity to be the most private: it could not recall previous interactions, and its servers never accessed personal data in private spaces. Even so, Perplexity still analyzed page titles and location.