{"id":459214,"date":"2026-03-05T16:59:09","date_gmt":"2026-03-05T16:59:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/459214\/"},"modified":"2026-03-05T16:59:09","modified_gmt":"2026-03-05T16:59:09","slug":"ai-tools-can-unmask-anonymous-accounts","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/459214\/","title":{"rendered":"AI tools can unmask anonymous accounts\u00a0"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Do you have a Reddit alt, secret X, finsta, or Glassdoor account you trash your boss with? AI might have just made it a lot easier to unmask you. That\u2019s the conclusion of a <a href=\"https:\/\/arxiv.org\/pdf\/2602.16800\" rel=\"nofollow noopener\" target=\"_blank\">recently published study<\/a>, which hints at some uncomfortable consequences for staying private online \u2014 even if it\u2019s not quite time to hold a funeral for anonymity just yet.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The finding, which has not been peer reviewed, comes from researchers at ETH Zurich, Anthropic, and the Machine Learning Alignment and Theory Scholars program. They built an automated system of AI agents using unspecified models \u2014 capable of searching the web and interacting with information much like a human investigator \u2014 to test how effectively large language models can reidentify anonymized material. The system \u201csubstantially outperforms\u201d traditional computational techniques for deanonymizing accounts, scouring text for personal details at a grand scale.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The system works by treating posts or other texts as a set of clues. 
It analyzes the text for patterns \u2014 writing quirks, stray biographical details, posting frequency and timing \u2014 that might hint at someone\u2019s identity. It then scans other accounts, potentially millions of them, looking for the same mix of traits. Probable matches are flagged, compared in more detail, and winnowed down into a shortlist of likely identities.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Rather than targeting unsuspecting users, the team evaluated the system using datasets built from publicly available posts, including content from Hacker News and LinkedIn, transcripts of Anthropic\u2019s interviews with scientists on how they use AI, and Reddit accounts that were deliberately split into two anonymized halves for testing. The paper reports that in each setting the LLM-based approach correctly identified up to 68 percent of matching accounts with 90 percent precision. By contrast, comparable non-LLM methods, like connecting scattered data points across large datasets, identified almost none.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The results weren\u2019t uniform across every dataset, and, predictably, the model performed better when it had more structured information to work with. In one experiment examining Reddit users posting about films in the main r\/movies subreddit and smaller film communities, the system was able to link accounts that mentioned just one movie about 3 percent of the time at 90 percent precision. 
When users mentioned 10 or more films, the success rate climbed to nearly half.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">An experiment using Anthropic\u2019s survey of scientists, meanwhile, identified nine of the 125 respondents, a recall rate of roughly 7 percent. In that test, the system built a profile of each respondent based on clues in their answers and then searched publicly available information on the web for likely matches. In an example match, the researchers highlight how references to a \u201csupervisor\u201d could suggest a PhD student and that the use of British English could hint at a UK affiliation. Combined with mentions of a background in the physical sciences and current work in biology research, the system was able to narrow the field to a particular candidate.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Still, the researchers argue that the ability to identify any respondents from unstructured text is noteworthy, replicating in minutes what would have taken a human investigator hours to do. Moreover, they told The Verge that performance is likely to improve as AI systems grow more capable and gain access to larger pools of data. More broadly, they caution that it may no longer be safe to assume that posting pseudonymously will protect online identities, past or future.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">\u201cEvery single thing the LLM found in principle could be found by a human investigator.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">\u201cInformation on the internet is there forever,\u201d said Daniel Paleka, a researcher at ETH Zurich and one of the study\u2019s authors. 
That persistence could translate into tangible, real-world risks for journalists, dissidents, and activists relying on pseudonyms, the researchers warn, while also enabling \u201chyper-targeted advertising\u201d and \u201chighly personalized\u201d scams.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The risks of deanonymizing accounts aren\u2019t novel, nor are they unique to AI. \u201cEvery single thing the LLM found in principle could be found by a human investigator,\u201d Paleka told The Verge.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">What is new, Paleka argues, is the end-to-end automation. Work that once required a diligent investigator willing to patiently sift through posts hunting for small nuggets of information can now be carried out far more easily and across a far larger number of targets.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">It\u2019s also cheap. The researchers said their experiment cost less than $2,000, a cost of between $1 and $4 for each profile they ran the AI agent on. \u201cThe economics are totally different now,\u201d coauthor Simon Lermen told The Verge, warning that the lower barrier to entry could expand who has the ability \u2014 and incentive \u2014 to try and pierce online anonymity. 
Groups that have historically \u201cflown under the radar\u201d may find it hard to continue doing so, he said.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">People \u201cmight misunderstand this important research and conclude that privacy is dead.\u201d It isn\u2019t.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">It\u2019s important not to overstate the findings. \u201cWhile these algorithms are improving, they remain far from what humans can do,\u201d Luc Rocher, an associate professor at the Oxford Internet Institute, told The Verge. The work does not neatly map onto the real world; experiments were done under laboratory conditions using datasets that had been carefully curated and anonymized for the purposes of testing. They said they worry people \u201cmight misunderstand this important research and conclude that privacy is dead.\u201d It isn\u2019t, they argued.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Despite years of incremental progress in techniques designed to unmask anonymous users, \u201cthe identity of Satoshi Nakamoto, the inventor of Bitcoin, remains a mystery after more than a decade,\u201d Rocher said. Whistleblowers, they added, can still communicate with journalists without being exposed, and tools like Signal \u201chave so far been successful in protecting our collective privacy.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">In the paper, the researchers said they avoided testing their system on actual pseudonymous users because of ethical concerns. For similar reasons, they did not publish the full technical details of their approach and declined to provide a demonstration when asked. 
The team also would not say whether they had tested the system outside the confines of the study, again citing ethical concerns, leaving open the question of how reliably it would perform against real-world accounts.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">For people already deeply committed to anonymity, the practical impact may be limited. Basic precautions \u2014 keeping accounts separate, limiting personal details, avoiding identifiable patterns like posting only during waking hours in your time zone \u2014 are still critical.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">For those treating pseudonyms more casually, Paleka and Lermen advised users to think carefully about what gets posted in public forums, even accounts that feel anonymous, and to keep in mind that what\u2019s already out there can be pieced together more easily than many assume.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Responsibility shouldn\u2019t rest entirely on users, the researchers argue. Lermen said AI labs should monitor how their tools are being used and build safeguards to stop them being used to deanonymize people. Social media platforms, he added, could clamp down on the scraping and mass data extraction that make such efforts possible.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Satoshi, in other words, is probably safe from AI sleuths. Your throwaway AITA post on Reddit? 
That might be another matter.<\/p>\n","protected":false},"excerpt":{"rendered":"Do you have a Reddit alt, secret X, finsta, or Glassdoor account you trash your boss with? 
AI&hellip;\n","protected":false},"author":2,"featured_media":459215,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,2811,1058,227,86,56,54,55],"class_list":{"0":"post-459214","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-privacy","12":"tag-report","13":"tag-tech","14":"tag-technology","15":"tag-uk","16":"tag-united-kingdom","17":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/459214","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=459214"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/459214\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/459215"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=459214"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=459214"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=459214"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}