Elon Musk’s controversial Grokipedia has begun creeping into ChatGPT and other chatbots’ responses as a cited source, giving us a glimpse of the dead internet that’s just around the corner.
The Guardian reports that OpenAI’s latest flagship model, GPT-5.2, cited Grokipedia nine times in response to more than a dozen questions, on topics ranging from political structures in Iran to British historian Sir Richard Evans. Gizmodo was also able to elicit responses from ChatGPT that cited Grokipedia when asking similar questions.
Musk launched Grokipedia last October as an alternative to Wikipedia, one in which humans are taken out of the editing loop. In a post in September, Musk said Grokipedia would be “a massive improvement over Wikipedia.” He has also repeatedly derided Wikipedia as “Wokipedia” and complained that there is no major alternative aligned with right-wing views.
His solution was to create a new platform with articles generated by AI. Much of Grokipedia’s content appears to be adapted from Wikipedia, but with framing that often favors Musk’s political views.
For example, Grokipedia describes the events of January 6, 2021, as a “riot” at the U.S. Capitol, which saw “supporters of outgoing President Donald Trump protest the certification of the 2020 presidential election results.” Wikipedia, by contrast, calls it an “attack” carried out by a mob of Trump supporters in what it describes as an attempted self-coup.
Additionally, Grokipedia labels Britain First as a “far-right British political party that advocates for national sovereignty,” while Wikipedia describes it as a neo-fascist political party and hate group.
Grokipedia also takes a softer framing regarding the so-called Great Replacement theory, which claims that white people are being systematically replaced through a concerted breeding effort perpetrated by other races. Wikipedia explicitly labels the idea a conspiracy theory. Musk is an outspoken proponent of the conspiracy theory and regularly comments on “white genocide.”
In general, Grokipedia is designed to churn out unverified information at an industrial scale, with no human editors debating the quality of what it publishes.
Now, Grokipedia appears to be insidiously bleeding into other chatbots. The Guardian noted that ChatGPT did not cite Grokipedia when asked about topics on which the site was known to promote misleading information; instead, Grokipedia showed up only in responses about more obscure subjects.
The issue does not appear to be isolated to ChatGPT. Some users on social media have reported that Anthropic’s Claude has also referenced Grokipedia in its answers.
OpenAI and Anthropic, the company behind Claude, did not immediately respond to requests for comment from Gizmodo. However, OpenAI told The Guardian that its model “aims to draw from a broad range of publicly available sources and viewpoints.”
“We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations,” an OpenAI spokesperson told The Guardian.
Researchers have previously warned about malicious actors flooding the internet with AI-generated content in an effort to influence large language models in a process sometimes referred to as LLM grooming. But the risks go beyond intentional misinformation campaigns.
It’s not entirely clear whether human users are visiting Grokipedia at all. Weeks after the site’s launch last year, data aggregator Similarweb reported that Grokipedia had fallen from a high of 460,000 web visits in the U.S. on Oct. 28 to about 30,000 daily visitors. Wikipedia, by comparison, routinely racks up hundreds of millions of pageviews per day. Many have speculated that Grokipedia isn’t really for humans anyway; it exists to poison the well for future LLMs.
Over-relying on AI-generated content can also lead to what researchers call model collapse. A 2024 study found that when large language models are increasingly trained on data produced by other AI systems, their overall quality degrades over time.
“In the early stage of model collapse, first models lose variance, losing performance on minority data,” researcher Ilia Shumailov told Gizmodo at the time. “In the late stage of model collapse, [the] model breaks down fully.” As models continue training on less accurate and less relevant text they’ve generated themselves, that loop causes outputs to degrade and eventually stop making much sense at all.
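That early loss of variance can be illustrated with a toy experiment (a minimal sketch, not the study’s actual setup): repeatedly fit a simple one-dimensional Gaussian “model” to a small sample drawn from the previous generation’s model. The spread of the fitted distribution tends to shrink generation after generation, a rough analogue of a language model forgetting rare, “minority” data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: a wide Gaussian standing in for real, human-written data.
mean, std = 0.0, 1.0

for generation in range(1, 51):
    # Each new "model" is trained only on a small synthetic sample
    # produced by the previous generation...
    samples = rng.normal(mean, std, size=20)
    # ...and simply memorizes that sample's mean and spread.
    mean, std = samples.mean(), samples.std()
    if generation % 10 == 0:
        print(f"generation {generation:2d}: std = {std:.3f}")

# The fitted spread tends to drift toward zero across generations: the chain
# of models progressively loses the tails of the original distribution,
# a toy version of "losing performance on minority data."
```

The mechanism is the same one Shumailov describes, just stripped down: each generation sees only a finite, slightly narrower slice of what came before, and the errors compound rather than cancel out.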