If it feels as though the world is increasingly populated by people who speak with the polished, slightly verbose enthusiasm of a corporate press release, you may not be imagining things.
According to a new study, the rapid uptake of large language models (LLMs) such as ChatGPT, Claude and Gemini for writing prose and other tasks is nudging tens of millions of us towards sounding — and perhaps even thinking — alike.
The study, published in the journal Trends in Cognitive Sciences, is among the first to pull together evidence on the effects of widespread LLM use. It suggests we are entering an age of “homogenisation” and “cognitive flattening” where humanity risks becoming more predictable and less imaginative.
According to Zhivar Sourati, a computer scientist at the University of Southern California and one of the paper’s authors, some of the clearest signs appear in writing.
The “tells” of AI-edited prose can be easy to spot, often involving an overuse of dashes and a fondness for certain words, such as “quietly”.
Since ChatGPT was released in late 2022, researchers have also noticed hints of writing becoming more uniform. One study of Reddit posts, news articles and academic papers found that a statistical measure of stylistic diversity fell by 20 per cent after the introduction of AI-assisted editing. People who have never met are increasingly using the same tone, vocabulary and complexity of language.
“The stylistic individuality is flattened,” Sourati said.
The same flattening may extend to imagination. While AI tools appear to make individuals more productive, they may also make groups less inventive overall.
Experiments involving brainstorming illustrate the point. Individuals using an LLM typically generate more ideas than those working alone. But across a group, the ideas produced tend to become less diverse.
There are hints of a similar pattern in science. “The adoption of AI in research appears to present a paradox where individual scientists’ reach expands, but collective scientific exploration contracts,” Sourati said.
“AI-augmented work tends to move toward areas richest in existing data, automating established fields rather than opening new ones.”
Neuroscience hints at deeper changes. In one recent study, participants’ brain activity was monitored while they wrote essays. Those using ChatGPT showed lower engagement with the task than those writing unaided or using traditional search engines. They also remembered less of what they had written.
Sourati and his co-authors stress that not all standardisation is bad. Clearer language can, they say, improve communication.
But they fear the costs if diversity of thought is diminished. Research in economics and psychology has shown that groups made up of people who think differently often outperform groups of highly capable individuals who all approach problems in the same way.
“Plato worried that writing would weaken memory, and more recently people raised concerns about the internet offloading knowledge externally,” Sourati said.
“And those concerns weren’t entirely unfounded. But with each of those earlier technologies, people still had to actively [absorb information] and apply what they encountered. You read a book, you search the internet, but you are still doing the cognitive work of processing, interpreting, and integrating it in your own way. What is different with LLMs is that they generate the reasoning and the articulation for you.”
Hundreds of millions now interact with the same handful of language models. “This is a fundamentally different kind of cognitive extension,” he added. “One where the boundary between external tool and internal thought becomes genuinely blurred.”