Call me crazy but I think it would be a big mistake if we stop reading and writing. You’re possibly thinking: “Reading. Ugh. I can’t believe I was ever tricked into doing that. And – writing. Double ugh. What am I, a medieval monk?”
But bear with me. Writing was the first information technology. Literacy has already been damaged hugely by the addictive nature of algorithmic social media. People aren’t reading as much as they used to. Researchers at the University of Florida and University College London recently found a 40 per cent drop in the share of Americans who read for pleasure between 2004 and 2023. There’s increasing evidence from teachers and lecturers that many young people struggle when assigned a whole book to read. An OECD study published last year found only 9 per cent of Irish adults surveyed were able to properly parse and analyse long, dense texts.
Now, having degraded our ability to read, the tech giants behind generative AI services such as ChatGPT, Claude and Gemini also think we should be writing less. There’s a big push to use LLMs (large language models, the most commonly used form of AI) in the workplace and here are some of the things that AI trainers are suggesting office workers use them for: writing emails, writing first drafts, editing last drafts, synopsising the reports and emails of others, creating PowerPoint presentations, “brainstorming”. You know, thinking.
The general implication of all this is that much of the writing we do is an awful waste of our time. Socrates would be pleased. He never liked the idea of literacy in the first place. He thought it was bad for the memory. He felt we should carry all our wisdom in our heads and not be storing it on those newfangled papyrus scrolls (that said, I only know that about Socrates because Plato wrote it down). Look, I’m biased. I’m a writer. I enjoy reading and I enjoy writing. I even enjoy writing emails. And because I do this for a living (for now), I’ve also thought a lot about what happens when someone writes.
The thinking behind AI-related work in general seems to be built around a key misunderstanding. AI boosters seem to think that when people write they are directly transferring thoughts and ideas that exist pristinely in their heads directly into language. They see the effort of writing as something that gets in the way of this pure process. They see writing as a transparent process of transference, one that can be mimicked by giving a few instructions to a machine.
Maybe that’s true for a tiny subset of geniuses but it is not what’s happening in most cases. For most people, writing is a transformative activity and the effort is the point. Even if we feel we have a clear perspective in our heads, when we try to put it on paper we generally realise our great idea is just a bunch of slightly contextless impressions, impulses, factoids and biases, and they don’t always cohere in the ways we imagined when they were unwritten. It’s a mess, essentially. Writing forces people to resolve internal contradictions and confront their own bullshit. It’s why it’s hard. It’s why it’s beautiful. It’s why it leads to insight. People write to communicate but also to figure out what they think.
Asking the machine to write a first draft based on some bullet points is asking the machine to resolve your contradictions for you. First drafts are where most of the hard thinking happens. They’re the scaffolding for everything that follows. If the machine works out the connections for you, it is locking you into pre-existing frameworks of thought worked out by someone else. You are basically asking it to think in your place.
Similar things happen if you ask it to read for you. If you use the AI to synopsise documents instead of reading them it will miss important connections your very particular brain will make by engaging with the material directly. Deep reading one document, I really believe, will be more useful to you than speed reading AI synopses of 100 documents.
Each time you struggle to explain your reasoning in an email or a report or read someone else’s emails or reports you refine your thinking. It doesn’t matter if an AI written document is technically “better” than one you might write yourself. The point of writing is to think and the document is then proof of thought.
Something problematic also happens when you use the machine to “brainstorm”. Because LLMs are sycophantic by design, they tend to reflect the user’s world view back at them, just in heightened language, reinforcing existing biases and then complimenting the user into feeling like an insightful genius. Yes, I know that’s probably true when it comes to you, but think of your dumbest and laziest colleague. The machine also makes him feel like an insightful genius.
Consequently, using these super supportive LLM chatbots is an addictive process. There’s evidence that their unpredictability functions, psychologically, like a slot machine and that prompting an AI system gives people a dopamine hit much like gambling. A recent survey of 160,000 workers found that those who used AI were spending a lot more time on corporate busywork and were doing 9 per cent fewer complex, focused problem-solving tasks than those who weren’t using AI.
Goldman Sachs released a report last month that found that, despite vast investment in and uptake of AI tools, there had been no increase in productivity beyond two specific areas (software development and customer support). I suspect the reason for this is that for many people these machines become perpetual procrastination engines that create the illusion of productivity. People expand documents from bullet points and then the recipients turn those documents back into synopsised bullet points, as the cartoonist Tom Fishburne has famously joked. They send more emails. They ask AI agents to synopsise the emails they are sent instead of reading them.
We will soon be drowning in reams of documentation that nobody is properly reading and nobody is properly writing and those will sit in our servers, rife with unchecked, possibly hallucinated information. The writer Cory Doctorow likens it to asbestos in the walls of every business.
There is also increasing evidence that working with AI a lot leads to a “cognitive debt”. That’s fancy terminology for: it makes us dumber. Studies suggest that while using LLMs can speed up work, the user’s understanding of that work is degraded and their ability to analyse and break down the process after the fact is compromised. Meanwhile their confidence in doing the work independently is undermined by the relative fluency of the machine. (For the record, I believe your clumsiest thoughts are worth more than the machine’s fluency; there are plenty of great writers who couldn’t punctuate to save their lives. AI is faster than you. It’s not better.)
I suspect that, far from AI early adopters being given an advantage, it’s actually people who keep their thought processes clear of AI who will do best in the coming dystopia. They’re the people who will eventually be hired by corporations to sift through all the accumulated digital garbage because their AI-addicted employees will no longer be able to read or write properly. Many people I speak to who use chatbots at work tell me they worry about their effect on how they think (this is why, in a recent Ezra Klein podcast, Anthropic co-founder Jack Clark said he plans to encourage his children to keep journals). It reminds me, already, of how people began talking about social media after the first flush of enthusiasm had passed.
Once again the tech companies are stealing from us. They’ve already stolen the calm that comes with reading a book and replaced it with the agitation of an ever-changing algorithmic feed. Now they are stealing our thoughts and selling them back to us via LLM technology, and they are, ultimately, stealing our ability to engage with our work and our colleagues and our ability to think.
Socially there’s a kind of moral doublethink going on. There have been scandals already involving AI-written articles and AI-written novels. (The publisher Hachette recently pulled a book called Shy Girl over concerns that it was at least partially written with the help of AI.) People see this sort of use of the tech as potentially plagiaristic and fraudulent and substandard and inhuman.
It’s strange to me that we feel it’s wrong to use AI in these circumstances but that it’s okay to use it for reports and emails and other things to which we put our names. I suspect it’s because, in our hearts, many of us know that a lot of this corporate busywork probably shouldn’t exist in the first place. However, instead of using this as an opportunity to reform the workplace, a lucrative conspiracy of silence means that bureaucratic bullshit perpetuates itself instead.
I don’t think we should be so eager to give up opportunities to think or to communicate. I have a catastrophic mindset. It’s not hard for me to see a world in which most people, having become “post-literate”, become over time “post-verbal”. I mean, why not? If friction-averse, intimacy-fearful young people become used to outsourcing their thoughts in emails and texts, why not simply have their AI agents speak for them too? That’s just one step further, really.
We need to keep reading and writing, even for the “boring” stuff. LLMs do not think and they do not connect, though they do a good impression of both. They are ultimately very sophisticated pieces of sentence-completion technology. They are good at grammar and syntax, but they’re pretty mediocre writers. As well as being prone to errors and hallucinations and contradictions and repetitions, LLMs have no theory of mind, so they can’t imagine and empathise with a reader and organise information accordingly.
To paraphrase Eric Morecambe, they say all the right words, but not necessarily in the right order. And language is ultimately how we build community with one another. A world of LLM communication is a colder, blander one devoid of incidental human empathy and original thought. AI use will make us all stupider and it will make us feel more alone. Use your words, people. At the end of the day, they’re all we’ve got.