
About three years ago, I started noticing that the grammar and spelling of my first-year students were markedly improving. “Oh good,” I said to myself, “they’re finally listening to my advice – always run a spell check before submitting your essay and if you’re unsure about correct usage, Grammarly is a useful tool.” Something of a tech dinosaur, I was not aware that a software tool that had begun as simply a corrective grammar checker had metamorphosed into a full-scale AI-powered writing assistant. And I had heard only vague rumours of vast language models known as LLMs and a new thing called ChatGPT that was being rapidly adopted around the world and touted as the first major step towards “generative artificial intelligence.”

My employer, Arizona State University, which prides itself on being consistently rated “No. 1 in the USA for Innovation”, soon set me to rights. In January 2024, it became the first higher education institution to partner actively with OpenAI, the makers of ChatGPT. We were given access to the latest version, which was not yet available to the wider public. And, importantly, every time we logged on, there was a tiny assurance at the foot of the page: “ChatGPT can make mistakes. OpenAI doesn’t use Arizona State University workspace data to train its models.” Furthermore, senior management quickly decided that there was no use pretending that LLMs didn’t exist or that students wouldn’t use them as shortcuts, so the best approach was to build teaching about their capacities and shortcomings into our courses. I began to have great fun getting students to create AI-generated projects and then criticising them in the classroom.

I also offered a few lessons on the pitfalls of using ChatGPT as a research tool. My very first attempt to do this myself was instructive. I was working on a paper about the representation of the extractive industries in the fiction of F Scott Fitzgerald (“The Diamond as Big as the Ritz” and the dustheap in The Great Gatsby) and I came across a reference in correspondence between Scott and Zelda to the magic fire music in Wagner’s Ring cycle – which George Bernard Shaw had interpreted as an allegory of 19th-century extractive capitalism. Wondering whether the Fitzgeralds had heard it on their gramophone, I asked Chat when the first recording was made. Having initially confused the magic fire music with the Ride of the Valkyries, it told me that a performance by the Leipzig Gewandhaus Orchestra conducted by Richard Strauss in February 1889 had been recorded onto wax discs. Wow, I thought. Strauss conducting Wagner! Never knew that – better have a quick check. And of course, despite the precision, it proved to be a total hallucination.

That said, LLMs are useful research tools – when I was completing my latest book, a global history of the garden in culture, art and literature, I wanted to find out about the history of the wheelbarrow. Using Claude AI as a glorified Google, I garnered 300 references within a few seconds, and a reasonable number of them were genuine. But, as I still have to stress to my students, if you do this you must always check that each result is not a hallucination.


A lesson, it seems, that a sometime university professor turned right-wing cheerleader appears not to have heeded. By relying on ChatGPT for his new book, the Reform party’s “intellectual guru” Matt Goodwin – or MattGPT as he will henceforth ever be known – has destroyed such credibility as he still had after losing the Gorton and Denton by-election. This was only the most embarrassing example of unacknowledged AI usage to have been unearthed in recent days. The publisher Hachette has pulled author Mia Ballard’s horror novel Shy Girl because it was detected as 78 per cent AI-generated (J. G. Ballard, the great literary prophet of our techno-saturated condition, will be smiling wryly in his grave at the undoing of his young namesake). And the Atlantic has published an article about how “Artificial intelligence seems to be turning up, undisclosed, in the opinion pages of major news publications,” citing an example from the much-read “Modern Love” column in, of all places, the New York Times. Meanwhile, in schools and on campuses around the world faculty are wrestling with the question of what to do about student usage.

At every level from school to postgraduate, students are getting the LLMs to do their thinking and writing for them. This semester I decided to confront the problem directly. I included a rubric with every written assignment:

Use of AI:

Generative Artificial Intelligence via Large Language Models (LLMs) such as ChatGPT and Claude AI is here to stay. It offers valuable tools for both research and writing, BUT its research is often unreliable, riddled with “hallucinations,” meaning that everything it says should be checked against primary sources, and its writing, though helpful as an editing tool, should never substitute for your own words – especially in a Humanities course, where developing the art of writing well and crafting a critical argument are core skills. Therefore, FOR EACH OF THE FOUR PRINCIPAL ASSIGNMENTS IN THIS COURSE, please include one of the following statements at the end of your assignment. Failure to include such a statement will be penalised in your grade. There are five options:

1. I did not use AI at all in preparing this assignment.

2. I used AI for my research, but not for my writing.

3. I drafted my work, then edited it with the assistance of AI.

4. AI wrote the draft of my work, then I edited it myself.

5. My work was done entirely by AI.

I will be very happy with 1 and 2 (though in the case of 2, make sure to check for hallucinations), quite happy with 3, not very happy with 4, and furious with 5.

The results, I am happy to report, were very encouraging. Because students were told the rules in advance, they took the hint about avoiding over-reliance on an artificial intelligence as opposed to their own. The AI usage statements were in almost all cases honest and students who shunned AI took pride in their work. There were also thoughtful requests for clarification: “Does using Grammarly count as having AI edit your words?” To which my response was, “Good question – when Grammarly is simply a spelling and grammar check, that does not, for me, class as AI usage. But if you are using the more advanced version that does significant rewriting, you should tick the box saying that you used AI as an editor.”

Administrators and some faculty – including me – are taking the view that in a future where so many graduate jobs will be taken by, or be dependent upon, AI, the pragmatic approach is to work with our pupils on approaches that use the new technology smartly and honestly – just as in a previous generation they had to learn how to use word processors, then Google searches and Matlab applications. But I sense that students themselves are revolting. One of my sons, a first-year student at Durham, says: “I’m shocked that you allow its use at all, Dad, given the cost to the planet of all those data centres. I refuse to touch it because I’ve gone to university to think for myself. And my friends are the same.” An excoriating editorial in the University of Pennsylvania newspaper makes the case against it with great eloquence:

“In 2024, Penn made history by becoming the first Ivy League school to launch a major in artificial intelligence. Gutting its systems engineering major, the University justified the replacement as a program that would “fit the AI-powered needs of the 21st century.” Since then, AI has become intertwined with obtaining a Penn education in completely unprecedented – and potentially dangerous – ways. Our AI course offerings have exploded. Penn now offers 10 undergraduate programs, 21 graduate programs, and eight doctoral programs in the field. But as this pattern speeds up, Penn’s commitment to AI innovation seems less like an enhancement to our learning and more like a detriment to our critical thinking abilities. In its tireless support for AI, the University has essentially endorsed shortcuts and the outsourcing of academic thinking, threatening the very freedom of inquiry and open expression it claims to promote…

“The irony is that as Penn pours endless money and energy into AI advancement in its attempt to get ahead, the University is only quickening its own demise. AI cannot coexist with education – it can only degrade it. As technology advances and workers are replaced by machines, schools are some of the only places we have left to explore and wrestle with human thought. With our own university leading the charge, AI is now corrupting those few sacred spaces and leaving us with nowhere to engage in true scholarship.”

The California court case in which Meta and YouTube have been found liable for inflicting mental harm via their addictive algorithms may prove to be the pivot that gives force to the backlash against social media. If some of my students’ comments are anything to go by, I suspect that something similar may be about to happen with AI. My favourite riposte was: “I am an artist, and abhor AI. Even if I write terribly, I’d rather it be my own writing. Any research found was taken from JSTOR.”

