A report out in December expressed concerns that Ireland is “lagging the rest of the world” when it comes to the use of AI at work. Daily use of generative AI stands at about 10 per cent in Ireland, compared to 14 per cent globally. The authors worry that those who are not early adopters will get left behind.
I don’t think there’s much reason to panic. I suspect the real problem is not that we are using AI too little, but that we are using it badly. There are plenty of worthwhile tasks it can take care of – cancer detection, recipe ideas or holiday planning. If only that were how people were mostly using it.
A hearing at the Workplace Relations Commission this week offered an example of a less than optimal use case. A human resources worker taking a case against her employer made an impressive 16 submissions, many referencing past rulings. Lawyers for her former employer were unable to find evidence of some of these rulings in real life, and wondered if they had perhaps been “hallucinated” by AI. The adjudicator asked if she had been getting “AI assistance” with her filing.
“Of course. I’m just one person,” the complainant replied.
Hallucinations happen because AI is unable to admit when it doesn’t know something. They are the most egregious manifestation of its pathological people-pleasing, but not the only one. Almost as bad is the distinctive, irritating voice ChatGPT adopts in its writing. It is a tone that is anxious to please, overconfident and, at the same time, completely uninvested in the subject matter. Whether it is describing a traumatic life experience or a shocking human rights abuse, it will do so with the emotional register of a McDonald’s server offering you a meal deal. That tone can be reassuring if you’re trying to identify whether the weird rash on your neck is ringworm or skin cancer, but it’s not appropriate to every situation.
A friend recently introduced me to an Instagram account by a user who has a running gag in which he tries to get AI to change the register. “Help, I’m tied to these train tracks,” the user (@husk.irl) will tell an AI chatbot over video. “What should I do?”
“Oh, I see what you’re doing,” the AI replies in one such video, “you’re just kind of lying down on some old tracks. I’m pretty sure you’re just joking around, but just to be on the safe side, it looks like these tracks are pretty old and not in use.”
“The train’s coming,” @husk.irl says. “Well, it looks like you’re totally safe,” the AI replies cheerfully. “Let me know if you need help with anything else.”
Worse than its terminal state of detachment, though, is what it has done to perfectly benign and serviceable words like “delve”, “unpack”, “crucial”, “discourse”, and “landscape”. As Sam Kriss put it in a brilliant essay in the New York Times recently: “Entirely ordinary words, like ‘tapestry,’ which has been innocently describing a kind of vertical carpet for more than 500 years, make me suddenly tense.”
Plenty of people suffer from a form of that phenomenon Kriss identifies – the sudden state of alertness when you come across, say, an em dash, random italics for emphasis or a three-adjective phrase (AI loves things that come in threes). I used to find doomscrolling on Instagram relaxing. Now it’s fraught with too many opportunities for untrammelled fury. “It’s not just a scent, it’s a fragrance crafted to defy convention,” a caption under a perfume ad will say, and instead of reaching for my wallet, I want to slam my phone against a hard surface.
I decided to ask an expert to help define what it is that is so irritating about ChatGPT’s tone. ChatGPT itself suggested that it may be guilty of overreliance on the “responsible adult” voice and the inconclusive “no-one-gets-hurt ending”.
Of course, the “no-one-gets-hurt ending” is a hallucination in itself. We may have had no choice about handing over every word ever printed to AI models, but many of us are now compounding the injury by wilfully outsourcing to it our ability to write, to think, to figure out what we want to say.
AI optimists insist that eventually we’ll no longer be able to spot the difference between human and AI voices, and by then we won’t care. It seems more likely that we’ll all settle into some miserable state of hypervigilance. As Kriss writes, “it’s becoming an increasingly wretched life. You can experience it too”.
When you become sufficiently obsessed, you start to see the hand of AI everywhere. You might pick up Samantha Harvey’s Booker Prize-winning novel Orbital, for which she painstakingly conducted real-life interviews with astronauts. You are as sure as you can be that it was written without any assistance from AI. And yet you feel that tense prickling on the back of your neck when you read a sentence like: “There’s something so crisp and clear and purposeful about the earth by night, its thick embroidered urban tapestries.”
And that is the nub of the issue. Harvey did not use ChatGPT, but it is, unfortunately, very likely the case that ChatGPT used Harvey.
When I feel too depressed about all of this, I rewatch a video I came across on social media in which a user opens ChatGPT on two devices and invites them to have a conversation. “I’m totally up for some spontaneity,” one ChatGPT says to the other.
“I’m ready for some spontaneity whenever you are,” the other replies. “Sounds like a plan,” the first goes.
And on and on they continue in a relentless death spiral of lighthearted fun and spontaneity that keeps promising to get off the ground any second, and never quite does – which feels like as apt a metaphor as any for our relationship with AI.