Erin Harrington unpacks the frustration she felt at a Word Christchurch event aiming to demystify generative AI.
Literary festival sessions are beautiful things. They can be thought-provoking and informative. They can fill your heart. It’s not often that they leave you climbing the walls.
I spent much of last week neck deep in the wonderful Word Christchurch, and my last session was Dr Jo Cribb, with Susie Ferguson, talking about Cribb’s book, co-authored with David Glover, Don’t Worry about the Robots: How to Survive and Thrive in the New World of Work. Cribb is a respected business consultant, strategist and policy expert with a particular interest in gender equity. Here she was looking to demystify generative AI, using accessible metaphors to describe large language models (like a chef, making new things from pre-existing materials), and encouraging newbies to look fear in the face and give ChatGPT a go.
Susie Ferguson is a great interviewer and Jo Cribb is a generous and clear communicator, with an obvious commitment to making tricky concepts accessible for everyday people. Their conversation was a good primer, very vanilla, straight down the middle. What it wasn’t was an engagement with the technological and ethical complexity of GenAI, an issue in which there is no neutral ground. Within minutes I was bouncing in my seat in a state of agitation.
After the session I was so frustrated I vented in an Instagram story in the hope that others would agree with me and I’d feel better (always soothing, rarely effective). Fury wore off around lunchtime the next day, and I started to feel more circumspect – that I was maybe being a bit unfair.
Don’t Worry About the Robots is aimed at people anxious about the future of work in the age of technological change. Its 2024 iteration updates the 2018 original in light of the pandemic and the release of ChatGPT by OpenAI in late 2022. It offers evergreen advice about how to deal with change, tailored to a local audience, and supported by industry interviews: be proactive; stay abreast of change; learn new stuff; have a crack. It’s an optimistic growth mindset approach that will, ideally, keep you feeling grounded while the world shifts around you. Cribb did a good job of framing debates without overtly taking a position – although I’m sure she has one.
Susie Ferguson (left) and Jo Cribb (right) on stage at Word Christchurch. Photo: Erin Harrington
But I’m not sure that this approach was going down well in the moment, in the context of a festival devoted to creativity, ideas and stories. This was a strong session, in that it challenged the audience and introduced many to a new topic, and I think that’s savvy programming. But you could hear frustration in prickly questions from some members of the audience, many of whom were creatives, who wanted a more direct engagement with the assumptions underpinning the topic. Questions variously touched upon issues of bias, concerns about what AI is doing to young people’s brains, anger at the way large language models have been trained by hoovering up creatives’ work without consent or recompense, and worries that AI is incentivising us away from human interaction.
My visceral reaction to this particular framing, which certainly isn’t unique to this book, is that positioning generative AI as a disruptor is like calling the meteorite that wiped out the dinosaurs a “wee rock”. Disruptive technologies like the printing press, mass media and the internet have each reconfigured the social order, and our perceptions of time and space, particularly through their relationship to information. The GenAI “gold rush” goes one step further, offering helpful digital assistants and shaping our media diet while directly challenging our sense of meaning, identity and reality.
In the very few years since GenAI moved out of the lab and into widespread (and commercialised) public use through accessible natural language processing and multimodal models we’ve been racing down a dual track that is part miraculous vision for the future, part ethical nightmare.
It’s worth remembering that these tools are not neutral. The chatbots, apps, content creation tools and digital assistants most present in people’s everyday lives – and those most highly commercialised – were mostly created by a small clutch of American companies headed by digital oligarchs with a borderline religious belief in tech, who operate well beyond the reach of governments. It’s well documented that many are more interested in dreams of immortality, mega-warfare, alpha masculinity and hooning around the solar system (where there will be “completely new, exciting, super well-paid, super interesting job[s]”) than in confronting the many, many problems facing the planet right now. They overstate the benefits and often dismiss the risks. Generative AI is nominally positioned as addressing the energy crisis, the climate crisis, the misinformation crisis, privacy and information crises, and very many geopolitical crises, even while it directly contributes to many of them.
Soothing noises about checks and balances from tech companies – particularly regarding personal safety, ethics, bias, extremism and democratic protections – should be treated with the suspicion they deserve. The tech companies’ goals are not ours. They do what they want, and do not deserve our trust. And our government is doing a piss-poor job of staying abreast of this. Just this week, a group of New Zealand lawyers penned an open letter acknowledging AI’s opportunities, but demanding that the government move on regulation, for a horrifying list of reasons much longer than my word count allows. Generative AI seems a bit like one of those chimpanzees that rich people would adopt – all fun and games, until it rips your face off.
AI models can be used in remarkable ways to discover new cancer drugs and develop new diagnostic tools, to support decision-making and innovation, and do things formerly impossible. They can even just help businesses with small staff and tight margins (say, arts organisations) do some of the admin drudge so that people can be freed up to do their actual jobs – so long as the information being offered to those models doesn’t get shared or sold off.
All good. But what I find most unsettling about the current evangelical hype around the GenAI bubble is the disdain in which many in tech hold “the human” – something essential in the arts in particular.
The question with all of these tools is: what problem is this trying to solve? It’s not really comma splices, poorly phrased emails or the need for some quick uncanny valley images for our e-commerce insta. The real answer: people. (And often, not wanting to pay or deal with people.) I push back on the idea that the GenAI conversation is just about tools and tech as “co-workers” – technology as extension and prosthesis, noting one person’s valid use is another’s epic waste of kilowatts.
My macro concern – beyond the enshittification of everything – is that humans are framed as slow, inefficient meat sacks crying out for optimisation.
Like the futurist transhumanists of the 1990s and 2000s, many GenAI companies frame people as inherently deficient. It’s telling that the ChatGPT add-in that keeps trying to install itself in my word processor (which is already riddled with GenAI) offers “Your voice. Just better.” (Note: “better” is code for bland and corporate.) You don’t need to work on messy human relationships, you can have AI friends or get GenAI to do your job interview for you.
I work at a university, and see education being a very human, relational endeavour. Nonetheless, the most vehement, metric-led GenAI edu-topians seem to want to solve the “inefficient people” problem by getting us to research and teach using AI, write our assessments with AI, then get students to submit work produced by AI after being tutored by AI, before having AI models grade it all, leaving us mere smooth-brained mortals to, I don’t know, go to the pub?
The digital revolution promised a paperless office and two-day weeks, but we’re working harder than ever. Ten dollars says this all just makes more work.
Don’t Worry About the Robots emphasises that the two most important things to consider right now are values and choice. The former I wholeheartedly agree with. This is a reckoning, and that takes introspection. But GenAI is permeating just about everything we encounter, largely without our knowledge, control, or even consent. When the current climate is “get on board or get left behind”, I question what choices we have, and under what conditions.
I really hate feeling like a doomer and a refusenik, especially about technology. I’ve generally been an early adopter, and I usually think the glass is half full. But right now, between GenAI and the damaging impact of opaque algorithms on everyday life, it feels like we’re the band on the Titanic, and unless something changes quickly our only real choice is which instrument to play as the iceberg bears down.