A first person account of training AI for tech giants.

For 18 months, until September this year, I was a gig worker for a large data annotation company – one human among tens of thousands globally who train AI for tech giants.

We were called “taskers”, “contributors”, or freelancers. That third label was slippery: in many ways we acted and were treated like employees, but we didn’t get the same protections as employees. Wired and other publications have begun reporting on exploitative labour practices at AI companies. The stories keep coming, and so do the lawsuits.

A know-your-enemy impulse prompted me to apply. In early 2024, I was horrified and fascinated by generative AI – at its queasy-making attempts at art and the hollow positivity of chatbots. There was also its hungry disregard for copyright, privacy and data sovereignty; the sense it was “coming for our jobs”; and – in the media – doom-filled prophecies of super-intelligent AI escaping human control. (Even the billionaires developing AI had trumpeted that risk.)

It began with a mass recruitment message in my LinkedIn inbox: Want well-paid flexible work? The company was looking for postgraduates to train leading-edge AI models, and my profile was a great fit!

I thought, why not poke the beast and see what happens?

I sent them my CV and soon received: “Congratulations … you’ve been selected to participate in our advanced generalists program!” I was to verify my ID, watch an onboarding video, then “join our paid Enablement Program to set you up for project success!”

More onboardings and thrilled automated emails of congratulations followed. There were Slack channels to join, and long Google docs to read about how to rate AI models’ responses across categories like accuracy, instruction following, writing style and safety. Lastly, an online assessment.

I got US $350 (NZ $550–$600) for the onboarding, which took seven or eight hours all up – similar to an average freelance editor’s rate. 

My memories of my first days – spent staring into a screenful of Slack channels – have transmuted into something physical: I was milling around at some kind of huge, noisy station, with hundreds of others, all loudly confused about what to do and where to go. Then suddenly I was herded to a place where about 30 others were working and conversing more calmly. I’d been chosen, for some reason, to join “Dana’s squad”.

Our first project involved each trawling the internet for public-domain images, then thinking up prompts that would require a chatbot to interpret and reason about them. For example, you could ask for stats to be pulled from a timeline or a graph and then compared, or choose a simple puzzle, then ask for step-by-step instructions to solve it, or you could find a royal family tree and ask for relationships between particular members.

Lastly, we wrote our own ideal chatbot response to the prompt. This constituted a “task”. Sometimes you were asked for single-turn tasks (one prompt and response), and sometimes for multi-turn. Finding an image for a multi-turn task was often challenging – it needed enough detail to support two or three completely different questions, each building on the last without being repetitive.

Every task was reviewed and audited before going to the anonymous client who used it to train their AI model.

As soon as we began any task on the company platform, a timer ticked. I recall we initially had up to three hours for multi-turn tasks, but when you factored in finding the image, it was tight. Some of us started finding images first, unpaid.

Sometimes I’d hit submit with less than a minute to go, sweating, my heart racing. I wasn’t the only one. If you timed out before submitting (which sometimes happened simply because of a platform glitch), you could lose hours of work and wouldn’t be paid for it. At the same time, if you rushed a task and submitted “low quality”, you’d get bad reviewer scores. The company kept close track of our quality and speed metrics, and we were threatened with getting kicked off projects if they got too low.

Dana once told me over Zoom – reluctantly – that although my task quality was high, my speed was being looked on unfavourably. I wasn’t supposed to use all the available time. How time metrics worked or what I should work towards, I never found out.

We felt lucky to have Dana, though. She was genuinely lovely. She explained instructions as clearly as she could, facilitated conversations with empathy. We had regular calls with her, and her DMs were always open. She got us feeling like a team. 

We also banded together in our own private chats, trying to cope with the chaos. Instructions constantly changed – sometimes drastically, always without warning. Though Dana was frank, she was only one rung above us in the hierarchy, and often as much in the dark as we were. We only knew what the next person up told us, and it seemed to be a vertical hierarchy, with blindsides all the way down.

One day, our hourly rates suddenly dropped from US $40 to $35. Another, the company shortened the time we got to do tasks at full rate, and gave us some extra time to complete them at barely minimum wage. 

Then, one morning, just two months after I’d joined, I woke to a message from Ruth, my best mate there: “Dana’s been furloughed!”

Dana’s Slack account had been deactivated, but someone had her phone number. We texted messages of support, and Ruth rallied us to contact the helpdesk asking the company to bring her back. 

Of course, we were powerless. This is how it worked. Out with the old, in with the new. Affected people often believed their jobs were safe until the day they were let go. I still have a reply from Dana, saying she looked forward to finding work at a company that treated workers decently.

Over the following months, I got shifted from team to team, project to project. Promises that projects would last a few weeks were constantly broken – work I’d counted on could vanish within days of such an assurance, the project suspended indefinitely. My pay (like most people’s I knew) dropped by $5 increments until it was $25 an hour for a set number of minutes, then around $14 an hour for another stretch before you timed out. On many projects, company expectations meant you had no option but to continue on the lower rate, do unpaid prep, submit a low-quality task, or all three. And increasingly, training for new projects was unpaid – sometimes hours of it.

I knew I needed to leave, but I’d become dependent on the work. That’s the thing: so many people I’ve met who do this work really need it.

Late one night, I was undertaking yet another unpaid training, and suddenly, I couldn’t do it any more. My brain and body wouldn’t let me even read the instructions. So I stopped – a luxury not everyone has.

I’m still feeling the loss of the income, and some weeks I struggle, because yes, generative AI – and the general employment climate – is coming for my bread-and-butter editing work. But I’m free of the stress of working for this particular company, and of knowing I’m so directly contributing to the way the biggest AI companies are causing harm.

Over 18 months, I saw that it isn’t AI we should fear, but the handful of billionaires who control it and foist it on us at every turn. If anyone thinks these companies have humanity’s best interests at heart, I can tell you first-hand that their treatment of workers (not to mention the environment) shows the opposite. 

Rhetoric about the threat of AI itself is a misdirection. As author Karen Hao writes, the big AI companies are becoming the new empires. They want us to perceive AI as an unstoppable force – one we must urgently “embrace” or be “left behind”.

Elon Musk, Sam Altman (CEO of OpenAI, which makes ChatGPT), and other tech leaders blithely predict swathes of job losses while painting an idyllic picture of a future world without work. But many people are feeling the troubling effects of AI on their livelihoods right now. It’s musical chairs. As jobs become fewer, we’re told we must adapt well to AI to keep a seat. And what of everyone else? Some commentators see AI as, ultimately, a wage-depression tool, widening the wealth gap further.

Is all this really inevitable? Increasingly, people are fighting back – documenting worker stories and protests, resisting locally devastating data centres, trying to reshape AI policy in humane ways, and more.

But, in the face of the enormous wealth, political power and selfishness in Silicon Valley, nothing will change unless we believe it can.