{"id":153866,"date":"2025-11-22T14:45:07","date_gmt":"2025-11-22T14:45:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/153866\/"},"modified":"2025-11-22T14:45:07","modified_gmt":"2025-11-22T14:45:07","slug":"meet-the-ai-workers-who-tell-their-friends-and-family-to-stay-away-from-ai-artificial-intelligence-ai","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/153866\/","title":{"rendered":"Meet the AI workers who tell their friends and family to stay away from AI | Artificial intelligence (AI)"},"content":{"rendered":"<p class=\"dcr-130mj7b\">Krista Pawloski remembers the single defining moment that shaped her opinion on the ethics of <a href=\"https:\/\/www.theguardian.com\/technology\/artificialintelligenceai\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">artificial intelligence<\/a>. As an AI worker on Amazon Mechanical Turk \u2013 a marketplace that allows companies to hire workers to perform tasks like entering data or matching an AI prompt with its output \u2013 Pawloski spends her time moderating and assessing the quality of AI-generated text, images and videos, as well as some factchecking.<\/p>\n<p class=\"dcr-130mj7b\">Roughly two years ago, while working from home at her dining room table, she took up a job designating tweets as racist or not. When she was presented with a tweet that read \u201cListen to that mooncricket sing\u201d, she almost clicked on the \u201cno\u201d button before deciding to check the meaning of the word \u201cmooncricket\u201d, which, to her surprise, was a racial slur against Black Americans.<\/p>\n<p class=\"dcr-130mj7b\">\u201cI sat there considering how many times I may have made the same mistake and not caught myself,\u201d said Pawloski.<\/p>\n<p class=\"dcr-130mj7b\">The potential scale of her own errors and those of thousands of other workers like her made Pawloski spiral. How many others had unknowingly let offensive material slip by? 
Or worse, chosen to allow it?<\/p>\n<p class=\"dcr-130mj7b\">After years of witnessing the inner workings of AI models, Pawloski decided to no longer use generative AI products personally and tells her family to steer clear of them.<\/p>\n<p class=\"dcr-130mj7b\">\u201cIt\u2019s an absolute no in my house,\u201d said Pawloski, referring to how she doesn\u2019t let her teenage daughter use tools like <a href=\"https:\/\/www.theguardian.com\/technology\/chatgpt\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">ChatGPT<\/a>. And with the people she meets socially, she encourages them to ask AI about something they are very knowledgeable in so they can spot its errors and understand for themselves how fallible the tech is. Pawloski said that every time she sees a menu of new tasks to choose from on the Mechanical Turk site, she asks herself if there is any way what she\u2019s doing could be used to hurt people \u2013 many times, she says, the answer is yes.<\/p>\n<p class=\"dcr-130mj7b\">A statement from <a href=\"https:\/\/www.theguardian.com\/technology\/amazon\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">Amazon<\/a> said that workers can choose which tasks to complete at their discretion and review a task\u2019s details before accepting it. Requesters set the specifics of any given task, such as allotted time, pay and instruction levels, according to Amazon.<\/p>\n<p class=\"dcr-130mj7b\">\u201cAmazon Mechanical Turk is a marketplace that connects businesses and researchers, called requesters, with workers to complete online tasks, such as labeling images, answering surveys, transcribing text or reviewing AI outputs,\u201d said Montana MacLachlan, an Amazon spokesperson.<\/p>\n<p class=\"dcr-130mj7b\">Pawloski isn\u2019t alone. 
A dozen <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/sep\/11\/google-gemini-ai-training-humans\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">AI raters<\/a>, workers who check an AI\u2019s responses for accuracy and groundedness, told the Guardian that, after becoming aware of the way chatbots and image generators function and just how wrong their output can be, they have begun urging their friends and family not to use generative AI at all \u2013 or at least trying to educate their loved ones on using it cautiously. These trainers work on a range of AI models \u2013 Google\u2019s Gemini, Elon Musk\u2019s Grok, other popular models, and several smaller or lesser-known bots.<\/p>\n<p class=\"dcr-130mj7b\">One worker, an AI rater with <a href=\"https:\/\/www.theguardian.com\/technology\/google\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">Google<\/a> who evaluates the responses generated by Google Search\u2019s AI Overviews, said that she tries to use AI as sparingly as possible, if at all. The company\u2019s approach to AI-generated responses to questions of health, in particular, gave her pause, she said, requesting anonymity for fear of professional reprisal. She said she observed her colleagues evaluating AI-generated responses to medical matters uncritically and was tasked with evaluating such questions herself, despite a lack of medical training.<\/p>\n<p class=\"dcr-130mj7b\">At home, she has forbidden her 10-year-old daughter from using chatbots. \u201cShe has to learn critical thinking skills first or she won\u2019t be able to tell if the output is any good,\u201d the rater said.<\/p>\n<p class=\"dcr-130mj7b\">\u201cRatings are just one of many aggregated data points that help us measure how well our systems are working, but do not directly impact our algorithms or models,\u201d a statement from Google reads. 
\u201cWe also have a range of strong protections in place to surface high quality information across our products.\u201d<\/p>\n<p>Bot watchers sound the alarm<\/p>\n<p class=\"dcr-130mj7b\">These people are part of a global workforce of tens of thousands who help chatbots sound more human. When checking AI responses, they also try their best to ensure that a chatbot doesn\u2019t spout inaccurate or harmful information.<\/p>\n<p class=\"dcr-130mj7b\">When the people who make AI seem trustworthy are those who trust it the least, however, experts believe it signals a much larger issue.<\/p>\n<p class=\"dcr-130mj7b\">\u201cIt shows there are probably incentives to ship and scale over slow, careful validation, and that the feedback raters give is getting ignored,\u201d said Alex Mahadevan, director of MediaWise at Poynter, a media literacy program. \u201cSo this means when we see the final [version of the] chatbot, we can expect the same type of errors they\u2019re experiencing. It does not bode well for a public that is increasingly going to LLMs for news and information.\u201d<\/p>\n<p class=\"dcr-130mj7b\">AI workers said they distrust the models they work on because of a consistent emphasis on rapid turnaround time at the expense of quality. Brook Hansen, an AI worker on Amazon Mechanical Turk, explained that while she doesn\u2019t mistrust generative AI as a concept, she also doesn\u2019t trust the companies that develop and deploy these tools. For her, the biggest turning point was realizing how little support the people training these systems receive.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWe\u2019re expected to help make the model better, yet we\u2019re often given vague or incomplete instructions, minimal training and unrealistic time limits to complete tasks,\u201d said Hansen, who has been doing data work since 2010 and has had a part in training some of Silicon Valley\u2019s most popular AI models. 
\u201cIf workers aren\u2019t equipped with the information, resources and time we need, how can the outcomes possibly be safe, accurate or ethical? For me, that gap between what\u2019s expected of us and what we\u2019re actually given to do the job is a clear sign that companies are prioritizing speed and profit over responsibility and quality.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Dispensing false information in a confident tone, rather than offering no answer when none is readily available, is a major flaw of generative AI, experts say. An audit of the top 10 generative AI models including ChatGPT, Gemini and Meta\u2019s AI by the media literacy non-profit NewsGuard revealed that the non-response rates of chatbots went down from 31% in August 2024 to 0% in August 2025. At the same time, the chatbots\u2019 likelihood of repeating false information <a href=\"https:\/\/www.newsguardtech.com\/wp-content\/uploads\/2025\/09\/August-2025-One-Year-Progress-Report-3.pdf\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">almost doubled from 18% to 35%<\/a>, NewsGuard found. None of the companies responded to NewsGuard\u2019s request for a comment at the time.<\/p>\n<p class=\"dcr-130mj7b\">\u201cI wouldn\u2019t trust any facts [the bot] offers up without checking them myself \u2013 it\u2019s just not reliable,\u201d said another Google AI rater, requesting anonymity due to a nondisclosure agreement she has signed with the contracting company. She warns people about using it and echoed another rater\u2019s point about people with only cursory knowledge being tasked with medical questions and sensitive ethical ones, too. \u201cThis is not an ethical robot. 
It\u2019s just a robot.\u201d<\/p>\n<p class=\"dcr-130mj7b\">\u201cWe joke that [chatbots] would be great if we could get them to stop lying,\u201d said one AI tutor who has worked with Gemini, ChatGPT and Grok, requesting anonymity, having signed nondisclosure agreements.<\/p>\n<p>\u2018Garbage in, garbage out\u2019<\/p>\n<p class=\"dcr-130mj7b\">Another AI rater who started his journey rating responses for Google\u2019s products in early 2024 began to feel 
he couldn\u2019t trust AI around six months into the job. He was tasked with stumping the model \u2013 meaning he had to ask Google\u2019s AI various questions that would expose its limitations or weaknesses. Having a degree in history, this worker asked the model historical questions for the task.<\/p>\n<p class=\"dcr-130mj7b\">\u201cI asked it about the history of the Palestinian people, and it wouldn\u2019t give me an answer no matter how I rephrased the question,\u201d recalled this worker, requesting anonymity, having signed a nondisclosure agreement. \u201cWhen I asked it about the history of Israel, it had no problems giving me a very extensive rundown. We reported it, but nobody seemed to care at Google.\u201d When asked specifically about the situation the rater described, Google did not issue a statement.<\/p>\n<p class=\"dcr-130mj7b\">For this Google worker, the biggest concern with AI training is the feedback given to AI models by raters like him. \u201cAfter having seen how bad the data is that goes into supposedly training the model, I knew there was absolutely no way it could ever be trained correctly like that,\u201d he said. 
He used the term \u201cgarbage in, garbage out\u201d, a principle in computer programming holding that if you feed bad or incomplete data into a system, the output will carry the same flaws.<\/p>\n<p class=\"dcr-130mj7b\">The rater avoids using generative AI and has also \u201cadvised every family member and friend of mine to not buy newer phones that have AI integrated in them, to resist automatic updates if possible that add AI integration, and to not tell AI anything personal\u201d, he said.<\/p>\n<p>Fragile, not futuristic<\/p>\n<p class=\"dcr-130mj7b\">Whenever the topic of AI comes up in a social conversation, Hansen reminds people that AI is not magic \u2013 explaining the army of invisible workers behind it, the unreliability of the information and how <a href=\"https:\/\/www.theguardian.com\/business\/article\/2024\/jul\/04\/can-the-climate-survive-the-insatiable-energy-demands-of-the-ai-arms-race\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">environmentally damaging it is<\/a>.<\/p>\n<p class=\"dcr-130mj7b\">\u201cOnce you\u2019ve seen how these systems are cobbled together \u2013 the biases, the rushed timelines, the constant compromises \u2013 you stop seeing AI as futuristic and start seeing it as fragile,\u201d said Adio Dinika, who studies the labor behind AI at the Distributed AI Research Institute, about people who work behind the scenes. \u201cIn my experience it\u2019s always people who don\u2019t understand AI who are enchanted by it.\u201d<\/p>\n<p class=\"dcr-130mj7b\">The AI workers who spoke to the Guardian said they are taking it upon themselves to make better choices and create awareness around them, particularly emphasizing the idea that AI, in Hansen\u2019s words, \u201cis only as good as what\u2019s put into it, and what\u2019s put into it is not always the best information\u201d. 
She and Pawloski gave a presentation in May at the Michigan Association of School Boards spring conference. In a room full of school board members and administrators from across the state, they spoke about the ethical and environmental impacts of artificial intelligence, hoping to spark a conversation.<\/p>\n<p class=\"dcr-130mj7b\">\u201cMany attendees were shocked by what they learned, since most had never heard about the human labor or environmental footprint behind AI,\u201d said Hansen. \u201cSome were grateful for the insight, while others were defensive or frustrated, accusing us of being \u2018doom and gloom\u2019 about technology they saw as exciting and full of potential.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Pawloski compares AI ethics to that of the textile industry: when people didn\u2019t know how cheap clothes were made, they were happy to find the best deal and save a few bucks. But as the stories of sweatshops started coming out, consumers had a choice and knew they should be asking questions. She believes it\u2019s the same for AI.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWhere does your data come from? Is this model built on copyright infringement? Were workers fairly compensated for their work?\u201d she said. \u201cWe are just starting to ask those questions, so in most cases the general public does not have access to the truth, but just like the textile industry, if we keep asking and pushing, change is possible.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"Krista Pawloski remembers the single defining moment that shaped her opinion on the ethics of artificial intelligence. 
As&hellip;\n","protected":false},"author":2,"featured_media":153867,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,218,219,61,60,80],"class_list":{"0":"post-153866","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ie","12":"tag-ireland","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/153866","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=153866"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/153866\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/153867"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=153866"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=153866"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=153866"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}