{"id":14836,"date":"2025-07-23T01:54:07","date_gmt":"2025-07-23T01:54:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/14836\/"},"modified":"2025-07-23T01:54:07","modified_gmt":"2025-07-23T01:54:07","slug":"hard-labour-conditions-of-online-moderators-directly-affect-how-well-the-internet-is-policed-new-study","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/14836\/","title":{"rendered":"Hard labour conditions of online moderators directly affect how well the internet is policed \u2013 new study"},"content":{"rendered":"<p><a href=\"https:\/\/theconversation.com\/is-big-tech-harming-society-to-find-out-we-need-research-but-its-being-manipulated-by-big-tech-itself-240110\" rel=\"nofollow noopener\" target=\"_blank\">Big tech<\/a> platforms often present content moderation as a seamless, <a href=\"https:\/\/transparency.meta.com\/en-gb\/enforcement\/detecting-violations\/how-enforcement-technology-works\/\" rel=\"nofollow noopener\" target=\"_blank\">tech\u2011driven system<\/a>. But human labour, <a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2017\/03\/commercial-content-moderation\/518796\/\" rel=\"nofollow noopener\" target=\"_blank\">often outsourced<\/a> to countries such as India and the Philippines, plays a pivotal role in making judgements that involve understanding context. Technology alone can\u2019t do this.<\/p>\n<p>Behind closed doors, hidden human moderators are tasked with filtering some of the internet\u2019s most harmful material. They often do so with <a href=\"https:\/\/www.theguardian.com\/technology\/2019\/sep\/17\/revealed-catastrophic-effects-working-facebook-moderator\" rel=\"nofollow noopener\" target=\"_blank\">minimal mental health support<\/a> and under strict non-disclosure agreements. <\/p>\n<p><a href=\"https:\/\/www.theguardian.com\/news\/2017\/may\/25\/facebook-moderator-underpaid-overburdened-extreme-content\" rel=\"nofollow noopener\" target=\"_blank\">After receiving vague training<\/a>, moderators are expected to <a href=\"https:\/\/www.wired.com\/2014\/10\/content-moderation\/\" rel=\"nofollow noopener\" target=\"_blank\">make decisions within seconds<\/a>, keeping in mind a platform\u2019s constantly changing content policies and ensuring <a href=\"https:\/\/www.theverge.com\/2019\/2\/25\/18229714\/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona\" rel=\"nofollow noopener\" target=\"_blank\">at least 95% accuracy<\/a>.<\/p>\n<p>Do these working conditions affect moderating decisions? To date, we don\u2019t have much data on this. 
In a new [study published in New Media & Society](https://journals.sagepub.com/doi/10.1177/14614448251348900), we examined the everyday decision-making process of commercial content moderators in India.

Our results shed light on how moderators' employment conditions shape the outcomes of their work. Three key arguments emerged from our interviews.

## Efficiency over appropriateness

"Would never recommend de-ranking content as it would take time."

— A 28-year-old audio moderator working for an Indian social media platform

Because moderators work under high productivity targets, they are compelled to prioritise content that can be handled quickly without drawing attention from supervisors.

In the excerpt above, the moderator explained that she avoided content and processes that required more time so she could maintain her pace. While observing her work over a screen-share session, we noticed that reducing the visibility of content (de-ranking) involved four steps, whereas ending live streams or removing posts required only two.

To save time, she skipped the content flagged to be de-ranked. As a result, content marked for reduced visibility, such as impersonations, often remained on the platform until another moderator intervened.

This shows how productivity pressures in the moderation industry can easily lead to problematic content staying online.

## Decontextualised decisions

"Ensure that none of the highlighted yellow words remained on the profile"

— Instructions received by a text/image moderator

Moderation work often involves automation tools that can detect certain words in text, transcribe speech, or use image recognition to scan the contents of pictures.

These tools are supposed to assist moderators by flagging potential violations for further judgement that takes context into account. For example, is the potentially offensive language simply a joke, or does it actually violate a policy?

In practice, we found that under tight timelines, moderators frequently follow the tools' cues mechanically rather than exercising independent judgement.

The moderator quoted above described instructions from her supervisor to simply remove any text detected by the software. During a screen-share, we observed her removing flagged words without evaluating the context.

Often, the automation tools that queue content and organise it for human moderators also detach it from the broader conversational context. This makes it even harder for the moderator to make a context-based judgement on content that gets flagged but was actually innocent, despite that judgement being one of the reasons human moderators are hired in the first place.

## Impossibility of thorough judgements

"If you guys can't do the work and complete the targets, you may leave"

— Message in the work group of a freelance content moderator

Precarious employment compels moderators to mould their decision-making processes around job security.

They adopt strategies that allow them to decide quickly and acceptably, and these strategies in turn shape their future decisions.

For instance, we found that over time, moderators develop a list of "dos and don'ts".
They may dilute expansive moderation guidelines into an easily remembered list of ethically unambiguous violations that they can apply quickly.

These strategies reveal how the very structure of the moderation industry impedes thoughtful decisions and makes thorough judgement impossible.

## What should we take away from this?

Our findings show that moderation decisions aren't shaped by platform policies alone. The precarious working conditions of moderators play a crucial role in how content gets moderated.

Online platforms can't put consistent and thorough moderation policies into place unless the moderation industry's employment practices improve too. We argue that effective content moderation is as much a labour issue as it is a policy challenge.

For truly effective moderation, online platforms must address the economic pressures on moderators, such as strict performance targets and insecure employment.

We need greater transparency around how much platforms spend on human labour in [trust and safety](https://5rightsfoundation.com/resource/advancing-trust-safety-systems-and-standards-for-online-safety-professionals/), both [in-house](https://www.techpolicy.press/its-past-time-to-take-social-media-content-moderation-in-house/) and outsourced. Currently, it's not clear whether platforms' investment in human resources is truly proportionate to the volume of content flowing through them.

Beyond employment conditions, platforms should also redesign their moderation tools. For example, integrating quick-access rulebooks, implementing violation-specific content queues, and standardising the steps required for different enforcement actions would streamline decision-making, so that moderators don't default to the fastest option just to save time.