{"id":96867,"date":"2025-10-22T11:30:08","date_gmt":"2025-10-22T11:30:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/96867\/"},"modified":"2025-10-22T11:30:08","modified_gmt":"2025-10-22T11:30:08","slug":"how-employers-can-prevent-ai-work-slop","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/96867\/","title":{"rendered":"How employers can prevent AI \u2018work slop\u2019"},"content":{"rendered":"<p>Describing work as slop and sludge is not the ideal feedback. But the terms are a warning to employers of the risks and limitations of content generated by artificial intelligence.<\/p>\n<p>\u201cWork slop is a new form of automated sludge in organisations,\u201d says Andr\u00e9 Spicer, author and dean of Bayes Business School. \u201cWhile old forms of bureaucratic sludge like meetings or lengthy reports took time to produce, this new form of sludge is quick and cheap to produce in vast quantities. What is expensive is wading through it.\u201d<\/p>\n<p>Many executives are championing new AI tools that help their staff to synthesise research, articulate ideas, produce documents and save time \u2014 but at times the technology may be doing the opposite.<\/p>\n<p>Deloitte this month announced it would be partially refunding the Australian government for a report it produced that contained mistakes made by AI, demonstrating the risks for professional services companies.<\/p>\n<p>The potential harm is not only external \u2014 to corporate reputations \u2014 but also internal, as poor AI-generated content can result in bloated reports with mangled meanings and excessive verbiage, creating extra work for colleagues to decipher.
<\/p>\n<p>While AI significantly decreases the effort to put pitches and proposals together, it does not \u201cequally decrease the costs of processing this information\u201d, adds Spicer.<\/p>\n<p>Michael Eiden, managing director at Alvarez &amp; Marsal\u2019s digital technology services, says: \u201cThe accessibility of generative AI has made it easier than ever to produce work quickly \u2014 but not necessarily to the highest standard.\u201d<\/p>\n<p>A recent report by BetterUp, the coaching platform, and Stanford Social Media Lab found that, on average, US desk-based employees estimate 15 per cent of the work they receive is AI work slop.<\/p>\n<p>The emerging problem heightens the need for clear policies and increased monitoring of AI\u2019s use, as well as staff training.<\/p>\n<p>The Financial Reporting Council, the UK accountancy regulator, warned in the summer that the Big Four firms were failing to monitor how automated tools and AI affected the quality of their audits, even as firms escalate their use of the technology to perform risk assessments and obtain evidence. Last week, one of the professional accountancy bodies issued a report on AI\u2019s ethical threats \u2014 such as fairness, bias and discrimination \u2014 to finance professionals.<\/p>\n<p>Meanwhile, the UK High Court has called for the legal profession to be vigilant after two cases in which lawyers thought to have used AI submitted written legal arguments and witness statements containing false information, \u201ctypically a fake citation or quotation\u201d.<\/p>\n<p>\u201cFirms shouldn\u2019t simply hand employees these tools without guidance,\u201d says Eiden. \u201cThey need to clearly define what good looks like.\u201d<\/p>\n<p>A&amp;M is developing practical examples and prompt guides to help staff use AI responsibly and effectively.
\u201cFor high-stakes work\u201d, says Eiden, \u201chuman review remains non-negotiable \u2014 the technology can assist, but it should never be the final author.\u201d<\/p>\n<p>James Osborn, group chief digital officer at KPMG UK and Switzerland, agrees, stressing the importance not just of staff verifying the accuracy of the content but also of \u201csuitable governance processes\u201d to ensure the technology is being used appropriately.<\/p>\n<p>It is not just AI\u2019s ability to help with the substance of employees\u2019 work that is under scrutiny, but also administrative tasks, including scheduling meetings and taking notes, according to a report by Asana. It highlighted workers\u2019 complaints of AI agents sending false information and forcing teams to redo tasks, adding to their workload.<\/p>\n<p>Where employers do not set out a clear policy on AI\u2019s use in the workplace, staff may use it on the sly. A report by Capgemini this year found that 63 per cent of software developers were using unauthorised tools, which can have serious ethical and security implications, such as sharing company data.<\/p>\n<p>It is not only ethics and errors that are a problem but also the demands on staff to identify and fix \u201cwork slop\u201d, a term coined this month by researchers to describe \u201cAI-generated work content that masquerades as good work but lacks the substance to meaningfully advance a given task\u201d. Resulting content can be \u201cunhelpful, incomplete, or missing crucial context about the project at hand\u201d, they wrote in a piece published in the Harvard Business Review.
This means the receiver may have to \u201cinterpret, correct, or redo the work\u201d.<\/p>\n<p>Kate Niederhoffer, social psychologist and vice-president at BetterUp Labs, a research arm of the coaching service, and one of the report\u2019s authors, insists employees are not creating work slop for \u201cnefarious\u201d reasons but typically because they \u201chave so much work to do\u201d. Dividing users broadly into two mindsets, she describes \u201cpilots\u201d as those who are curious about AI, using it to augment their capabilities rather than replace them, and \u201cpassengers\u201d as those who are begrudging, burdened by work, and who use AI to buy themselves more time. \u201cOne of the reasons people are creating work slop may be the result of too few people, everything feeling urgent and important.\u201d<\/p>\n<p>Niederhoffer urges managers to give staff support, and to be clear about the likely effect of poor work on colleagues.<\/p>\n<p>Clarity about the purpose and use of AI is key, says Joe Hildebrand, managing director of talent and organisation at Accenture. \u201cWhen you clearly understand the tangible and specific value AI can bring to your context, you are better able to design and deploy tools that create meaningful impact, not just noise.\u201d<\/p>\n<p>Mark Hoffman, head of Asana\u2019s Work Innovation Lab, advocates four core foundations for AI use, starting with guidelines that balance legal, IT and security concerns with practical business needs. He also recommends training that goes beyond the technical skills of prompt writing to teach softer delegation skills; accountability rules that clarify who is responsible when things go wrong; and quality control standards that prioritise accuracy and error tracking alongside efficiency. \u201cThe goal is not just to figure out what behaviours to prevent, but what behaviours to empower and enable.\u201d<\/p>\n<p>Hildebrand stresses the importance of \u201creversibility\u201d.
\u201cEvery AI deployment should include a human override or kill switch. Monitoring how often humans reverse AI decisions and using those insights to improve the system can enhance trust.\u201d<\/p>\n<p>As AI automates more work processes, manual input will become increasingly critical, some experts say. Spicer observes that more universities are asking students to sit a written exam or give a verbal presentation instead of making an electronic submission. \u201cIt is likely firms will increasingly rely on analogue input and processes to make high-stakes decisions.\u201d<\/p>\n<p>Stuart Mills, assistant professor of economics at Leeds University, believes managers have become swept up by \u201cthe excitement of AI and immediateness of the results\u201d and distracted from \u201casking big questions about organisations and productivity\u201d.<\/p>\n<p>The tendency is to measure knowledge work output by lines of code or numbers of reports, he says, which can create \u201can illusion of productivity\u201d.<\/p>\n<p>He suggests: \u201cManagers need to ask, \u2018What do we do to create value? And can we use AI in our current structure, or do we need to change our structure?\u2019 I don\u2019t see those questions being asked.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"Describing work as slop and sludge is not the ideal feedback.
But the terms are a warning to&hellip;\n","protected":false},"author":2,"featured_media":96868,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,218,219,61,60,80],"class_list":{"0":"post-96867","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ie","12":"tag-ireland","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/96867","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=96867"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/96867\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/96868"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=96867"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=96867"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=96867"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}