By: Charu A. Chandrasekhar, Avi Gesser & Patty (Debevoise & Plimpton Data Blog)
In this piece, authors Charu A. Chandrasekhar, Avi Gesser, and Virtual AI Associate "Patty" (Debevoise & Plimpton Data Blog) analyze "workslop," a newly identified problem in AI adoption. Now that enterprise-grade AI tools with robust cybersecurity controls have largely addressed confidentiality concerns, the authors argue, workslop has emerged as the primary challenge. Workslop refers to AI-generated content created for work-related tasks that appears authoritative and well researched but in fact contains errors, lacks substance, merely repackages existing concepts in different words, or is otherwise not fit for purpose.
The authors explain that workslop typically arises in "stretch" assignments, where junior employees are asked to tackle problems beyond their current expertise as part of their professional development. Traditionally, these employees would gather and analyze information themselves before presenting findings and potential solutions to the senior employees who make final decisions. However, some junior employees are now using AI to conduct most or all of this analysis rather than doing the work themselves, which is not inherently problematic if proper controls and training programs are in place.
The situation becomes problematic when junior employees lack the experience to verify AI output and pass AI-generated work product along to senior employees without disclosing its origins. This is most likely when junior employees face heavy workloads and tight deadlines, which make them more inclined to submit AI-generated content that looks accurate without conducting thorough independent verification. The authors identify this as a significant resource-allocation and risk-control issue for organizations.
The authors outline two major consequences of workslop. First, resources are misallocated: when senior employees identify the poor-quality content, they must conduct the research from scratch to verify facts and ensure analytical integrity, tasks that should have been completed by the junior employees in the first place. Research from BetterUp Labs indicates that approximately 40% of workers have received such poor-quality AI-generated content in the past month, with each instance taking around two hours to clean up, draining productivity and eroding trust in AI. Second, risk increases while mitigation decreases: when senior employees fail to identify workslop, they rely on work product that appears accurate and sophisticated but is actually wrong or incomplete, mistakenly assuming it has been properly researched and vetted. The result can be flawed internal decisions, damaged client relationships, reputational harm, and legal liability. Finally, the authors provide their list of Eight Ways to Reduce the Risks of Workslop…