Low-effort, AI-generated work content that masquerades as substantive output is costing companies significant time and money whilst damaging workplace collaboration, according to research examining why organisations see little return on their generative AI investments.
The study by BetterUp Labs and Stanford Social Media Lab surveyed 1,150 US-based full-time employees and found that 40 per cent had received “workslop” – AI-generated content that appears polished but lacks substance to meaningfully advance tasks – in the previous month, reports Harvard Business Review.
Employees estimated that an average of 15.4 per cent of content they receive at work qualifies as workslop. Each incident requires an average of one hour and 56 minutes to address, creating an invisible tax of $186 per month per affected employee. For an organisation of 10,000 workers, this translates to over $9 million annually in lost productivity.
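The reported figures can be reconciled with a back-of-envelope calculation. A sketch follows, under one assumption not stated explicitly in the article: that the $186 monthly cost applies only to the roughly 40 per cent of employees who received workslop, which is the reading that lands near the quoted $9 million.

```python
# Back-of-envelope check of the study's cost figures.
# Assumption (not stated in the article): the $186/month "invisible tax"
# applies only to the ~40% of employees who reported receiving workslop.

monthly_cost_per_affected = 186   # USD per affected employee per month
org_size = 10_000                 # workers in the example organisation
affected_share = 0.40             # 40% received workslop in the prior month

annual_cost = monthly_cost_per_affected * 12 * org_size * affected_share
print(f"${annual_cost:,.0f} per year")  # prints "$8,928,000 per year"
```

With the survey's exact (unrounded) share of affected employees, this lands just over the $9 million the researchers report; the rounded 40 per cent figure used here gives approximately $8.9 million.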
Workslop flows mostly between peers, accounting for 40 per cent of cases, though direct reports also send it to managers 18 per cent of the time, and 16 per cent flows downward from managers or higher levels to their teams. Professional services and technology sectors are disproportionately affected.
Unlike traditional cognitive offloading to machines, workslop uses AI to offload cognitive work onto other humans. Recipients must decode the content, infer missing or false context, and often undertake rework or awkward exchanges with colleagues.
The interpersonal costs prove substantial. Approximately half of surveyed employees viewed colleagues who sent workslop as less creative, capable and reliable than before receiving the output. Forty-two per cent saw them as less trustworthy, and 37 per cent as less intelligent. When asked how receiving workslop feels, 53 per cent reported annoyance, 38 per cent confusion and 22 per cent offence.
One third of recipients notify teammates or managers about workslop incidents, potentially eroding trust between sender and receiver. Thirty-two per cent report being less likely to want to work with the sender again.
The research identifies that workers with high agency and high optimism – termed “pilots” – use generative AI 75 per cent more often at work than those with low agency and low optimism. Pilots are more likely to use AI to enhance creativity, whilst low-agency, low-optimism workers more frequently use AI to avoid doing work.
Researchers recommend that leaders model purposeful AI use, establish clear norms around acceptable use, and frame AI as a collaborative tool rather than a shortcut. Indiscriminate organisational mandates to use AI everywhere yield indiscriminate usage, encouraging employees to thoughtlessly copy and paste AI responses even when the technology isn’t suited to the task.