AI offers workers the promise of increased efficiency and productivity, freeing them up from repetitive work to tackle more complex tasks. But as companies have rolled out AI tools to employees, many are facing a different challenge: AI-generated work that does the opposite.
The quality of AI-generated content depends in large part on the skills of the person collaborating with the tool, and not everyone has been equipped with the right skill sets in this area. The result is what the Stanford Social Media Lab and BetterUp Labs have coined AI “workslop” — which they define as “AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”
“AI workslop is what happens when organizations use the wrong AI at the wrong time, deploying large language models designed for creativity and reasoning into situations that demand precision, governance, and reliability,” says Don Schuerman, CTO of Pegasystems. “The result is outputs that look polished on the surface, but crumble under scrutiny — inconsistent or poor recommendations, hallucinations, or actions that don’t align with an organization’s policies or regulatory compliance.”
What is AI workslop and how does it happen?
According to a report from Stanford Social Media Lab and BetterUp Labs published in Harvard Business Review, 40% of 1,150 US-based employees surveyed said they had received AI workslop from a coworker within the past month — estimating that it makes up around 16% of the content they receive at work. Workslop is typically sent between coworkers (40%); however, workers also report instances of workslop being sent to managers by direct reports (18%), and vice-versa (16%). And while AI workslop occurs across every industry, it’s most prevalent in the professional services and technology industries, according to the survey results.
Erik Roth, founder of McKinsey’s generative AI platform Lilli, says one example of AI workslop is when “employees take outputs from large language models almost verbatim” and pass that off as the final content.
This type of AI content is usually of even worse quality because the employees who take this approach often aren’t versed in crafting AI prompts, don’t know how to spot AI hallucinations or false information, and don’t take the time to ensure the AI-generated results meet human quality standards.
AI workslop is ultimately “content that’s thin on context, light on domain judgment, and shipped with little human refinement. It’s the illusion of productivity without real value creation,” says Roth.
Paul Farnsworth, president at Dice, says he’s seen AI content that at first glance “looks polished,” but “falls apart on a second read.” Whether it’s incorrect math, data, logic, or content that “just doesn’t say anything meaningful,” his main caution is that “over-reliance on AI can create a false sense of efficiency.” It gives you the illusion that you’re working faster, when “you’re actually spending more time revisiting and clarifying later,” he says.
AI-generated workslop generates additional work and frustration
Low-quality AI content passed off to colleagues often creates more work for those on the receiving end. According to Stanford Social Media Lab and BetterUp Labs, AI workslop burdens employees with nearly two hours of additional work on average, as they are forced to parse the content to correct errors, identify false information, and sometimes rewrite the content or code from scratch.
This effort carries an “invisible tax” of up to $186 per employee per month, the labs estimate, which can add up fast. For example, an organization of 10,000 employees with a 41% prevalence of AI workslop can incur nearly $9 million in lost productivity per year, the labs calculate.
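The labs’ back-of-the-envelope math can be reproduced in a few lines. This sketch assumes one plausible reading of how the published figures combine — headcount times the 41% prevalence rate times the $186 monthly tax, annualized — which lands close to the labs’ stated total; their exact methodology may differ.

```python
def annual_workslop_cost(employees: int, prevalence: float,
                         monthly_tax: float = 186.0) -> float:
    """Estimated yearly productivity loss from AI workslop across an org.

    Assumes the monthly "invisible tax" applies to the fraction of
    employees encountering workslop (the prevalence rate).
    """
    return employees * prevalence * monthly_tax * 12

# The example from the report: 10,000 employees, 41% prevalence
cost = annual_workslop_cost(10_000, 0.41)
print(f"${cost:,.0f}")  # prints "$9,151,200" — in line with the ~$9M figure
```

Note that the result scales linearly in each input, so halving prevalence through better training halves the estimated loss.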
AI workslop also creates tension among coworkers. When asked how they felt receiving this type of content, employee responses included annoyed (53%), confused (38%), and offended (22%). The report also found that recipients viewed colleagues who sent them workslop as “less creative, capable, and reliable” than they did before, while 42% said they viewed those coworkers as “less trustworthy” and 37% said “less intelligent.”
It’s also causing employees to report one another to management, with 34% saying they have notified other teammates or managers about AI workslop, and 32% saying they are less likely to want to work with someone after receiving workslop.
“Poorly managed AI doesn’t just slow work down, it erodes trust. When employees are constantly fixing or fact-checking AI-generated outputs, it creates fatigue and skepticism. Instead of becoming a productivity partner, AI becomes another item on the to-do list, and one that generates more work instead of reducing it,” Pegasystems’ Schuerman says.
Managing and avoiding AI workslop
The first lines of defense against AI workslop are education and governance, Schuerman says, advising IT leaders to equip employees with AI literacy through training and experimentation, and to encourage them to question AI outputs and understand how AI generates results.
IT leaders should also build in guardrails and ensure employees have access to the right tools for the right tasks, he adds. “When AI systems are integrated into structured workflows, with visibility, feedback loops, and audit trails, workers don’t have to guess what ‘good’ looks like. They see it modeled in every task.”
Dice’s Farnsworth also advocates for guidance and governance. “Organizations need to remember that AI is only as good as the human behind the wheel. If you’re not investing in guidance and governance, AI tools can quickly become a liability instead of an advantage,” he says. “The key is using AI with intention — know what you’re asking it to do, and be ready to step in where needed.”
Not everyone is going to take to AI right away. Still, it’s important for IT leaders to be active in taking employees along on the AI journey, showing them examples of strong and weak AI content so they can learn to tell the difference.
“Training employees to effectively use generative AI starts with demystifying it,” Farnsworth says.
As employees get more comfortable and savvier using AI, workslop should decrease over time.
“Ultimately, AI quality isn’t just a technical issue; it’s a cultural one,” Pegasystems’ Schuerman says. “Organizations that invest in predictable, governed AI not only get better results, but they also build a workforce that trusts and amplifies those systems responsibly.”