Could you tell, after one paragraph, that a document was written by AI? Most people can. The phenomenon they are flagging is widespread enough that it now has a name: workslop, shorthand for low-quality, unvetted AI-generated content. Forty percent of U.S. desk workers report receiving it in the past month, and once it lands in an inbox the costs cascade. Recipients spend an estimated $186 per worker per month on cleanup, according to BetterUp and Stanford's Social Media Lab. The sender pays a second time, in reputation: about half of recipients now view senders of workslop as less creative, capable, and reliable. A separate Workday study adds the closing twist: roughly 37% of the productivity gains AI was meant to deliver are immediately wiped out by rework. Faster generation, more cleanup, net zero.
The instinct after numbers like these is to look at the technology: a newer model, better prompts, tighter AI policies. Those have their place. But they do not close the gap that makes workslop unmanageable: AI now generates output faster than most teams have learned to apply judgment to it. That is partly a tooling problem, and squarely a thinking, HR, and L&D one.
Closing it takes a different lens on the problem. Whole Brain® Thinking, built on decades of cognitive neuroscience research at Herrmann, categorizes thinking into four cognitive preferences everyone draws on in different proportions: analytical (facts and evidence), practical (process and execution), relational (people and communication), and experimental (possibility and the unconventional). The HBDI® maps where each individual and team naturally leans and where they tend to drop off. Once that pattern is visible, what makes workslop possible comes into focus: one preference carries an entire AI interaction while the other three sit out. Each preference, when active, catches a different kind of AI failure. The four sections that follow take each one as a checkpoint.
Analytical thinking: is this actually true?

Thinking analytically about AI output catches what fluency hides. AI models generate confident sentences whether the underlying information is verified, partially true, or entirely fabricated. Without that scrutiny, three things tend to slip through: invented citations, misstated statistics, and outputs that simply rephrase the prompt as if it were insight.
The trouble usually starts at the prompt. "Write me a strategy document" returns the average of every strategy document on the internet. "Analyze these three market scenarios against our Q3 revenue targets" gives the model something to grip and forces the person prompting to clarify what they are actually trying to figure out.
The same rigor applies on the way out. The Deloitte Australia case is the example most teams now reference: a government report submitted by the firm contained fabricated academic citations and a misquoted federal court judge. The cost was a partial refund and a hit to a global consultancy's credibility. A useful gut-check before sending anything: would I stake my professional reputation on every sentence here? Watch in particular for outputs that feel substantial but are really just the prompt restated in more words. Volume masquerading as insight is the most common form of workslop.
The HBDI® shows how analytical thinking is distributed across a team: who leads with it naturally, and which colleagues lead with something else first, especially under deadline pressure when AI outputs are most likely to ship as is.
Practical thinking: is there a system catching this?

Thinking practically about AI workflows catches the systemic source of workslop. The phenomenon proliferates not because individuals are careless but because most organizations have no shared standard for how AI-assisted work gets reviewed, so cleanup defaults to whoever is on the receiving end. A practical lens turns AI quality from individual willpower into a system the team can rely on.
Decisions worth making upfront include which tasks AI is allowed to touch, which review steps happen before output goes out, and what good looks like for the team. For writing tasks specifically, the highest-quality results come from constraint, not freedom. Short, scoped prompts where someone has already done the thinking and given the model rich context tend to need very little cleanup. Open-ended drafts almost always do.
After the output arrives, a short quality pass catches what speed missed. One underrated technique is to ask the AI for comments and suggestions on existing writing rather than letting it produce a full redraft, which keeps the author in charge of what to accept. Even thirty minutes between generation and sending often surfaces what was assumed to be on the page but is not.
Whole Brain® Thinking turns this from individual habit into a team-level pattern: which preferences each person leads with in AI work, and which their colleagues lead with instead.
Relational thinking: how will this land on the other end?

Thinking relationally about AI output catches what speed strips out. The Stanford and BetterUp study found that workers who receive workslop feel annoyed (53%), confused (38%), and offended (22%). When the relational lens is absent from the loop, what AI removes (specificity, warmth, audience awareness) is what the recipient notices most.
A relational orientation starts before the prompt, not after the output. The colleague on the other end is a specific person with a specific situation and a specific style. Skipping straight to the model means skipping the human's best thinking and producing something built for nobody in particular.
After the output appears, the test is to read it as if you were the recipient. Add specificity, warmth, the small acknowledgements that signal you actually thought about the person. The defining feature of workslop is interchangeability: swap the sender's name and nobody would notice. If that is true of what is about to be sent, it is not ready.
Whole Brain® Thinking makes that pattern visible at the team level: who leads with audience awareness naturally, and which colleagues lead with the other three preferences first.
Experimental thinking: expanding thinking, or replacing it?

Thinking experimentally about AI use makes a clean distinction. AI is at its best when it stretches the options under consideration, challenges assumptions, and surfaces angles that were missed. It is at its worst when it produces the final deliverable.
The high-value pattern for brainstorming is to start with your own ideas first, then ask AI to challenge them, argue the opposite position, or suggest approaches from adjacent fields. The value is in the collision between human thinking and the model's output, not in handing the thinking over.
When the output comes back, curation is the human job. AI will produce ten ideas when two are needed, and recognizing which have genuine spark and which are fluent filler is something the model cannot do. Neither is triangulation: cross-referencing against independent research or a colleague with different expertise. AI-generated insight that has not been tested against reality is speculation with good formatting.
Whole Brain® Thinking is how a team sees its own distribution: who leads naturally with experimental thinking, and which colleagues lead with the three preferences that pair best with it.
Why review alone isn't enough
Because the four preferences come in opposing pairs, and each person naturally leads with some while engaging less with their opposites. A reviewer who leads relationally feels the warmth in the AI's email; whether the numbers add up sits further from where their attention goes first. A reviewer who leads analytically fact-checks the citations; how transactionally the message reads does not register as quickly. A reviewer who leads practically spots a timeline that won't hold; whether the underlying approach is merely the conventional one slips past. A reviewer who leads experimentally pushes the AI to argue the opposite; whether the proposed steps are realistic gets less scrutiny.
This is the expert paradox under workslop: the reviewer naturally leads in one direction, and the AI's weakness sits in the opposite one. Whole Brain® Thinking and the HBDI® make those leans visible, person by person. With that pattern in view, review stops being a single person's judgment call and becomes a question of whose natural lean needs to be in the loop before the work goes out.
What does this mean for AI adoption?
Workslop is what comes out when AI gets used at the speed of one preference and the depth of none. The four preferences are not a checklist to memorize. They are how the brain naturally divides the work of producing something good, and they show up in AI output as clearly as fingerprints when one is missing.
The organizations that close the cognitive gap will not be the ones that ban AI, nor the ones that adopt it uncritically. They will be the ones whose people understand which preferences they naturally lead with, and which preferences their colleagues lead with instead. AI amplifies whichever preference someone brings to it. The question for HR and L&D leaders is whether their teams are bringing one or four.
That self-awareness is something a team can develop. The HBDI®, the Herrmann assessment that translates Whole Brain® Thinking into something a team can see, makes thinking preferences visible person by person and team by team. Once a team can see its own pattern, the question of how to use AI well stops being abstract. It becomes a concrete conversation about who needs to bring what.
Request a demo to see how Whole Brain® Thinking and the HBDI® apply to AI adoption on your team.