AI slop 2.0 is harder to avoid

Jensen Huang (CEO of NVIDIA) said something in a talk late last year that I keep coming back to:

"The poorly defined work is the most valuable of all work."

His point: as AI gets better at handling defined work (the structured, procedural, follow-the-steps stuff) the poorly defined work becomes more important, not less.

“Poorly defined” work is:

  • Work where the right answer doesn't exist yet.

  • Work where someone has to decide, using judgment and experience and instinct accumulated over years.

  • Work where AI doesn't yet meet the quality bar.

He named four markers of that kind of human wisdom:

  • taste

  • first principles thinking

  • strategic subtraction

  • honoring your scars

I love all four.

But I want to talk about strategic subtraction — the art of knowing what to leave out. It's peak skill in any craft — knowing what to remove is the real art form. Ask any sculptor.

“I would have said less, but it would have taken me longer.”

— me on Mark Twain on Blaise Pascal

Unsurprisingly, strategic subtraction has become one of the slipperiest skills in AI-assisted work. And the better you get at using AI in your work, the more important it is to not lose focus on this.

An example from my own desk.

Two days ago, a client asked me what I'm seeing across other arts and culture organizations relative to AI anxiety and adoption. A CIO friend. Casual question. He wanted my read.

My workflow as a full-time AI practitioner and researcher went like this:

  1. “Hey, I have a full catalog of cross-organizational interviews and survey responses to mine for anxiety themes.” I went into my Claude Project (loaded with all my interview analysis and survey data) and had Claude pull key themes and stats on that specific angle. Cross-referenced what came back for accuracy. Made language tweaks.

  2. Then I thought... “what if I put this into a data visualization?” Pie and bar charts with accessibility-friendly colors. Done.

  3. “Hmm…what if I make it a two-page summary with their organizational data on one side and wider industry benchmarks on the other - so they can see how they're tracking?”

  4. Then... (snowballing now) “I just built a fresh brand token guide for my consultancy as a markdown file... let me brand this PDF.”

  5. Then... (full toboggan mode and having a great time) “I have a voice guide that keeps the prose sounding like me and not like a robot... let me wash it through that too.”

Proofed every word. A few iterations. Checked all the data. Nuanced the language.

Thirty minutes later I had a polished, branded document grounded in the organization's own data, cross-referenced with wider industry data, layered with my own anecdotal observations from working with cultural organizations on AI adoption.

100% not possible 6 months ago.

It came together fast and it felt like good work. I emailed it.

But something was bugging me. A small anxiety nagging.

A premise of good human communication is a well-matched “call and response.” Someone says something. The other person responds with a “yes… and” — real communication. This is what was bugging me.

My CIO client had essentially said "hey, what are you seeing in your work more widely about AI anxiety?" and I returned… a monolithic document with branding.

The mismatch between what he asked for and what I produced was notable. And (I realized later)… off base.

He didn't ask for a two-pager. The information in it was solid. But I’d just unwittingly produced AI Slop 2.0.

If someone had sent me that document, I'd have dropped it right into Claude and asked for a summary.

Tokens burned on over-producing. More tokens burned on the recipient needing to decode it. Net value compared to a thoughtful email: probably negative.

What I’m calling AI Slop 2.0

Most of us know AI Slop 1.0 by now. The original slop. Bloated verbosity. Purple gradients. Em-dashed to death (I refuse to relinquish em-dashes — I simply love them).

AI Slop 2.0 is different. The quality of the work is GOOD, the result of solid AI-human collaboration. But 2.0 Slop overshoots the target. It's like having Hermione Granger answer every question in class.

Slop 2.0 is what happens when the tools make it so easy to over-produce that you do it reflexively. The data is real. The analysis is carefully vetted. The branding is sharp. The voice is tuned. But the output overshoots what the moment called for.

Slop 2.0: Not just for beginners.

The people most likely to produce Slop 2.0 are the ones who have built real skill with these tools and are exhilarated at the new pace of work.

The layer-cake of techniques I used — project-loaded context, targeted data pulls, cross-referencing, voice tuning, branded output — represents a lot of accumulated tooling, prompts and technique. Getting good at AI-assisted work is hard. It's a huge focus of what I train arts and culture organizations to do, because the distance between "I tried ChatGPT once" and "I can reliably produce high-quality work with AI" is enormous.

Mastering the tools matters. A lot.

And. That very mastery is what creates the Slop 2.0 trap. The better you get, the more frictionless it becomes to produce a polished artifact — and the easier it is to skip the question of whether a polished artifact is what the moment actually needs.

Skill without subtraction.

This isn't something you learn once and move past. The tools keep getting more capable. Every few months, something that used to be a 2-hour task becomes a 10-minute task. And every time the friction drops, the temptation to over-produce resets. It's a moving frontier.

Why I think this is a pattern worth naming.

I think Slop 2.0 is going to become a widespread challenge — not just for individuals but for organizations too. As AI tools become standard across arts and culture and everywhere else, keeping a genuinely human “call and response” is going to mean overcoming the “it’s so cool to produce this fancy thing” adrenaline rush.

Think about it from the “poorly defined work” framework. Knowing how to use Claude to create a branded data visualization from raw survey data? That's defined work. Structured, learnable and valuable. And AI keeps making it easier to do well.

But knowing whether a branded data visualization is what this particular person, in this particular moment, actually needs from you? That's poorly defined work. That answer comes from your precious human intuition. Invaluable.

That's taste. That's strategic subtraction. And you don't get it from the tool. You bring it to the tool.

That kind of knowledge has always been the irreplaceable core of cultural institutions. AI doesn't diminish it. If anything, AI makes it more important — because as the tools handle more of the defined work, the poorly defined work is increasingly where the real value lives.

The gap to close right now is the skills gap. Absolutely. Learn the tools. Get good at them. That's urgent and it's this year's work.

But as we get good? We're all going to need to develop sharper instincts about when to deploy our arsenal of new skills and when to hold back. When the moment calls for the full layer-cake treatment and when it calls for a plain email in Arial 10pt.

That's the craft. And it doesn't come from a prompt. It comes from us.

Kristin Darrow is founder of AI for Arts and Culture and works with cultural organizations on modernizing their skills, teams, culture and strategy for the AI era.
