April 27, 2026

The Slop Isn't Coming From the Robot

Critics are right that most AI content is slop. They're wrong about where it comes from. The slop isn't the model. It's people typing one sentence into a blank chat window and pasting whatever falls out. Here's what separates the two camps, and the test you can run on anyone claiming to use AI well.

Here's something I've stopped arguing about: whether AI produces slop.

It does. Most of it. Scroll LinkedIn for ten minutes and you'll find the receipts. The same em-dashes, the same "in today's rapidly evolving landscape," the same hollow scaffolding wrapped around no actual point. The skeptics are right about what they're seeing.

What they're wrong about is where it comes from.

The argument I keep hearing

The standard objection runs like this: AI writing is inauthentic because a machine generated it. If you used Claude or ChatGPT to help write something, your fingerprints aren't on it anymore. The reader is being fooled. The work is fake.

I get the instinct. I share the irritation. I've also stopped buying the conclusion.

What's actually producing the slop

The slop isn't coming from the model. It's coming from people typing one sentence into a blank chat window and pasting whatever falls out.

That's not AI writing. That's nobody writing.

Think about what those prompts look like. "Write me a LinkedIn post about leadership." "Draft a blog post on AI adoption." "Give me five tips for productivity." There's no point of view in the input, so there's no point of view in the output. The model fills the void with statistical averages. The most predictable phrasing, the safest structure, the words that show up most often in content that looks like the thing you asked for.

Of course it sounds generic. You asked for the average and the average is what you got.

The enemy here isn't AI. The enemy is the fantasy that you can skip the upstream thinking and still produce something that sounds like you. That fantasy was always going to lose. AI just made the losing faster and more visible.

Where I land on this

I use Claude almost every day for content work. I'm not defensive about it and I'm not embarrassed about it. Here's why.

Two documents do most of the work in any session. One is a voice profile, about 1,500 words that capture how I actually write. Sentence structure. Signature phrases. The setup-payoff rhythm I use. The pop culture references I reach for. The exact words I refuse to use ("delve," "leverage," "robust," "in today's fast-paced landscape," that whole tired vocabulary). The transformation examples that show generic prose next to my voice so the model can see the difference, not just hear me describe it.

The second is an audience profile. Four micro-segments I write for, each with their own goal pyramid, their own pain language in their own words, their own hiring moments, the specific situations that send them looking for help. I know which of these segments overlap and which don't. I know which topics are universal and which serve only one. There's a scoring template at the end that I run topics through before I write. If a topic doesn't fit a segment, I don't write it.
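For readers who like to see a process made concrete: here's a minimal sketch of what a topic-fit scoring pass like that can look like in code. The segment names, criteria weights, and threshold below are illustrative placeholders, not the actual template described above.

```python
# Illustrative sketch of a topic-fit scoring pass.
# Segment names, weights, and the threshold are placeholders,
# not the real scoring template.

SEGMENTS = {
    "ceos": {"strategy": 3, "hiring": 2, "tooling": 1},
    "technical_specialists": {"tooling": 3, "workflow": 2, "strategy": 1},
    "founders": {"strategy": 2, "hiring": 2, "workflow": 2},
    "authors": {"workflow": 3, "tooling": 1, "strategy": 1},
}

FIT_THRESHOLD = 4  # below this, the topic doesn't get written

def score_topic(topic_tags):
    """Score a topic (a set of tags) against every segment."""
    return {
        name: sum(weights.get(tag, 0) for tag in topic_tags)
        for name, weights in SEGMENTS.items()
    }

def should_write(topic_tags):
    """Write only if at least one segment clears the threshold."""
    return max(score_topic(topic_tags).values()) >= FIT_THRESHOLD

# A topic tagged strategy + hiring clears the bar for the CEO segment (3 + 2 = 5).
print(should_write({"strategy", "hiring"}))
# A topic that serves nobody gets skipped.
print(should_write({"gardening"}))
```

The point of a pass like this isn't precision. It's that the decision to skip a topic happens before any writing starts, which is where the real filtering work belongs.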

Together those two documents took years to produce. Not years of typing. Years of writing, watching what landed, talking to clients, noticing my own patterns, refusing the words that didn't sound like me, building the audience model one conversation at a time.

That work happened before I opened a single chat window.

The actual question to ask

The authenticity test isn't "did a human type every word." It's whether the piece reflects a specific human's judgment, experience, and point of view.

By that standard, content I produce with AI assistance is more authentically mine than content I'd type at 11pm when I'm tired and reaching for the nearest cliché. The profiles are doing exactly what good editorial direction has always done. Keeping the writing aligned with the writer, especially when the writer is fatigued, distracted, or tempted to coast.

Don't get me wrong. There's a version of AI-assisted writing that is exactly what the skeptics describe. I just don't do that version. I don't think most serious people who use these tools well do it either. We're not skipping the work. We're moving the work.

A camera analogy that actually fits

Photography had this exact debate when it showed up. Critics argued that mechanical reproduction couldn't be art because the machine did the work. The lens, not the human, captured the image. Where was the craft?

The answer turned out to be: the machine does the capture, the human does the seeing. Composition, timing, subject, light, what to include and what to leave out. All of that is upstream of the shutter. A photographer with judgment produces something a tourist with the same camera never could.

Same structure here. The model does the generation. I do the seeing.

The voice profile is evidence of seeing. Nobody who hadn't spent twenty-plus years writing in their own voice could produce that document. The audience profile is evidence of seeing. Nobody who hadn't talked to hundreds of CEOs, technical specialists, founders, and authors could name those four segments at that level of detail.

Hand someone the camera and they don't become a photographer. Hand someone Claude and they don't become a writer. The work that makes the output good was always upstream of the tool.

The harder version of the skeptics' argument

There's a version of the anti-AI position I do take seriously: most people are skipping the upstream work. Most people are producing slop because they don't have the voice profile or the audience model to feed in. They couldn't articulate their voice if you asked them to, because they never had a strong one. The tool is exposing that gap.

That's not an argument against AI. That's an argument for doing the work the tool requires.

If you can't articulate, in 500 or more words, how you actually write, what words you use, what sentence shapes, what your hooks look like, what you refuse to say, then the model doesn't have anywhere to anchor and you'll get average output. Not because the model failed. Because you haven't brought anything to it.

The cost of bad AI writing isn't the AI. It's the moment of seeing your own thinking laid bare and realizing how thin it was.

What I'd tell a skeptical client

If you're considering whether to work with someone who uses AI in their content process, make one request: show me what you give the model.

If they have a voice profile, an audience model, a topic-fit scoring system, and a polish pass against a list of phrases they refuse, they're not letting AI write. They're using AI as a fast, patient assistant who writes inside the constraints they spent years building.

If they don't have any of that, you're right to walk. But you'd be right to walk from their work even if AI didn't exist. The slop was always going to come out of them. AI just made it cheaper to produce more of it.

My point is simple

The slop isn't coming from the robot. It's coming from people who didn't do the work before they sat down at the keyboard, and the keyboard, in this case, happens to be a chat window.

The writers who do the upstream work get more leverage than ever. The writers who don't are exposed faster than ever.

That's the actual story. The rest is noise.


About the Author

Chris Lema has spent twenty-five years in tech leadership, product development, and coaching. He builds AI-powered tools that help experts package what they know, build authority, and create programs people pay for. He writes about AI, leadership, and motivation.
