March 20, 2026
How Should Your Strategy Change When Coding Costs Shrink to Nothing?
When coding costs shrink to nothing, the smart strategy isn't perfecting one version. It's generating many and synthesizing the best.
A ceramics teacher divided his class into two groups on the first day.
One group would be graded on quantity. On the last day of the semester, he'd weigh all the pots they made. Fifty pounds earned an A, forty earned a B, and so on. The other group would be graded on quality. They only needed one pot. But it had to be perfect.
At the end of the semester, something surprising happened. The best pots came from the quantity group. Not the quality group.
While the quantity group churned out pot after pot, learning from mistakes, refining technique through repetition, the quality group spent the semester theorizing about perfection. Deliberating. Second-guessing. They had little to show for it beyond mediocre work and untested theories.
The story comes from Art & Fear by David Bayles and Ted Orland. It was published in 1993. And it might be the best framework I've found for thinking about what happens when AI makes code nearly free to produce.
Because when the cost of making something drops to zero, your entire strategy should shift from perfecting one thing to generating many things. And most people haven't made that shift yet.
The Way Most People Build
Here's what I see when I talk to people building with AI coding agents.
They start a project. They prompt their agent. They get code back. They iterate. They refine. They go round after round, nudging the AI closer to what they want, fixing bugs, adjusting the architecture, polishing the design.
It works. Kind of.
But it's the quality group approach. One version, made as perfect as possible through deliberation and revision. And just like those ceramics students, the process optimizes for thinking about the work instead of doing more of it.
Think about it. When you're iterating on a single codebase, you're trapped in that codebase's assumptions. The architectural decisions made in the first ten minutes become the foundation everything else is built on. If the initial approach was 80% right, you spend all your iteration budget chasing that last 20%, when a completely different starting point might have gotten you to 95% on the first pass.
Most people don't even consider this because code has always been expensive. Writing it took time. Rewriting it took more time. Starting over felt like failure.
But here's the thing. Code doesn't cost what it used to.
Four Teams, One Vision, Four Folders
I'd already built out all the definitions, roles, skills, and templates my agents need to write code the way I want it. That groundwork was done. But the ceramics story kept nagging at me.
So the other day, I did something I'd never done before.
I asked my agents to form four separate teams. All four read the exact same vision document. Same requirements. Same constraints. Same goals. Then I assigned each team to its own folder: A, B, C, and D.
And each team built the software independently.
No coordination between teams. No shared progress. Four separate implementations of the same idea, built in parallel, with zero communication.
When they were done, I asked my AI to review all the code across all four folders. Not to pick a winner. To pick the best implementations, the best design decisions, the best algorithms from each version and synthesize them into a fifth version.
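The workflow above can be sketched in a few lines. Everything here is hypothetical scaffolding, not the author's actual setup: `run_team` stands in for whatever launches one agent team (a CLI invocation, an API call), and the fifth synthesis pass would be another agent run that reads all four folders.

```python
import concurrent.futures
from pathlib import Path


def run_team(team: str, vision: str, workdir: Path) -> Path:
    """Stand-in for launching one agent team.

    In practice this would invoke your coding agent with the shared
    vision document, leaving a full implementation in its own folder.
    """
    folder = workdir / team
    folder.mkdir(parents=True, exist_ok=True)
    # Placeholder artifact; a real run would produce a complete codebase.
    (folder / "IMPLEMENTATION.md").write_text(
        f"Team {team}'s build from the shared vision:\n{vision}\n"
    )
    return folder


def build_in_parallel(vision: str, workdir: Path, teams=("A", "B", "C", "D")):
    """Run every team against the same vision, with zero coordination."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(teams)) as pool:
        futures = [pool.submit(run_team, t, vision, workdir) for t in teams]
        return [f.result() for f in futures]

# After all four finish, a fifth agent pass would review every folder
# and synthesize the strongest pieces into a combined version E.
```

The point of the sketch is the shape of the process: four independent runs from one shared input, then a comparison step, rather than four rounds of revision on one output.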
What I got was fantastic.
Why This Works (and Why It Didn't Used to)
The ceramics teacher understood something most people still miss. The path to quality runs through quantity. But he needed an entire semester for his students to produce that quantity. The iteration happened across weeks and months, pot after pot.
Here's my twist: when your builders are AI agents, you don't need the calendar.
You can run the quantity experiment in an afternoon. Four versions, built simultaneously. No waiting for lessons from version one before starting version two. The lessons emerge in the comparison, not in the sequence.
Each team made different architectural choices. Team A might have optimized for simplicity. Team B might have found an elegant algorithm for the hardest problem. Team C might have nailed the error handling. Team D might have structured the code in a way that's easier to extend later.
No single team produced the best version. But between them, they produced every component of the best version. The synthesis step just assembled the puzzle.
This only works because coding costs shrank. When code costs real time and real money, building four versions is four times the expense. That's a ridiculous luxury. But when AI agents can each produce a complete implementation in the same window of time you'd spend iterating on one? Four versions cost roughly the same as one.
The math flipped. Iteration used to be cheap and generation was expensive. Now generation is cheap and the only expensive resource is your judgment about what's good.
What This Actually Changes
If you've been treating your AI agents like junior developers, you probably already know they get better with more reps. More context, better prompts, tighter feedback loops. That's the standard playbook and it works.
But the standard playbook is still the quality group approach. One codebase, refined through iteration.
The quantity group approach says: stop refining one version. Generate several. Then bring your judgment to the comparison. You're not the potter anymore. You're the ceramics teacher, evaluating which pots are best and understanding why.
This matters most when you're building something new. When you don't know the optimal architecture yet. When the problem has multiple valid approaches and you won't know which one is best until you see them side by side.
If you're a product person with a clear vision but limited coding background, this is especially useful. You don't need to know which architectural approach is best before you start. Let four teams explore four approaches, then pick the best parts of each. Your judgment about what works for users doesn't require you to have opinions about code architecture.
The Real Lesson From the Ceramics Class
Bayles and Orland's point was that making teaches you more than thinking about making. The act of creation, repeated, produces understanding that theory never can.
When the cost of creation drops to nearly nothing, you don't have to choose between quantity and quality. You get both. In parallel. In an afternoon.
The question I keep coming back to: where else does this apply?
If generating code four times produces a better synthesis than iterating on one version four times, what happens when you apply the same logic to product prototypes? To marketing copy? To architecture decisions?
When making costs nothing, the only remaining question is whether you're willing to let go of the instinct to perfect one thing and instead trust the process of generating many things and selecting the best.
About the Author
Chris Lema has spent twenty-five years in tech leadership, product development, and coaching. He builds AI-powered tools that help experts package what they know, build authority, and create programs people pay for. He writes about AI, leadership, and motivation.