A little more than twenty years ago, I was giving a talk in Orlando. We had built a code generation platform, and I was showing it off at an industry event.
Someone in the audience raised their hand. “Are you worried about giving up control? You're letting the computer generate code for you.”
I told them my “two folks in the back of the room” story.
It goes like this.
Imagine a meetup where someone is showing off some new code they wrote in C. There are two folks in the back of the room. “That's not real programming,” they say. “They don't have to deal with memory layout.”
Not too many years later, we're at another meetup where someone is showing off some new code they wrote in Visual Basic. There are two folks in the back of the room. “That's not real programming,” they say. “They don't have to manage their own memory.”
Not too many years later, we're at another meetup where someone is showing off some new code they wrote in JavaScript. There are two folks in the back of the room. “That's not real programming,” they say. “They don't have to deal with compilation strategy.”
Trust me when I tell you this: there will always be someone in the back of the room, and they have been there for decades, telling you that what we're doing right now isn't real programming.
In fact, you don't have to attend a meetup in person. Just open up Twitter and you'll see people telling you that AI code generation may be fun or interesting, but it's not real programming.
Here's what I've been thinking about lately: when exactly did any of us have this “control” we're so afraid of losing?
The Timeline Nobody Talks About
Let me walk you through what actually happened.
Machine Code → Assemblers. You controlled every memory address. But addressing was tedious and error-prone. So you delegated addressing to assemblers. You gave up control. You gained velocity.
Assembly → C. You controlled every instruction. But the cognitive load was massive. So you delegated instruction sequencing to compilers. You gave up control. You gained portability.
C → Managed Languages. You controlled memory allocation and lifetime. But memory bugs destroyed teams. So you delegated memory management to garbage collectors. You gave up control. You gained safety.
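If you want to see what that delegation bought, here's a minimal Python sketch: a reference cycle that manual memory management would force you to break by hand, reclaimed by the collector without a single free().

```python
import gc
import weakref

class Node:
    def __init__(self):
        self.next = None

# Build a reference cycle: each node keeps the other alive.
a, b = Node(), Node()
a.next, b.next = b, a
probe = weakref.ref(a)   # lets us observe when `a` is actually reclaimed

del a, b        # no free(), no destructor bookkeeping; we just let go
gc.collect()    # the collector finds the cycle and reclaims it
print(probe())  # None: memory management happened without us
```

In C, forgetting to break that cycle by hand is exactly the kind of bug that destroyed teams.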
SQL → ORMs. You controlled every query the database saw. But query tuning consumed entire careers. So you delegated query writing to ORMs. You gave up control. You gained development speed.
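Here's the shape of that trade as a sketch with SQLAlchemy (the users table and its columns are invented for illustration). You declare what you want; the ORM decides what SQL to emit, and the database's planner decides how to run it.

```python
from sqlalchemy import create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"   # hypothetical table, for illustration only
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]

engine = create_engine("sqlite://")   # throwaway in-memory database
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="ada"))
    session.commit()
    # You state the what; the SQL, the joins, and the plan are someone else's job.
    user = session.scalars(select(User).where(User.name == "ada")).first()
    print(user.name)
```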
Code → Frameworks. You controlled program flow. But boilerplate buried the interesting work. So you delegated control flow to frameworks. You gave up control. You gained focus on what mattered.
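Same move, sketched with Flask (the route and handler are invented for illustration): you hand the framework a function, and it owns the listen loop, the parsing, and the dispatch.

```python
from flask import Flask

app = Flask(__name__)

# No accept loop, no request parser, no dispatch table. You write the
# interesting part; the framework calls you when it matters.
@app.route("/health")
def health():
    return {"status": "ok"}   # Flask serializes the dict to JSON for you

if __name__ == "__main__":
    app.run()   # the framework drives; your code is called, it doesn't call
```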
Applications → Runtimes. You controlled scheduling, compilation strategy, execution order. Actually, no. You never controlled those. The runtime did. And the JIT compiler. And the OS scheduler.
You just thought you did because you typed the code.
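You don't have to take my word for it. CPython's dis module will show you the gap between what you typed and what actually runs:

```python
import dis

def seconds_per_day():
    return 24 * 60 * 60   # the arithmetic you "controlled"

dis.dis(seconds_per_day)
# The bytecode loads the constant 86400 directly. The compiler folded
# your multiplications away before the interpreter ever saw them.
```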
The Illusion Was Always Retrospective
Here's what nobody wants to admit: the sense of “control” was a story we told ourselves after the fact.
We believed we controlled code because we wrote it, it compiled successfully, and it behaved predictably in local tests.
But in production?
JIT compilers rewrote it. CPUs reordered it. Memory models relaxed it. Caches mutated it. Networks broke it. OS schedulers interrupted it.
We've been living in emergent systems since at least the '90s.
AI just makes that emergence visible.
What AI Actually Changes
Here's what AI does not change: you still don't control execution. You still don't control optimization. You still don't control runtime behavior. You still debug outcomes, not instructions.
That's been true for thirty years.
Here's what AI does change: it moves abstraction above code itself.
Instead of expressing intent through syntax, control flow, and boilerplate, you express it through prompts, constraints, acceptance criteria, and tradeoffs.
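Here's a sketch of what that looks like (normalize_email and its module are hypothetical, invented for illustration): the intent lives in the acceptance criteria, written as pytest tests, and it constrains whoever writes the implementation, human or model.

```python
import pytest

from emailtools import normalize_email   # hypothetical module under test

# The behavior is specified here, not in the implementation.
@pytest.mark.parametrize("raw, expected", [
    ("  Ada@Example.COM ", "ada@example.com"),   # trim and lowercase
    ("grace@EXAMPLE.com", "grace@example.com"),
])
def test_normalizes_case_and_whitespace(raw, expected):
    assert normalize_email(raw) == expected

def test_rejects_missing_at_sign():
    with pytest.raises(ValueError):
        normalize_email("not-an-email")
```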
This is the same move as Assembly → C. The same move as SQL → ORMs. Just one layer higher.
The Job Was Never “Write Code”
Here's the uncomfortable truth that makes this easier once you accept it:
The job was never “write code.”
The job was to define behavior, bound risk, detect failure, and correct drift.
That hasn't changed.
What has changed is that code is no longer the primary artifact.
The primary artifacts now are prompts, tests, contracts, observability, and feedback loops.
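A small sketch of two of those artifacts working together (the service and the invariant are invented for illustration): a contract checked at runtime, with a feedback loop that tells you when reality drifts from intent.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("checkout")   # hypothetical service name

def apply_discount(total: float, percent: float) -> float:
    result = total * (1 - percent / 100)
    # Contract: a discount never raises the price and never goes below zero.
    if not (0 <= result <= total):
        # Detection, not prevention: log the drift so the loop can correct it.
        log.warning("invariant violated: total=%s percent=%s result=%s",
                    total, percent, result)
    return result
```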
In other words: you don't own the code. You own the outcome.
That's not new. It's just no longer deniable.
Why This Feels Threatening
Control felt local when you typed every line, knew every symbol, and the system was small enough to hold in your head.
But that sense of control was already gone at scale. The moment your system had async processes, distributed state, or external dependencies, you stopped controlling it and started steering it.
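A few lines of asyncio make the point. You launch the tasks; the event loop, and the timing underneath it, decides the order they finish in.

```python
import asyncio
import random

async def worker(name: str) -> None:
    await asyncio.sleep(random.random())   # the scheduler decides when we resume
    print(name, "finished")

async def main() -> None:
    # You choose what runs. The event loop chooses the interleaving.
    await asyncio.gather(*(worker(f"task-{i}") for i in range(3)))

asyncio.run(main())   # completion order varies from run to run
```

You wrote every line of that program, and you still can't tell me what order it prints in. That's steering.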
AI doesn't remove control. It removes the comforting fiction of control.
The Pattern That Matters
Here's what I've noticed about every abstraction shift:
Every transition moves responsibility upward. Less responsibility for how. More responsibility for what. And even more responsibility for why.
The worst developers at each transition were the ones who clung to the old layer, mistook familiarity for mastery, and confused control with competence.
The best ones understood the system beneath the abstraction, operated confidently above it, and knew when to drop down a layer.
That's still the game.
The question isn't whether AI is taking your control. The control was always an illusion. The question is whether you'll be the developer who climbs to the next layer, or the one standing in the back of the room insisting this isn't real programming.
Those two folks have been wrong at every transition for fifty years.
They're going to be wrong about this one too.