How Estimating Changes in an AI World

The other day I got an email in reply to one of my recent articles. The question was simple: given how much AI is changing everything, how do I suggest we think about estimating?

Estimating, particularly on software projects, has always felt like 50% art, 50% experience, and 50% magic. See what I did there? 🙂

My answer to them, which I'm elaborating on in today's post, was simply this: estimating was always about risk more than anything else. But that doesn't mean people were treating it that way. People were adding padding to mitigate risk, rather than figuring out what would make an estimate more or less accurate.

In today's AI-infused world, though, some things aren't worth estimating at all. But I'm jumping the gun… Let's back up and walk through this in a more structured way. Of course, you can skip all my thoughts and just go down to the interactive calculator that I used AI to help me build – just for you.

The Old Way: T-Shirts and Trust Falls

If you've ever been in a room where someone said “I'd call that a medium,” you know the drill. T-shirt sizing. Story points. Planning poker. Velocity charts. We built an entire vocabulary around the idea that we could predict how long software would take to build.

But here's the thing nobody said out loud: none of it was actually about prediction. It was about managing the fact that we couldn't predict. The padding wasn't a bug in the process — it was the whole point. You'd estimate 3 weeks, tell the client 5, and hope you landed somewhere in between.

And it kind of worked! Not because the estimates were good, but because the buffer absorbed the surprises. The edge case nobody thought of. The API that didn't behave like the docs said it would. The meeting where someone changed the requirements halfway through.

The real cost wasn't in the padding itself. It was that everyone treated all work like it carried the same amount of risk. A simple login page got the same estimation process as a complex data pipeline. Everything went through the same sizing ritual, got the same multiplier, and came out the other end with roughly the same level of uncertainty baked in.

What Was Actually Going On

When you strip away the process theater, estimating was really answering one question: how much do we not know?

The coding itself was rarely the mystery. A decent developer could always tell you roughly how long it takes to wire up an endpoint or build a form. The uncertainty lived in everything around the code — ambiguous requirements, unfamiliar systems, integration surprises, and the ever-present “oh wait, that's not what I meant” from clients.

In other words, the risk wasn't in the work. It was in the understanding. And no amount of story points could fix that.

Enter AI: Faster Hands, Same Brain

Here's where it gets interesting. AI has dramatically changed how fast certain work gets done. Things that used to take a developer two days — boilerplate code, standard integrations, test generation, data transformations — can now happen in minutes.

But AI didn't change how fast you understand things. It didn't make ambiguous requirements clearer. It didn't resolve the debate about whether the feature should work this way or that way. It didn't eliminate the “we forgot about this edge case” moment.

So what you end up with is a split. Some work got radically faster. Other work stayed exactly as hard as it always was. And the old estimation model — which treated everything the same — completely breaks down when you have that kind of gap.

A Better Way to Think About It

Instead of sizing work by how big it feels, size it by how well you understand it. I think of it as four buckets:

Clear Pattern — you can describe it to AI in a single prompt and get back exactly what you need. This work barely needs an estimate anymore. Just do it.

Familiar Territory — you know the domain, but there are some unknowns in the details. AI speeds this up a lot, but you'll still hit a few walls.

Exploratory — you're combining known pieces in new ways. AI helps with parts of it, but you're doing real problem-solving.

Unknown Territory — you're not even sure what to build yet. The requirements are fuzzy, the architecture is unclear, and AI can't save you from not knowing what you want.

The magic is in classifying first, then estimating. And honestly? For a solo operator working with AI, anything in that first bucket isn't worth estimating at all. The cost of being wrong about a 20-minute task is basically zero. Reserve your estimation energy for the stuff that's genuinely uncertain.
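To make the classify-first idea concrete, here's a minimal sketch in Python. The bucket names come from the list above, but the speedup multipliers and the sample tasks are illustrative assumptions of mine, not numbers from any study or from the calculator itself.

```python
# Illustrative AI speedup per uncertainty bucket (my assumptions, not data).
AI_SPEEDUP = {
    "clear_pattern": 0.1,       # near-trivial with AI; barely worth estimating
    "familiar_territory": 0.5,  # AI helps a lot, a few walls remain
    "exploratory": 0.8,         # AI assists, but real problem-solving dominates
    "unknown_territory": 1.0,   # understanding is the bottleneck; no discount
}

def ai_adjusted_estimate(tasks):
    """tasks: list of (name, traditional_hours, bucket) tuples."""
    return sum(hours * AI_SPEEDUP[bucket] for _, hours, bucket in tasks)

# Hypothetical project breakdown.
tasks = [
    ("login form", 8, "clear_pattern"),
    ("payments integration", 16, "familiar_territory"),
    ("recommendation engine", 40, "exploratory"),
    ("new product feature", 24, "unknown_territory"),
]

traditional = sum(hours for _, hours, _ in tasks)  # 88 hours
adjusted = ai_adjusted_estimate(tasks)             # 0.8 + 8 + 32 + 24 ≈ 64.8 hours
```

The point of the sketch isn't the exact multipliers — it's that the gap between the two totals lives almost entirely in the first two buckets, which is why treating every task the same no longer works.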

Try It Yourself

I built a calculator that puts this framework into practice. Plug in your tasks, classify each one by uncertainty type, and it'll show you the difference between a traditional estimate and an AI-adjusted one. It even gives you a contextual insight based on how your project breaks down.
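For a flavor of what "contextual insight" could mean, here's a hypothetical sketch of that step: look at where the hours actually sit and say something useful about it. The thresholds and wording are my assumptions, not the calculator's actual logic.

```python
def project_insight(tasks):
    """tasks: list of (hours, bucket) pairs; returns a one-line insight."""
    total = sum(hours for hours, _ in tasks)
    share = lambda bucket: sum(h for h, b in tasks if b == bucket) / total
    if share("clear_pattern") > 0.5:
        return "Mostly pattern work: skip formal estimates and just build."
    if share("unknown_territory") > 0.3:
        return "Heavy unknowns: spend time clarifying before estimating."
    return "Mixed bag: estimate the uncertain buckets, fast-track the rest."

# Example: a project dominated by fuzzy, unclarified work.
print(project_insight([(8, "clear_pattern"),
                       (30, "unknown_territory"),
                       (10, "familiar_territory")]))
```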

Try it out, and let me know if that helps you.