# The 4 scans I run before I'm done with any AI-assisted project
*Published: 2026-05-14*
*Tags: ai, product-work*
*Source: https://chrislema.com/four-scans-ai-assisted-projects*
---

I told you the other day in my [post about using Lovable for a cardboard model prototype](https://chrislema.com/lovable-cardboard-model-problem) that I'd be explaining more about how that build went.

But I got an email yesterday, and it was a question about the "right" prompts. So I promised an answer and I'll circle back to that other post in a couple of days.

Today I'm going to start where my email started, but take you deeper into the way I think about things.

There are no magic words. There are no special phrases. But there are key processes I regularly use.

Here's what I mean.

## Why the first pass isn't the finish line

When you build with AI, your first pass looks done. The functions are written. The endpoints work. The tests you asked for pass. You ran it locally, hit the happy paths, and everything responded the way it should.

And right there is the temptation. Commit. Deploy. Move on to the next feature. I've felt that pull on every project I've shipped, and I had to learn the hard way that the first pass is rarely the finish line.

I've been building a headless CRM, agentic software that has to handle things like contact updates, scheduled tasks, and concurrent agent calls. And the more I built, the more I noticed something. The first pass tells you almost nothing about whether the system is actually ready. The happy paths are fine. The problems are buried somewhere else, and they don't surface until the system is under load, or running for a while, or doing the same thing twice.

So I started running four scans after every first pass. As a checklist, not a prompt template. The same four questions every time, regardless of what I just built.

Each one finds between one and five issues. That's not a big number. But here's the thing. If I wasn't actively looking for these, I could go months without discovering them. Things would work. And then suddenly they'd break, and I'd be standing in the wreckage, dealing with symptoms instead of causes.

That's the leverage right there. The pre-emptive look matters more than any prompt I could give you.

## The 4 scans

**Scan 1: Race conditions.**

I ask the AI to scan the project for code that could be open to race conditions. Anywhere two operations could run at once and step on each other. Anywhere shared state could be read and written in an order that produces a different result than intended.

This one almost always finds something in agentic code. Because agents, by definition, do things in parallel. They take actions, they make calls, they update state. And the moment two of them touch the same record at the same time, you have a race condition that's easy to miss because your test only fired one agent.
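
Here's a minimal sketch of the pattern this scan tends to flag. Everything in it is hypothetical, the names and the `Db` interface are made up for illustration, not pulled from my CRM, but it's the shape of the problem: a read-modify-write on a shared contact record with nothing stopping two agents from doing it at the same moment.

```ts
// Hypothetical sketch of a race condition in agentic code. None of these
// names come from the real project.

interface Contact {
  id: string;
  tags: string[];
  version: number; // bumped by the store on every successful write
}

interface Db {
  getContact(id: string): Promise<Contact>;
  saveContact(contact: Contact): Promise<void>;
  // Writes only if the stored version still matches; returns false otherwise.
  updateContactIfVersion(id: string, version: number, patch: Partial<Contact>): Promise<boolean>;
}

// Race-prone: two agents call this at the same moment, both read the same
// tags array, both write back their own copy, and the last write wins.
async function addTagUnsafe(db: Db, contactId: string, tag: string): Promise<void> {
  const contact = await db.getContact(contactId); // read
  contact.tags.push(tag);                         // modify
  await db.saveContact(contact);                  // write, clobbering any concurrent update
}

// One way to close the gap: optimistic locking with a bounded retry.
async function addTagSafe(db: Db, contactId: string, tag: string): Promise<void> {
  for (let attempt = 0; attempt < 3; attempt++) {
    const contact = await db.getContact(contactId);
    const ok = await db.updateContactIfVersion(contactId, contact.version, {
      tags: [...contact.tags, tag],
    });
    if (ok) return;
  }
  throw new Error(`Could not add tag to ${contactId} after 3 attempts`);
}
```

Optimistic locking is one fix; a transaction or a row-level lock gets you to the same place. The point of the scan is just to surface the spot where two writers can collide.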

**Scan 2: Concurrency and scale.**

I ask the AI to scan for places where high concurrency traffic could be an issue. Not "will this work for one user" but "what happens when fifty agents hit this at the same time, and a hundred, and a thousand."

Most of what comes back is honest. Places where a synchronous call should be queued. Places where a database connection isn't being released. Places where a loop will silently get slower as data grows. The first pass code is rarely written for scale. It's written to work, and that's a different thing.
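
The connection-leak version of that looks something like this. Again, a hypothetical sketch, assuming a Node backend with a pooled Postgres client and a made-up `tasks` table; the specifics don't matter, the shape does.

```ts
// Hypothetical sketch of a connection leak, assuming node-postgres.
import { Pool } from "pg";

const pool = new Pool({ max: 10 }); // ten connections for the whole process

// Leaky: if the query throws, release() never runs. Under load the pool
// drains, and every later request hangs waiting for a free connection.
async function countOpenTasksLeaky(agentId: string): Promise<number> {
  const client = await pool.connect();
  const res = await client.query(
    "SELECT count(*) FROM tasks WHERE agent_id = $1 AND done = false",
    [agentId]
  );
  client.release();
  return Number(res.rows[0].count);
}

// Fixed: release in finally, so the connection goes back to the pool
// whether the query succeeded or failed.
async function countOpenTasks(agentId: string): Promise<number> {
  const client = await pool.connect();
  try {
    const res = await client.query(
      "SELECT count(*) FROM tasks WHERE agent_id = $1 AND done = false",
      [agentId]
    );
    return Number(res.rows[0].count);
  } finally {
    client.release();
  }
}
```

One user never notices the difference. Fifty concurrent agents notice it within minutes.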

**Scan 3: Idempotency.**

I ask the AI to scan for functions that, if run multiple times, will create duplicate records. That's the opposite of idempotent: an idempotent function can run once or five times and leave the system in the same state.

This one's easy to underweight. In agentic systems, retries happen. Webhooks fire twice. Networks drop and reconnect. If your "create contact" function isn't idempotent, every retry creates a phantom record, and you're debugging data hygiene three weeks later wondering how the same person ended up in your system four times.
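
A hypothetical before-and-after, assuming Postgres and a made-up `contacts` table. The naive version assumes nothing about the schema and happily inserts the same person twice; the fix is an upsert keyed on the email column, which needs a unique constraint behind it.

```ts
// Hypothetical sketch of the "create contact" problem, assuming node-postgres
// and a contacts table. Not the real CRM code.
import { Pool } from "pg";

const pool = new Pool();

// Not idempotent: a webhook firing twice or an agent retrying calls this
// twice, and the same person shows up in the table twice.
async function createContactNaive(email: string, name: string): Promise<void> {
  await pool.query("INSERT INTO contacts (email, name) VALUES ($1, $2)", [email, name]);
}

// Idempotent: keyed on something stable (requires a unique constraint on
// email). Run it once or five times, you end up with exactly one record.
async function createContact(email: string, name: string): Promise<void> {
  await pool.query(
    `INSERT INTO contacts (email, name) VALUES ($1, $2)
     ON CONFLICT (email) DO UPDATE SET name = EXCLUDED.name`,
    [email, name]
  );
}
```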

**Scan 4: Dead code.**

I ask the AI to scan for code that's never called. Functions that were written and then abandoned. Routes that were defined and then forgotten. Utilities that got refactored out but the original still lives in the codebase.

Some of it needs to be wired in. Some of it needs to be removed. Either way, dead code is a tax. It confuses the next person who reads the codebase, including future you, and it lets bugs hide in places nobody is paying attention to.
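
What the scan turns up is usually something boring like this. A made-up example, but it's the pattern: a helper that got replaced during a refactor and never deleted, still exported, still compiling, still looking real to the next person who reads the file.

```ts
// Hypothetical example of dead code left behind after a refactor.

// Old helper: nothing imports this anymore, but it still compiles and still
// looks like part of the system.
export function formatPhone(raw: string): string {
  return raw.replace(/\D/g, "");
}

// The replacement every caller actually uses now.
export function normalizePhone(raw: string, countryCode = "1"): string {
  const digits = raw.replace(/\D/g, "");
  return digits.startsWith(countryCode) ? digits : countryCode + digits;
}
```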

## Why this is the actual leverage

None of these scans is clever. A senior engineer reviewing a PR would ask the same four questions. The leverage is in having the four questions ready before you need them, so you ask them every time instead of only remembering them on the projects where something has already broken.

That's the move I make on my own projects, and it's the gist of what I wrote back. I didn't send him magic words. I sent him four questions worth holding onto. Your judgment about *what to look for* is the whole game. The prompt is just the delivery mechanism.

So if you're building with AI right now, especially anything agentic, here's what I'd actually do. Finish your first pass. Then before you commit, run these four scans. Treat them like the spell-check of architecture. They take half an hour. They protect you from months of dealing with symptoms.

The real leverage lives in the habit of looking before things break.
