I spent the early part of 2025 watching Claude Code make the same frustrating decisions over and over. Wrong architecture choices. Scattered logic. Try-catch blocks everywhere. The code worked, technically. But it wasn't how I would have built it.
So I did what everyone does. I wrote better prompts. More detailed instructions. Clearer specifications. And the output improved, slightly, before drifting right back to the same patterns I didn't want.
Then I realized the problem wasn't the AI. It was me.
The Prompt Obsession
Here's what most people believe about AI-assisted development: if the output isn't great, you need better prompts. More context. Clearer instructions. Maybe a different model.
And there's some truth to that. Vague prompts produce vague results.
But here's what I've learned after building multiple SaaS applications with AI assistance: prompt engineering is table stakes. It's not the differentiator. The people getting genuinely useful AI output aren't writing longer prompts. They're sharing something else entirely.
They're sharing their decision-making.
What You're Actually Withholding
Think about how you actually work. When you make a technical decision, you're drawing on years of experience, past failures, strong opinions, and hard-won preferences. You know that authentication belongs in root middleware because you've debugged the nightmare of scattered auth checks. You know that business logic should throw errors rather than catch them because you've been the person at 2 AM trying to figure out why failures were silently swallowed.
Now think about what you give an AI. “Build me a user authentication system.” Or if you're being thorough: “Build me a user authentication system using JWT tokens with refresh token rotation.”
You're asking for a solution. You're not sharing the framework you'd use to evaluate that solution.
The AI doesn't know that you believe proxies should be thin. It doesn't know you design for the 2 AM production incident. It doesn't know your complexity budget or your error handling philosophy. So it makes reasonable default decisions that aren't your decisions.
And then you spend hours refactoring.
The Decision Framework Shift
Here's what changed for me. I stopped trying to describe what I wanted built. I started describing how I make decisions.
I created a decision guide, a document that articulates my actual thinking:
Where logic belongs:
| Logic Type | Where It Lives |
|---|---|
| Authentication | Root middleware, verify once |
| Authorization | API middleware, check at boundaries |
| Business logic | Workers, throws errors, doesn't catch them |
| Recovery logic | Separate concern, cron workers, not inline |
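To make the table concrete, here's a minimal sketch of the first two rows. It assumes a Hono-style worker app, and verifyJwt is a hypothetical helper; none of this is lifted from my codebase, it just illustrates the placement:

```typescript
import { Hono } from "hono";

// Hypothetical helper, assumed to exist elsewhere: verifies a JWT and returns its claims.
declare function verifyJwt(token: string): Promise<{ sub: string; role: string } | null>;

const app = new Hono<{ Variables: { userId: string; role: string } }>();

// Authentication: root middleware, verify once for every request.
app.use("*", async (c, next) => {
  const token = c.req.header("Authorization")?.replace("Bearer ", "");
  if (!token) return c.json({ error: "unauthenticated" }, 401);

  const claims = await verifyJwt(token);
  if (!claims) return c.json({ error: "invalid token" }, 401);

  c.set("userId", claims.sub);
  c.set("role", claims.role);
  await next();
});

// Authorization: API middleware, checked at the boundary of the routes that need it.
app.use("/api/admin/*", async (c, next) => {
  if (c.get("role") !== "admin") return c.json({ error: "forbidden" }, 403);
  await next();
});

export default app;
```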
The thin proxy rule. Each proxy does exactly four things: extract request data, forward to feature worker, log usage, return response. If your proxy is doing validation, business decisions, or transformations, it's too thick.
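As a sketch, a proxy that follows the rule looks something like this; summarizeWorker and logUsage are hypothetical stand-ins for a feature worker and a usage logger, not real names from my project:

```typescript
import { Hono } from "hono";

// Hypothetical stand-ins, assumed to exist elsewhere in the codebase.
declare const summarizeWorker: {
  run(input: { userId: string }): Promise<{ summary: string; tokens: number }>;
};
declare function logUsage(entry: { userId: string; feature: string; tokens: number }): Promise<void>;

const app = new Hono<{ Variables: { userId: string } }>();

// A thin proxy: extract, forward, log, return. Nothing else.
app.post("/api/summarize", async (c) => {
  // 1. Extract request data (userId was set by the root auth middleware).
  const input = await c.req.json();
  const userId = c.get("userId");

  // 2. Forward to the feature worker that owns the business logic.
  const result = await summarizeWorker.run({ userId, ...input });

  // 3. Log usage.
  await logUsage({ userId, feature: "summarize", tokens: result.tokens });

  // 4. Return the response.
  return c.json(result);
});
```

No validation, no business decisions, no transformations. If any of those show up here, the proxy is too thick.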
Error handling philosophy. Business logic throws, it doesn't catch. Boundaries catch and handle appropriately. Recovery is a separate concern.
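Here's a short sketch of that split, with hypothetical helpers (getBalance, debitBalance) standing in for real data access:

```typescript
import { Hono } from "hono";

// Hypothetical data-access helpers, assumed to exist elsewhere.
declare function getBalance(userId: string): Promise<number>;
declare function debitBalance(userId: string, amount: number): Promise<void>;

// Business logic (lives in a worker): throws, never catches.
class QuotaExceededError extends Error {
  constructor(readonly userId: string) {
    super(`Quota exceeded for user ${userId}`);
    this.name = "QuotaExceededError";
  }
}

async function chargeCredits(userId: string, amount: number): Promise<void> {
  const balance = await getBalance(userId);
  if (balance < amount) throw new QuotaExceededError(userId); // fail loudly
  await debitBalance(userId, amount);
}

// Boundary (API layer): catches and maps the failure to a response.
// Recovering failed debits is a separate cron worker's job, not an inline try/catch.
const app = new Hono<{ Variables: { userId: string } }>();

app.post("/api/generate", async (c) => {
  try {
    await chargeCredits(c.get("userId"), 1);
    return c.json({ ok: true });
  } catch (err) {
    if (err instanceof QuotaExceededError) {
      return c.json({ error: "quota exceeded" }, 402);
    }
    throw err; // unknown failures stay visible instead of being silently swallowed
  }
});
```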
Complexity budget. Functions under 50 lines. Classes under 200 lines. Files under 500 lines. If you can't explain a component in one sentence, it's too large.
The ultimate test. Design for the 2 AM production incident where you're half-asleep, users are angry, and you need to understand what's wrong and fix it quickly.
This isn't a prompt. It's a decision framework. And when I share it with Claude Code, something shifts. The AI isn't just generating code. It's generating code that reflects my judgment about how code should be structured.
Want to see an example of this decision-making in markdown?
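Here's a condensed sketch of what that file can look like. The wording is illustrative, distilled from the rules above rather than copied verbatim from my guide:

```markdown
# Decision Guide

## Where logic belongs
- Authentication: root middleware, verify once
- Authorization: API middleware, check at boundaries
- Business logic: workers; throws errors, doesn't catch them
- Recovery logic: separate concern, cron workers, not inline

## Thin proxy rule
A proxy does exactly four things: extract request data, forward to the
feature worker, log usage, return the response. Anything more is too thick.

## Error handling
Business logic throws. Boundaries catch and handle. Recovery is separate.

## Complexity budget
Functions under 50 lines. Classes under 200 lines. Files under 500 lines.
If a component can't be explained in one sentence, it's too large.

## The 2 AM test
Design for the half-asleep engineer during a production incident:
clear error messages, small components, automatic recovery.
```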
Why This Works
The difference between giving AI a prompt and giving AI a decision framework is the difference between telling someone what to build versus teaching them how you think.
Prompts are transactional. They describe an outcome. The AI has to guess at thousands of micro-decisions along the way.
Decision frameworks are philosophical. They describe a way of reasoning. The AI can apply that reasoning to whatever it encounters.
When my decision guide says “design for the 2 AM production incident where you're half-asleep and users are angry,” that shapes every choice: error messages get clearer, components get smaller, recovery becomes automatic. Not because I specified each of those outcomes, but because I shared the principle that generates them.
What Shifted
After I created my decision framework, three things happened.
First, the code Claude generates now matches how I would have structured it. Not perfectly, but close enough that refactoring dropped significantly. The AI knows that proxies should extract, forward, log, and return, nothing more. It knows that recovery is a separate concern from error handling.
Second, I found myself thinking more clearly about my own preferences. Writing down “functions should be under 50 lines” forced me to actually commit to that. Articulating when to use Claude versus a faster model made me examine my own reasoning. The document became as valuable for me as for the AI.
Third, I can hand this framework to a new team member or collaborator and they immediately understand how I think about these problems. It's not just AI documentation. It's decision documentation.
The Challenge
Most people using AI are focused on the wrong lever. They're optimizing prompts when they should be articulating decisions.
Here's what I'd challenge you to do:
Learn to think about your own decision-making. Not what you decide, but how you decide. What principles do you actually operate by? What rules have you internalized that you've never written down?
Learn to articulate it. Get specific. “Clean code” isn't a decision framework. “Functions under 50 lines, files under 500 lines, one responsibility per module” is a decision framework.
Share it with your AI in a way it can use. This might be a markdown file. It might be a structured document. But it should be something the AI can reference consistently, not something you re-explain in every prompt.
The magic happens when the AI isn't just executing your instructions but reasoning the way you reason. When it makes the same decision you would have made, not because you specified it, but because you taught it how you think.
Decision-making is the one thing you can't fully outsource. But you can absolutely share it.