I’ve spent twenty-five years building software for people. Interfaces they’d click through. Dashboards they’d stare at. Onboarding flows designed to hold their hand from signup to that first moment of value. I know how to build for humans. I’ve done it at scale.
And I’ve mostly stopped.
Not because I’ve lost interest in people — the opposite, actually. I stopped building for humans because I realized the fastest way to help them is to build for something else entirely.
I’m building products for AI.
The Shift Nobody’s Talking About
Here’s what I mean. I recently built a product called YourVoiceProfile. You answer thirteen questions, share a few writing samples, and it generates a detailed document that captures how you write — your rhythm, your vocabulary patterns, your tone shifts across platforms, your verbal tics and tendencies.
Now, who uses that document? Not you. You already know how you sound. The document exists so that Claude, or ChatGPT, or whatever AI tool you’re working with can sound like you when it writes on your behalf.
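To make that concrete, here is a sketch of what such an artifact might look like and how an agent could consume it. Everything here is my illustration of the pattern; the field names and structure are assumptions, not the actual YourVoiceProfile schema.

```python
# Hypothetical voice-profile artifact: structured so an LLM can
# consume it directly, not so a human would enjoy reading it.
voice_profile = {
    "rhythm": "short declarative sentences; occasional one-line paragraphs",
    "vocabulary": ["plain verbs", "no corporate jargon", "contractions"],
    "tone_by_platform": {
        "linkedin": "direct, lightly provocative",
        "newsletter": "warmer, more personal",
    },
    "verbal_tics": ["starts sections with a question", "answers own questions"],
}

def as_system_prompt(profile: dict) -> str:
    """Flatten the artifact into instructions an agent can follow."""
    lines = ["Write in the author's voice:"]
    lines.append(f"- Rhythm: {profile['rhythm']}")
    lines.append(f"- Vocabulary: {', '.join(profile['vocabulary'])}")
    for platform, tone in profile["tone_by_platform"].items():
        lines.append(f"- Tone on {platform}: {tone}")
    for tic in profile["verbal_tics"]:
        lines.append(f"- Tendency: {tic}")
    return "\n".join(lines)

print(as_system_prompt(voice_profile))
```

Notice that the artifact is pure structure. The human never reads it; the flattening function turns it into something the agent acts on.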
The customer is human. The user is AI.
That might sound like a semantic trick, but sit with it for a minute. It changes what you build, how you build it, and what “quality” even means.
I have a second product, this one built around Audience Segments. Same idea. You work through a structured process and end up with a detailed artifact that describes who you’re trying to reach. Their pain points, their language, their objections, what they care about on a Tuesday afternoon versus a Sunday night. Rich, specific, actionable.
But actionable for whom? Not for you. You already know your audience — or you should. That artifact exists so an AI agent can make targeting and messaging decisions on your behalf without you having to re-explain your audience every single time.
And now I’m building a third piece: a Content Agent. This one is a skill graph — a structured map of frameworks, methods, and creative processes that an AI agent can traverse when it’s creating content. It pulls from your voice profile so the output sounds like you. It references your audience segments so the content is aimed at the right people. And it follows the skill graph so the work isn’t just competent but methodical.
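A skill graph of this kind can be sketched as a small directed structure: nodes are methods, edges say which method feeds which, and the agent walks it in dependency order instead of improvising. The node names and "needs" field below are my illustration, not the Content Agent's actual design.

```python
# Hypothetical skill graph: each node is a creative method; "needs"
# lists the skills that must run before it. An agent traverses the
# graph so the work is methodical, not improvised.
skill_graph = {
    "pick_angle": {"needs": [], "method": "choose one claim the audience disputes"},
    "draft_hook": {"needs": ["pick_angle"], "method": "open with the claim, no wind-up"},
    "build_body": {"needs": ["draft_hook"], "method": "one example per assertion"},
    "close_loop": {"needs": ["build_body"], "method": "end by answering the opening claim"},
}

def traversal_order(graph: dict) -> list[str]:
    """Topological order: a skill runs only after the skills it needs."""
    done, order = set(), []
    while len(order) < len(graph):
        for name, node in graph.items():
            if name not in done and all(dep in done for dep in node["needs"]):
                done.add(name)
                order.append(name)
    return order

print(traversal_order(skill_graph))
```

At each step in that order, the agent would pull in the voice profile and audience segments, which is what makes the three artifacts composable rather than three separate tools.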
Three products. None of them are designed to be “used” by a human in the traditional sense. No one opens their voice profile for a pleasant read. No one studies their audience segments for fun. These are artifacts built so an AI agent can do better work on your behalf.
What This Changes About Product Design
For two decades, software product design has revolved around one question: can the human figure out how to use this?
We obsessed over it. We A/B tested button colors. We agonized over whether the settings icon should be a gear or three dots. We wrote entire books about reducing friction in user interfaces. And that made sense, because the human was the operator. If they couldn’t navigate the software, nothing happened.
But when the operator is an AI agent, the design question flips. It becomes: can the agent work effectively with this artifact?
A well-designed voice profile isn’t one that reads well — it’s one that an LLM can faithfully reproduce in output. A well-designed audience segment document isn’t nicely formatted — it’s structured so an agent can make nuanced targeting decisions without ambiguity. The quality bar shifts from human-readable clarity to machine-consumable fidelity.
Think about what this means for how you evaluate your own work. You used to ask: is this intuitive? Is the learning curve gentle? Does the user feel delighted? Those are fine questions when a human is operating the software. When an AI agent is the operator, you ask different things: is this artifact complete enough that the agent won’t have to guess? Is it structured so the agent can make decisions without ambiguity? Are the boundaries between components clean enough that the agent can compose them without confusion?
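One way to operationalize "complete enough that the agent won't have to guess" is a plain completeness check run before the artifact ever reaches the agent. The required fields here are illustrative assumptions, not a real schema.

```python
# Hypothetical pre-flight check: verify every field the agent will
# need is present and non-empty, so the agent never has to guess.
REQUIRED_FIELDS = ["rhythm", "vocabulary", "tone_by_platform", "verbal_tics"]

def missing_fields(artifact: dict) -> list[str]:
    """Return the fields an agent would otherwise have to guess at."""
    return [f for f in REQUIRED_FIELDS if not artifact.get(f)]

incomplete = {"rhythm": "short sentences", "vocabulary": []}
print(missing_fields(incomplete))  # ['vocabulary', 'tone_by_platform', 'verbal_tics']
```

A human reader would shrug at a missing field and fill the gap from context. An agent fills it with a guess, which is exactly what a machine-consumable artifact exists to prevent.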
The Part Most People Will Struggle With
Here’s where this gets uncomfortable. If you want to build products this way, you need a skill that most of us — myself included — have had to fight to develop.
Abstraction.
When I built YourVoiceProfile, I couldn’t think about it as “a tool someone uses to describe their voice.” I had to decompose “voice” into its component properties — linguistic patterns, tonal ranges, vocabulary tendencies, platform-specific adaptations, rhythm and cadence markers — and then structure those properties in a way a machine could consume and act on.
That’s an abstraction exercise. You’re taking something intuitive and felt (how a person sounds when they write) and turning it into something structured and operational (a set of attributes an agent can apply).
Most people can’t do that yet. Not because they’re not smart enough — they are. But because they’ve spent their careers thinking in interfaces. Screens. Buttons. User flows. They think about what a person will see and click. Thinking about what an agent needs to receive and act on is a fundamentally different kind of exercise.
It’s like the difference between being a great cook and being someone who can write a recipe that a thousand different cooks could follow. The cook relies on instinct, on tasting as they go, on years of feel. The recipe writer has to take all of that and decompose it into ingredients, ratios, temperatures, and timing. Both require mastery. But one requires abstraction — the ability to separate the knowledge from the knower so it can travel.
And the precursor to abstraction is systems thinking. Before you can decompose “voice” into an artifact, you have to see the system. Content goes somewhere, for someone, in a certain voice, following certain structures. You have to see those relationships — see them as separate, composable parts — before you can extract any single part into an artifact that an agent can use independently.
This is where I’d invite you to look at your own work differently. Whatever you do — whether it’s coaching, consulting, building products, running a team — there’s a system underneath it. You might not have named the parts yet. You might be running the whole thing on instinct and experience. That’s fine. But if you want to build something an AI agent can operate inside, you’ll need to see the system clearly enough to pull it apart.
I decomposed content creation into three abstractions: voice (how it sounds), audience (who it’s for), and method (how to combine them). That decomposition is the hard part. Once you have it, the code is easy. But most people skip straight to the code and wonder why their AI integrations feel shallow.
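The composition itself really is only a few lines. Everything below, from the field names to the brief's wording, is my sketch of the pattern, not the actual products' code; the point is how little glue remains once the decomposition is done.

```python
# Hypothetical composition: three artifacts combine into one brief
# the agent works from. The decomposition was the hard part; this
# glue is the easy part.
voice = {"tone": "direct, conversational", "rhythm": "short sentences"}
audience = {"who": "solo consultants", "pain": "inconsistent lead flow"}
method = ["pick one pain point", "open with it", "show one fix", "end with a question"]

def content_brief(voice: dict, audience: dict, method: list[str]) -> str:
    """Compose voice, audience, and method into a single agent brief."""
    steps = "\n".join(f"{i}. {step}" for i, step in enumerate(method, 1))
    return (
        f"Voice: {voice['tone']}; {voice['rhythm']}.\n"
        f"Audience: {audience['who']} struggling with {audience['pain']}.\n"
        f"Method:\n{steps}"
    )

print(content_brief(voice, audience, method))
```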
Humans Never Wanted Software
Here’s the thing nobody in our industry likes to admit: humans never wanted software. Not really. They wanted outcomes. They wanted the report written, the audience reached, the curriculum delivered, the knowledge captured. Software was just the only available path to those outcomes.
It isn’t anymore.
Now the AI agent is the path. And if that’s true, then the products people will pay for aren’t tools they’ll use — they’re equipment that makes their AI agent more capable. You’re not selling software to a person. You’re selling a better-equipped AI to a person.
That’s what has me most excited right now. And I think within a few years, it’ll be where a lot of us spend our time — whether we’ve made the mental shift yet or not.
I’m not saying every product will work this way. There will always be software that humans operate directly. But there’s a growing category of products — and it’s larger than most people realize — where the right move is to stop designing for the human and start designing for the agent that serves the human.
The question is whether you can think in systems, decompose those systems into abstractions, and build artifacts that make an agent excellent at serving the human who hired it.
That’s not a coding problem. It’s a thinking problem. And the ones who figure it out first will build the next generation of products while everyone else is still redesigning their onboarding flows.