The Best Thing About AI Isn’t Productivity. It’s Permission.


When I was nine years old, my dad sold stock to buy our first computer. This was 1979, and it was a huge purchase for a device that let him put our phone in a dock, tell the neighbors not to call, and work from home in the evenings instead of driving back to the office, where the rooms were huge and filled with gigantic computers.

My brother and I never got an Atari. Instead, we got Byte magazine and others like it. They had centerfolds. Not naked pictures: code. Pages and pages of it. And if you typed every line correctly, you could have your own Oregon Trail on your own floppy, running on your own PC.

We learned WordStar. Then MultiMate. Then MS Word. We wrote our school drafts on that computer while other kids were still using typewriters.

When I say “others,” I don't just mean people in the neighborhood. Sure, they were using typewriters too. But I mean my mom. She wouldn't come near the PC. For years, like five to ten years, she would bring her typewriter to the kitchen table and type on it, never going near the computer.

Why? What was her hesitancy?

She was afraid that if she typed something wrong into this expensive hardware, she might break it. So she stayed clear of it, no matter how much we told her it would be fine and that she couldn't break it.

I think about my mom a lot these days.

Because AI is changing things at a pace that makes the PC revolution look leisurely. And most people are sitting in the same spot my mom was, worried that they might break things. Maybe they're not scared of breaking their LLM. But they've sent me emails asking whether it's a good or bad idea to give their credit card to an AI agent. They worry about cost, about privacy, about looking foolish. They ask me if they should ChatGPT their tax returns.

Here's what I know. The fear is real. But it's pointed at the wrong thing.

The risk isn't that AI will break something. The risk is that fear keeps you from trying something.

I Found What I Wanted. I Just Couldn't Use It.

A few weeks ago, I was building a multi-agent system. If you're not technical, just think of it as several AI workers that collaborate on a task, each handling a different piece. I had a working demo. It ran on Cloudflare Workers, which is my home base for building things.

But then I stumbled across a platform called Jido (by Mikee Hostetler). It was built for exactly the kind of multi-agent work I was doing, but with a fundamentally different architecture. Where my system processed things in sequence, one step after the next, Jido could run agents truly in parallel. Multiple agents observing, reacting, and collaborating at the same time. Not faking concurrency. Actually doing it.
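The architectural difference is easy to sketch in a toy example. This is a sketch under assumptions: the agent names and helper functions below are illustrative inventions, not Jido's API or my actual system. It only shows the shape of the contrast, sequential awaits versus fan-out.

```typescript
// Hypothetical agents: each is an async worker that takes an input and
// produces a result. These names are illustrative, not from any framework.
type Agent = (input: string) => Promise<string>;

const researcher: Agent = async (input) => `notes on ${input}`;
const critic: Agent = async (input) => `critique of ${input}`;
const summarizer: Agent = async (input) => `summary of ${input}`;

// Sequential pipeline: each agent waits for the previous one to finish,
// one step after the next.
async function runSequential(input: string): Promise<string[]> {
  const results: string[] = [];
  for (const agent of [researcher, critic, summarizer]) {
    results.push(await agent(input));
  }
  return results;
}

// Fan-out: all agents observe the same input at once, and we collect
// their results as they settle.
async function runParallel(input: string): Promise<string[]> {
  return Promise.all([researcher, critic, summarizer].map((a) => a(input)));
}
```

Worth noting: `Promise.all` on a single event loop is still cooperative concurrency. The BEAM's pitch is that each agent runs as its own preemptively scheduled process, which is the "not faking concurrency" part.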

There was just one problem.

Jido is written in Elixir, running on something called the BEAM virtual machine. I don't know Elixir. I've never written a line of it. I don't know what the BEAM is or why it matters. I couldn't explain an OTP supervisor tree if you paid me.

In the old world, that's where the story ends. You bookmark it. You tell yourself you'll learn Elixir someday. You add it to the ever-growing list of things you'll get to when you have a free month, which is never. And you go back to what you know.

That's exactly what my mom did with the PC. She knew the typewriter. It worked. Why risk the unknown?

But This Time, I Didn't Go Back

Instead, I pointed an LLM at Jido's documentation. All of it. The framework docs, the architecture guides, the code examples.

Then I pointed it at my existing code. The working system I'd already built, written in a language and framework I understood.

And I said, essentially, “You now know both worlds. Translate.”

The LLM already understood my code, because I'd been building with it. It learned Jido's patterns from the docs I'd given it. And it bridged the gap between the two, the gap that would have taken me months to cross on my own.
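The workflow amounts to assembling one large context: the target framework's docs, your existing source, and a translation instruction. A minimal sketch, with the prompt structure and wording being my own illustration rather than the exact prompt I used; in practice you would hand the result to whatever LLM interface you work with.

```typescript
// Hypothetical helper: concatenates framework docs, existing source, and a
// translation instruction into a single prompt. The section headers and
// instruction text are illustrative, not a canonical recipe.
function buildTranslationPrompt(
  frameworkDocs: string,
  existingSource: string
): string {
  return [
    "Here is the documentation for the target framework:",
    frameworkDocs,
    "Here is my working system, in a language I know:",
    existingSource,
    "You now know both worlds. Translate my system to the target framework,",
    "preserving its behavior and following the target framework's idioms.",
  ].join("\n\n");
}
```

The design choice that matters is giving the model both worlds in one context, the docs it has to write toward and the code whose behavior it has to preserve, rather than asking it to translate from a description alone.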

Within days, not months, I had a working port. Not a toy. A functional system running on a completely different platform, in a language I still can't write from scratch.

Let me be clear about what happened here. I didn't become an Elixir developer. I couldn't sit down right now and write Elixir without AI help. That's not the point.

The point is that I didn't need to become an Elixir developer to use Elixir.

This Is What Everyone Gets Wrong About AI

The dominant conversation about AI is about productivity. How much faster can you write that report? How many emails can you draft in an hour? How quickly can you summarize that document?

Those are fine questions. But they're small questions.

The bigger question is: what are you now willing to try?

Productivity assumes you already know what to do and AI helps you do it faster. That's useful, but it's incremental. It takes your existing path and puts you on a faster treadmill.

Permission is different. Permission means you can explore paths you never would have walked down. Not because you couldn't, in theory, learn Elixir. But because the cost of learning it, the weeks of tutorials, the frustrating error messages, the months before you're productive, made it a bad bet. The scaffolding required to even start was too high.

AI doesn't just lower the scaffolding. It removes it.

You can try a new language without committing to it. Explore a new framework without mastering it first. Test an idea in a domain you don't know, using tools you've never touched, and find out in days whether it's worth going deeper.

The cost of curiosity just dropped to almost zero.

What I'm Actually an Expert On Hasn't Changed

Here's the thing that surprised me most. My Elixir experiment didn't dilute my expertise. It sharpened it.

I still had to know what a multi-agent system should do. I still had to understand the architectural tradeoffs between sequential and parallel processing. I still had to evaluate whether Jido's approach was actually better for my use case, or just different.

AI handled the translation. The “how do you write this in Elixir” part. The syntax. The framework conventions. The runtime configuration.

But the thinking? That was still mine.

This is what I mean when I say my expertise can stay focused. I don't need to become a generalist who knows a little bit about twelve languages. I can stay deep in the things I'm deep in, the architecture, the product decisions, the business logic, and use AI to extend my reach into unfamiliar territory when I need to.

I'm not an expert in Elixir. But I'm an expert in what I needed Elixir to do.

My Mom Eventually Came Around

It took years. But eventually my mom sat down at the PC. She didn't break it. She never was going to break it. And once she realized that, she wondered why she'd waited so long.

The people emailing me about AI, the ones worried about giving a credit card to an agent or feeding their tax returns to ChatGPT, they're not wrong to be cautious. Caution is healthy. But there's a difference between caution and paralysis.

The best thing about AI isn't that it makes you faster at what you already do. It's that it gives you permission to try what you've never done. To explore without the scaffolding. To be curious without the risk.

You're not going to break it.

And the only real risk is waiting too long to find that out.