# The Extraction Pattern: How Companies Actually Start Using AI
*Published: 2026-04-01*
*Tags: ai, insights*
*Source: https://chrislema.com/ai-adoption-pattern*
---

You’ve heard the argument: [the bottleneck in AI isn’t the technology](https://chrislema.com/the-hardest-thing-in-ai-right-now-isnt-ai). It’s the gap between what your team knows and what they’ve written down. The rubrics, the quality criteria, the standards that live in people’s heads: those are what make AI feedback loops actually work.

But knowing that doesn’t tell you how. What does the extraction process actually look like inside a real company? What’s the sequence? Where does it start, where does it stall, and what does it look like when it works?

I’ve been working with companies, mostly digital agencies and software companies, helping them adopt AI. Not by picking tools or configuring agents, but by helping them get their expertise out of their heads and into a form AI can use.

Every company navigates the same phases. The details differ, but the shape is the same.

## The Seven Phases

Before I walk through each phase, here’s the overview.

Companies move from scattered experimentation to a single champion, from “playing” with AI to knowledge extraction, and then to a moment when people see AI differently. That moment draws an audience, pulls the company into deeper extraction, and finally produces compound effects where skills, artifacts, and outputs change what the company can do.

The sequence matters. You can’t skip phases. And the companies that stall are almost always stuck in the first one.

## Phase 1: The Scattered Experiment

Every company starts here. Multiple people, all trying different AI tools, at different speeds, in different parts of the business. Someone in marketing is using ChatGPT to draft blog posts. Someone in engineering is using Copilot. Someone in sales heard about a proposal tool and signed up for a free trial.

The results are mixed. Inconsistent. Non-reproducible.

And the collective conclusion is almost always the same: “AI isn’t that useful.”

But that conclusion is wrong. What’s actually happening is that everyone is measuring AI against the old paradigm, expecting it to behave like a faster version of the tools and people they already had. When it doesn’t slot into existing workflows the way a new hire or a better piece of software would, they dismiss it.

But no new tool works in a new paradigm the way old tools worked in the old one. Trying to evaluate it that way produces nothing but cognitive dissonance. The problem isn’t that AI is bad. The problem is that your team is using it without giving it the one thing it actually needs to produce quality output.

What it needs is the criteria your best people carry around in their heads.

## Phase 2: The First Champion

In small and medium businesses, AI adoption doesn’t start with a strategy deck or a committee. It starts with one person.

Someone willing to move past the scattered experiment and approach AI with some structure. Not a prompt engineer. Not necessarily the most technical person. Just someone willing to do the work of figuring out how to get consistent, quality output.

That person becomes the proof point. And what makes them the proof point isn’t that they found a better tool. It’s what they did before they ever opened the tool.

## Phase 3: The First Extraction

This is where the real work begins. The champion, sometimes on their own, sometimes with a guide, sits down and starts answering questions about their domain.

Not questions about AI. Questions about their work.

What does good look like? What makes you reject something? If you were reviewing this output, what would make you hand it back? What are the criteria you apply without thinking about it?

The process can take different forms. Sometimes it’s a structured interview. Sometimes it’s a set of written questions they answer on their own. Sometimes they hand their answers to an LLM and iterate, answering more questions, refining, pushing deeper, until what comes out is a document that captures how they think about quality in their domain.

That document is the artifact. A rubric, a voice profile, a scoring framework, a set of guardrails. The specific form depends on the work. What matters is that it externalizes knowledge that previously lived only in someone’s head.

This is hard. Not technically hard. The mechanics are simple. It’s hard because most people have never had to articulate the standards they apply instinctively. They know a good proposal when they see one. They know effective copy when they read it. They’ve never had to write down why.
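
If you want to see what that sitting-down can look like in practice, here’s a minimal sketch of the extraction interview run as a script. Everything in it is an assumption for illustration: the prompts, the five-round loop, and the use of the OpenAI Python SDK. The post isn’t prescribing a tool, and plenty of champions do this in a chat window rather than in code.

```python
# A minimal sketch of the extraction interview as a script: the model asks
# follow-up questions about quality criteria, the expert answers at the
# keyboard, and the transcript is synthesized into a draft rubric document.
# The prompts, round count, and SDK choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

INTERVIEWER_PROMPT = (
    "You are interviewing a domain expert to extract the quality criteria "
    "they apply instinctively. Ask one short, concrete question at a time: "
    "what good looks like, what triggers rejection, what they check without "
    "thinking about it. Go deeper with each answer."
)

def ask(messages):
    """Send the running transcript to the model and return its next message."""
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

messages = [
    {"role": "system", "content": INTERVIEWER_PROMPT},
    {"role": "user", "content": "My domain is client proposals at a digital agency."},
]

for _ in range(5):  # five rounds is arbitrary; keep going until the answers get thin
    question = ask(messages)
    print(f"\n{question}")
    answer = input("> ")
    messages += [
        {"role": "assistant", "content": question},
        {"role": "user", "content": answer},
    ]

# Synthesize the transcript into the first artifact.
messages.append({"role": "user", "content":
    "Turn everything I've told you into a rubric: what good looks like, "
    "rejection triggers, and a 1-5 scoring guide I can hand to an LLM."})
print(ask(messages))
```

The output of that last call is only a first draft of the artifact. It won’t be complete, and that’s fine; Phase 6 is where the gaps show up.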

## Phase 4: The Capability Moment

Something happens when that first artifact meets an LLM. The output changes. Not incrementally. Categorically.

The person sees the AI produce something they couldn’t have produced on their own. Not “faster than I could have done it.” Not “pretty good for a machine.” Something that genuinely expands what they’re capable of creating.

Picture this: a senior strategist who spent a decade building proposals by hand feeds their extraction document into an LLM. The first output that comes back isn’t just acceptable. It contains connections between ideas that the strategist hadn’t made, framed in their own voice, structured the way they would have structured it, but with a synthesis they wouldn’t have reached alone.

This is the moment that matters. Because it rewires how they think about the relationship between their expertise and the tool. They stop evaluating AI as a replacement for their labor and start seeing it as an amplifier for their judgment. [AI doesn’t replace expertise. It makes expertise portable.](https://chrislema.com/claude-code-skills-for-video-editing)

Their role shifts. They’re no longer the person who does the work. They’re the person who knows what good work looks like, and who has encoded that knowledge into something a system can act on.

This is a one-way door. Nobody goes back.
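
Mechanically, the capability moment is smaller than it sounds: the extraction document becomes the system prompt, and the task becomes the user message. Here’s a minimal sketch, again assuming the OpenAI Python SDK and an invented proposal rubric; neither is something the post prescribes.

```python
# A minimal sketch of the capability moment's mechanics: the extraction
# document is the system prompt, the expert's task is the user message.
# The rubric text, the task, and the SDK choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The artifact produced in the first extraction, pasted in verbatim.
# A hypothetical example:
PROPOSAL_RUBRIC = """
What good looks like:
- Opens with the client's business problem, not our services.
- Every deliverable is tied to a measurable outcome.
- Pricing is framed around value delivered, not hours worked.

Rejection triggers:
- Boilerplate that could apply to any client.
- Scope items with no owner or timeline.

Voice: direct, first person plural, no jargon the client wouldn't use.
"""

def draft_with_rubric(task: str) -> str:
    """Generate work product with the expert's externalized criteria steering the model."""
    response = client.chat.completions.create(
        model="gpt-4o",  # the model choice is arbitrary
        messages=[
            {"role": "system", "content": PROPOSAL_RUBRIC},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

print(draft_with_rubric(
    "Draft a proposal for a mid-market retailer that needs its e-commerce "
    "checkout rebuilt before the holiday season."
))
```

Swap in a deeper rubric and the output changes while the code doesn’t. That’s why the artifact, not the tooling, is where the leverage lives.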

## Phase 5: The Audience

Once the champion produces visible results, the artifact itself becomes the evangelism. Nobody needs to pitch AI adoption. People see the output and ask how it was made.

This isn’t a slow process. When someone in the company watches a colleague produce a proposal, a content piece, a case study, or a project architecture that’s noticeably better than what the team was producing before, and learns it came from feeding a rubric into an LLM, the interest is immediate.

The artifact is the proof. Not a demo, not a slide deck, not a vendor pitch. An actual piece of work, produced by a colleague, that’s better than what anyone expected.

## Phase 6: The Depth Discovery

This is where most people expect the next phase to be “now we roll it out across the company.” It isn’t. What happens next is that the first artifact turns out to be insufficient.

Not wrong. Just not deep enough.

The initial extraction captured the obvious quality criteria, the things the champion could articulate on the first pass. But as they push the AI into adjacent territory, they discover gaps. The rubric covers content creation but not voice consistency. It handles proposals but not case studies. It generates code but doesn’t address UI patterns.

Every solved problem reveals the next layer of tacit knowledge that hasn’t been externalized yet.

I keep seeing this across every engagement:

One digital agency started with content focused on audience and content pillars. Then they discovered they needed a voice profile. Then they needed platform-specific distribution guidance. Each artifact they built revealed the adjacent knowledge that was still missing.

Another agency started with proposals and quotes, ingesting nine previous examples to build the resource for new ones. Once that worked, they pushed into case studies, then additional supporting artifacts.

A third agency built the architecture for a new kind of AI-powered automation project, then created a factory for replicating it. But when they rolled it out across clients, they found each client’s technology stack required its own set of rubrics.

A fourth company started with code generation frameworks, then went back to build UI rubrics, then deployment rubrics.

The specifics are different every time. The dynamic is identical: extraction is not a one-time event. It’s a chain reaction where each artifact you build reveals the adjacent knowledge that still needs to be externalized. What you end up with isn’t a collection of isolated documents. It’s a [skill graph](https://chrislema.com/you-dont-need-a-platform-to-sell-your-expertise-you-need-a-skill-graph), an interconnected set of artifacts where the connections between them are as valuable as the artifacts themselves.
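
If it helps to picture a skill graph concretely, here’s an illustrative sketch: artifacts as nodes, “builds on” relationships as edges. The artifact names and the dependency structure are made up; the point is only that a task pulls in a chain of artifacts rather than a single document.

```python
# An illustrative skill graph: artifacts as nodes, "this artifact builds on
# that one" as edges. The names and dependencies are hypothetical.
from collections import deque

SKILL_GRAPH = {
    "proposal-rubric":    ["voice-profile", "pricing-guardrails"],
    "case-study-rubric":  ["voice-profile", "proposal-rubric"],
    "distribution-guide": ["voice-profile", "case-study-rubric"],
    "voice-profile":      [],
    "pricing-guardrails": [],
}

def artifacts_needed(task_artifact: str) -> list[str]:
    """Walk the graph to collect every artifact a task depends on, breadth-first."""
    seen, order, queue = set(), [], deque([task_artifact])
    while queue:
        current = queue.popleft()
        if current in seen:
            continue
        seen.add(current)
        order.append(current)
        queue.extend(SKILL_GRAPH.get(current, []))
    return order

# A case study pulls in the voice profile and the proposal rubric it builds on,
# plus whatever those depend on in turn.
print(artifacts_needed("case-study-rubric"))
# ['case-study-rubric', 'voice-profile', 'proposal-rubric', 'pricing-guardrails']
```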

## Phase 7: The Compound Effect

Here’s where it gets interesting. As the company builds more artifacts (deeper rubrics, broader frameworks, more specialized guardrails), something compounds.

The team ends up with three layers.

**Skills.** The ability to use LLMs effectively. Not prompt tricks, but the understanding of how to direct AI with structured criteria.

**Artifacts.** The rubrics, voice profiles, scoring frameworks, and guardrails that encode their domain expertise. This is what separates “I can use ChatGPT” from “I can consistently produce quality work with AI.”

**Outputs.** The actual work product (proposals, content, architectures, code) that’s qualitatively different from what the team could have produced without the system.

The artifacts are the multiplier. Skills without artifacts produce the same inconsistent results everyone got in Phase 1. Artifacts without skills sit unused. Together, they produce outputs that expand what the company is capable of doing.

And here’s what makes each phase easier than the last: the team gets better at extraction itself. The first artifact takes weeks of iteration. The fifth takes days. By the tenth, people know what questions to ask, what criteria to capture, and how to structure the document so an LLM can use it. The process of externalizing expertise becomes a skill of its own.

## Where You Probably Are Right Now

If you’re reading this, you’re most likely stuck in Phase 1 or watching your company spin there. Multiple people experimenting, inconsistent results, a growing suspicion that AI “isn’t ready.”

The move isn’t to find a better tool. The move is to find your first champion and help them through their first extraction. That means sitting down with the person on your team who has the strongest instincts about what quality looks like in their domain, and helping them answer this question: “What do you know that you’ve never written down?”

The answer to that question is your first artifact. And your first artifact is what turns scattered experiments into a system.

## How to Know If It’s Working

You’ll know you’ve left Phase 1 when you stop hearing “AI isn’t that useful” and start hearing “how did you get it to do that?”

You’ll know you’ve hit Phase 4 when someone produces work that surprises even them, not because the AI is smart, but because their externalized criteria gave it something meaningful to work with.

You’ll know Phase 6 has started when the team stops asking “should we use AI for this?” and starts asking “what rubric do we need to build for this?”

And you’ll know the compound effect is real when new hires, people who weren’t part of the original extraction, start producing work at a level that would have taken them years to reach without the artifacts.

The companies that stall are the ones that keep experimenting. They keep switching tools. They keep concluding that AI isn’t ready.

The companies that break through are the ones that stop evaluating the tool and start extracting what they know.

The technology is ready. The question is whether your team has done the hard work of writing down what “good” looks like. Because that’s the only thing the AI can’t figure out on its own.
