April 9, 2026
How to Use Process Mining to Find What AI Should Automate
Your ticket system already records every escalation, every reopen, every bottleneck — with timestamps. Process mining turns that data into a map of exactly where AI automation will have the most impact. No interviews. No opinions. Just the data.
Every team I've talked to in the last year wants the same thing: use AI to make things faster. Take support teams: they all want the same outcomes. Fewer escalations. Fewer reopened tickets. Faster resolution times. Less wasted effort.
But when I ask how they plan to figure out where to apply AI, the answer is almost always the same: "We'll talk to the team leads and map out the process."
That's the wrong starting point. Not because your team leads are wrong — they're not. They know a lot. But they know what they see, which is their piece of the process, filtered through their experience, biased by recency and repetition. They don't see the full picture. Nobody does. That's not a people problem. It's a visibility problem.
Your ticket system sees the full picture. It records every status change, every assignment, every escalation, every reopen, every resolution — with timestamps. It's been quietly building the most complete, unbiased record of how your operation actually runs.
You just have to ask it the right questions.
Process Mining: The Diagnostic Before the Treatment
Process mining is a discipline that takes event log data from your systems and reconstructs the actual process flows — not the documented ones, not the assumed ones, but the real ones. Every detour, every bottleneck, every loop.
It's been around since the early 2000s, originally an academic discipline out of Eindhoven University of Technology. But the tooling has matured to the point where you can go from a SQL query to a visual process map in a few hours.
Here's what that looks like with a ticket system.
A ticket system — whether it's Zendesk, Jira, ServiceNow, or something homegrown — naturally records the three things process mining needs: a case ID (the ticket number), an activity (what happened), and a timestamp (when it happened). Pull those three columns out of your database, hand them to a process mining tool, and it builds the map for you.
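Concretely, the extraction is a three-column query. Here's a minimal sketch using an in-memory SQLite database as a stand-in for your ticket store — the table and column names are hypothetical, so map them to your own schema:

```python
import sqlite3

# Stand-in for a real ticket database; table and column names are invented.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE ticket_events (
        ticket_id TEXT, action TEXT, occurred_at TEXT
    )
""")
con.executemany(
    "INSERT INTO ticket_events VALUES (?, ?, ?)",
    [
        ("T-1001", "created",  "2026-03-02T09:15:00"),
        ("T-1001", "assigned", "2026-03-02T09:40:00"),
        ("T-1001", "resolved", "2026-03-02T12:10:00"),
    ],
)

# The entire "event log" process mining needs: case ID, activity, timestamp.
rows = con.execute("""
    SELECT ticket_id, action, occurred_at
    FROM ticket_events
    ORDER BY ticket_id, occurred_at
""").fetchall()

for case_id, activity, ts in rows:
    print(case_id, activity, ts)
```

Dump those rows to a CSV or DataFrame and PM4Py's `pm4py.format_dataframe` plus a discovery algorithm will draw the map for you.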
I ran this on a simulated dataset of 200 support tickets to show what comes out the other side. The raw data is just rows in a table — ticket created, assigned to agent, initial review, escalated, waiting on customer, resolved, closed, reopened. Standard lifecycle stuff.
[Process flow diagram from heuristics mining — image to be added]
But when you run discovery algorithms on that data, you get something like this:
- 5 distinct paths that tickets actually follow through the system
- 35% take the happy path — created, assigned, reviewed, solved, confirmed, closed. Six steps, done.
- 20% hit the escalation path — Tier 1 can't resolve it, sends it to Tier 2, adding days to the cycle
- 20% stall waiting on the customer — the agent needs information the customer didn't provide upfront
- 13% get reopened — resolved, closed, then the customer comes back because the fix didn't hold
- 12% take the nightmare path — a combination of waiting, escalating, waiting again, and bouncing between teams
The average resolution time was 3.6 days. The fastest was 3 hours. The slowest was over 13 days.
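Under the hood, those "paths" are just variants: the ordered sequence of activities each ticket followed, grouped and counted. A stdlib sketch with invented events (the counts here won't match the simulation):

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical event log: (ticket_id, action, ISO timestamp).
events = [
    ("T-1", "created",   "2026-03-02T09:00:00"),
    ("T-1", "assigned",  "2026-03-02T09:30:00"),
    ("T-1", "resolved",  "2026-03-02T12:00:00"),
    ("T-2", "created",   "2026-03-03T10:00:00"),
    ("T-2", "assigned",  "2026-03-03T10:20:00"),
    ("T-2", "escalated", "2026-03-03T15:00:00"),
    ("T-2", "resolved",  "2026-03-05T11:00:00"),
]

# Group events into one ordered trace per ticket.
traces = defaultdict(list)
for ticket, action, ts in sorted(events, key=lambda e: (e[0], e[2])):
    traces[ticket].append((action, datetime.fromisoformat(ts)))

# A variant is the ordered sequence of activities a ticket followed.
variants = Counter(tuple(a for a, _ in t) for t in traces.values())

# Cycle time: last event minus first event, per ticket, in hours.
hours = {
    tid: (t[-1][1] - t[0][1]).total_seconds() / 3600
    for tid, t in traces.items()
}

for path, n in variants.most_common():
    print(n, " -> ".join(path))
print("avg hours:", sum(hours.values()) / len(hours))
```

PM4Py does the same grouping at scale, plus the discovery algorithms that turn the variants into a visual map.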
None of that came from interviews. None of it came from team leads describing their process on a whiteboard. It came from the data the system was already collecting.
That's the diagnostic. Now you know where to operate.
Where AI Actually Helps (And Where It Doesn't)
Once you can see the real process, the automation targets become obvious. Two of them stood out in this data: the escalation rate and the reopen rate.
Cutting Unnecessary Escalations
In the simulation, 36.5% of tickets escalated to Tier 2 at least once — the dedicated escalation path plus tickets that escalated along other routes. In a real system, you'd look at that number and ask: how many of those actually needed a specialist, and how many escalated because the Tier 1 agent didn't have the right information at the right moment?
The AI intervention sits between the agent and the escalation button. When an agent is about to escalate, the system does three things.
First, it classifies the issue against your resolution history. If Tier 2 resolved 40 similar tickets last quarter and the fix was the same every time, that resolution can be surfaced to the Tier 1 agent before they escalate. Not a generic knowledge base article — a specific resolution pulled from cases that match this one. "The last 12 tickets with this error code were resolved by resetting the API key in the admin panel. Here's the procedure."
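A minimal sketch of that first check — here matching on an error code, with an invented resolution history; a production version would use embedding similarity rather than exact matching:

```python
from collections import Counter

# Hypothetical resolution history mined from previously closed tickets:
# (error_code, resolution) pairs.
history = [
    ("API-401", "Reset the API key in the admin panel"),
    ("API-401", "Reset the API key in the admin panel"),
    ("API-401", "Reissue the OAuth token"),
    ("SYNC-500", "Clear the sync queue and retry"),
]

def suggest_resolution(error_code, min_support=2):
    """Before escalating, surface the most common past fix for this
    error code -- but only if enough past tickets back it up."""
    fixes = Counter(r for code, r in history if code == error_code)
    if not fixes:
        return None
    fix, count = fixes.most_common(1)[0]
    return fix if count >= min_support else None

print(suggest_resolution("API-401"))
```

The `min_support` threshold is the important design choice: a fix seen once is an anecdote, not a pattern worth surfacing.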
Second, it checks whether the ticket has enough information for Tier 2 to act. A massive source of wasted time is incomplete escalations — the ticket arrives at Tier 2 missing logs, missing account details, missing reproduction steps. Tier 2 sends it back. Two more days gone. The AI reviews the ticket against a checklist of what Tier 2 typically needs for that category and prompts the Tier 1 agent to collect it first. One pass, not three.
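The completeness check can be as simple as a per-category checklist diff. A sketch with hypothetical categories and field names:

```python
# Hypothetical per-category checklists of what Tier 2 typically needs.
CHECKLISTS = {
    "technical": {"error_logs", "account_id", "repro_steps"},
    "billing":   {"account_id", "invoice_number"},
}

def missing_for_escalation(ticket):
    """Return the fields Tier 2 will need that the ticket doesn't
    have yet -- prompt the Tier 1 agent to collect these first."""
    required = CHECKLISTS.get(ticket["category"], set())
    present = {key for key, value in ticket.items() if value}
    return sorted(required - present)

ticket = {"category": "technical", "account_id": "A-9", "error_logs": ""}
print(missing_for_escalation(ticket))
```

An LLM version of the same gate would judge free-text threads instead of structured fields, but the shape is identical: required minus present, surfaced before the escalation goes out.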
Third, it routes intelligently. Not all Tier 2 agents handle the same issues. If one specialist resolves 90% of the billing escalations and another handles the technical ones, route by match, not round-robin.
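Routing by match is just a lookup over the same resolution history. A sketch with invented agents and categories:

```python
from collections import Counter

# Hypothetical Tier 2 history: one (agent, category) pair per
# resolved escalation, mined from the ticket system.
resolved = [
    ("dana", "billing"), ("dana", "billing"), ("dana", "billing"),
    ("raj", "technical"), ("raj", "technical"), ("dana", "technical"),
]

def route(category):
    """Send the escalation to whoever resolves this category most often,
    instead of round-robin."""
    by_agent = Counter(agent for agent, cat in resolved if cat == category)
    return by_agent.most_common(1)[0][0] if by_agent else None

print(route("billing"), route("technical"))
```

A real router would also weight by current workload and recency, but resolution count per category is the signal the mining output already gives you for free.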
None of this is speculative. The resolution history already exists in the ticket system. The escalation patterns already exist in the process mining output. The AI is connecting information that's already there — it's just connecting it at the moment of decision instead of after the fact.
Preventing Reopens
The 13% reopen rate is a quality problem, not a volume problem. Tickets reopen because the resolution didn't actually solve the customer's issue. Either the agent misunderstood the problem, provided a partial fix, or the customer confirmed "yes, it works" before actually testing it.
The AI play here is a quality gate before closure. When an agent marks a ticket as resolved, the system reviews the full conversation thread and checks for specific risk signals.
Did the customer's original complaint actually get addressed, or did the conversation drift to a different issue? Is the resolution a known temporary workaround rather than a root cause fix? Is this a ticket category with a historically high reopen rate? Did the customer confirm with specifics ("I tested the export and it generated the file correctly") or with vague acknowledgment ("ok thanks")?
This is an LLM-as-a-judge pattern — the AI scores the resolution quality before the ticket closes. Not a binary block, but a confidence score. "This resolution addresses the reported issue with high confidence" versus "Warning: the customer reported a sync failure but the resolution only addressed login credentials. Reopen risk: high."
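As a deterministic stand-in for the judge, here's a sketch that scores the same risk signals with hard-coded rules. In production you'd send the full thread to a model and ask for a structured verdict; the signals, weights, and field names here are all illustrative:

```python
# Illustrative stand-ins for patterns a real judge would learn.
VAGUE_CONFIRMATIONS = {"ok", "ok thanks", "thanks", "fine"}
HIGH_REOPEN_CATEGORIES = {"sync", "billing"}

def reopen_risk(ticket):
    """Score the risk signals before a ticket closes: vague customer
    confirmation, workaround instead of root cause, reopen-prone category."""
    score = 0
    if ticket["customer_confirmation"].strip().lower() in VAGUE_CONFIRMATIONS:
        score += 1  # no specific confirmation that the fix was tested
    if ticket["resolution_is_workaround"]:
        score += 1  # temporary workaround, not a root-cause fix
    if ticket["category"] in HIGH_REOPEN_CATEGORIES:
        score += 1  # historically reopen-prone category
    return {0: "low", 1: "medium"}.get(score, "high")

ticket = {
    "customer_confirmation": "ok thanks",
    "resolution_is_workaround": True,
    "category": "sync",
}
print(reopen_risk(ticket))
```

The point of the sketch is the output shape: a graded risk level the agent sees before closing, not a binary block.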
The training data for this judge is already sitting in the ticket system. Every reopened ticket is a labeled example of what a bad resolution looks like. Every ticket that stayed closed is a positive example. You don't need to manufacture training data. The process mining already identified which tickets reopened and what their conversations looked like before they closed the first time.
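Building that labeled set is a one-liner once the mining output tags which tickets reopened. A sketch with hypothetical field names and stand-in thread text:

```python
# Hypothetical tickets after mining: each carries its conversation
# up to the first close, plus whether it later reopened.
tickets = [
    {"id": "T-1", "thread": "customer: sync failed / agent: reset creds", "reopened": True},
    {"id": "T-2", "thread": "customer: export bug / agent: shipped patch", "reopened": False},
]

# Every reopened ticket is a labeled example of a bad resolution;
# every ticket that stayed closed is a good one.
labeled = [
    (t["thread"], "bad" if t["reopened"] else "good")
    for t in tickets
]

for thread, label in labeled:
    print(label, "|", thread)
```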
The Sequence Matters
There's a reason I started with process mining and not with AI. The temptation is to skip straight to automation — "let's build an AI agent that handles tickets." But automation without diagnosis is expensive guessing.
Process mining gives you the evidence-based map of where time and money are being wasted. It turns opinions into numbers. "I think escalations are a problem" becomes "36.5% of tickets escalate, adding an average of 4 days to resolution, costing us X per unnecessary escalation per quarter."
That number tells you where to build. It also tells you what not to build. If only 2% of tickets take a certain path, automating that path is a bad investment no matter how interesting the technical challenge is.
The sequence is:

1. Mine the process. Pull event logs from the system, run discovery, see what's actually happening. This takes days, not months.
2. Quantify the waste. The process map shows you the paths. The performance data shows you the cost. Put dollar signs on the bottlenecks.
3. Build targeted AI interventions. Not a platform-wide AI transformation. Surgical automation at the two or three points where the data proved the waste exists.
4. Mine again. After the interventions are live, run the same analysis. Did the escalation rate drop? Did the reopen rate improve? Process mining becomes your measurement system, not just your diagnostic.
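That last step — mining again — closes the loop: the same metric that picked the intervention becomes its success measure. A toy sketch with invented numbers:

```python
def escalation_rate(paths):
    """Share of tickets whose path included an escalation."""
    return sum("escalated" in path for path in paths) / len(paths)

# Hypothetical variant lists from the before and after mining runs.
before = [("created", "escalated", "resolved")] * 36 + [("created", "resolved")] * 64
after  = [("created", "escalated", "resolved")] * 20 + [("created", "resolved")] * 80

print(f"escalation rate: {escalation_rate(before):.0%} -> {escalation_rate(after):.0%}")
```

Because both numbers come from the same query against the same event log, the comparison is apples to apples — no re-interviewing, no re-estimating.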
The Starting Point Is Simpler Than You Think
If you have a ticket system backed by a SQL database — and most companies do — you can start this week. The open-source tooling (PM4Py, a Python library) is free. The SQL query to extract the event log is straightforward: pull the ticket ID, the action, and the timestamp. Format three columns. Run the algorithm. You'll have a process map before lunch.
The hard part isn't the technology. The hard part is looking at the map and accepting that your process doesn't work the way you thought it did. Every company I've seen go through this has the same reaction: "I knew we had escalation problems, but I didn't know it was that bad."
The data was always there. The ticket system was always recording it. You just have to decide to look.
About the Author
Chris Lema has spent twenty-five years in tech leadership, product development, and coaching. He builds AI-powered tools that help experts package what they know, build authority, and create programs people pay for. He writes about AI, leadership, and motivation.