We've worked through enough AI implementations to see the pattern clearly now. The ones that deliver — the ones that change something measurable in the business — share a set of characteristics that have nothing to do with which AI model you use or how much you spend. The ones that don't deliver share a different set.
This isn't a piece about technology. It's a piece about how businesses make decisions about technology — and why that decision-making process is almost always the thing that determines whether an AI investment returns anything at all.
They start with the outcome, not the tool
In our experience, every AI implementation that has delivered started with a specific, measurable outcome the business wanted to achieve. Not "we want to use AI" or "we want to be more efficient" — but something like: we need to cut quote turnaround from three days to four hours, or we need to stop our team from spending 12 hours a week on manual data reconciliation.
"If you can't describe the outcome in a sentence and measure it in a number, you're not ready to implement. You're still shopping."
This sounds obvious. But the majority of businesses we speak to have evaluated AI tools before they've defined the job the tool needs to do. That order of operations almost always ends in expensive disappointment.
The test: Before evaluating any AI tool, write down the specific outcome you expect it to deliver and how you'll measure whether it has. If you can't do that in two sentences, start there instead.
They fit the technology to the actual work
The best AI implementations we've seen involve tools that fit naturally into how people already work — not tools that require people to change how they work so the tool can function. This sounds like a small distinction. It isn't.
Adoption is the single most underestimated factor in whether an AI implementation delivers. Technology that works in isolation but requires the team to change their workflow will be worked around. Technology that plugs into existing processes and makes them faster will be used. What that fit looks like in practice:
- The tool connects to systems the team already uses daily
- The output fits into an existing workflow — it doesn't create a new one
- The improvement is felt immediately by the people using it
They scope it smaller than feels right
The instinct when implementing AI is to go broad. You can see all the potential applications at once, and it's tempting to address all of them in one go. The implementations that deliver resist this instinct. They pick one thing, implement it properly, measure it, and then move to the next.
Broad implementations fail because they're hard to measure, hard to manage, and hard to course-correct. A narrow implementation that works creates momentum — and a business case that makes the next one easier to justify and easier to execute.
They measure commercial impact, not activity
AI implementations get measured in one of two ways: by activity (how many things the tool processed, how many hours it theoretically saved) or by commercial impact (what changed in the business as a result). The implementations that survive budget scrutiny and get expanded are the ones measured the second way.
The distinction: "The agent processed 400 invoices this month" is an activity metric. "We caught $12,000 in billing errors we would have paid" is a commercial metric. The first is interesting. The second is why you keep paying for the tool.
They have someone who knows which tools to use
The AI landscape is moving too fast for any business owner to track while also running their business. New tools emerge weekly, existing tools change significantly, and the gap between what's marketed and what's actually useful is wide. The businesses that consistently get value from AI have someone — internal or external — who does this work for them.
That's not a pitch. It's an observation. Businesses that try to evaluate AI tools in isolation, without access to someone who has already worked through the options, spend a lot of time and money finding out what the options are. Businesses with that shortcut skip the discovery phase entirely.
Where to start
The common thread in everything above is clarity — on the outcome you're after, the process you're improving, and the number that tells you whether it worked. That clarity is the foundation everything else is built on, and it's what we work to establish before anything gets built.
If you're not sure where to start, or you have a sense of where AI could help but no clear path forward — that's exactly what a strategy call is for. It takes 30 minutes, and you'll leave with a clear picture of what's worth pursuing first.