AI Doesn’t Have a Capability Problem. It Has an Incrementality Problem.

By Finnlay Morcombe

This perspective is written for leaders responsible for AI impact across operations, marketing, and knowledge work. It reflects patterns we consistently see while working with large enterprises attempting to scale AI responsibly across real workflows.

Incrementality has always been a marketing discipline.

Long before AI, marketers learned (usually the hard way) that attribution was a comforting illusion. It explained exposure, not impact. When budgets tightened, the only question that survived was the counterfactual: what would have happened if we had not done this at all?

What’s changing now isn’t that incrementality is leaving marketing. It’s that AI is forcing the rest of the enterprise to confront the same reality.

AI Is Repeating Marketing’s Hardest Lesson

Across Fortune 500 companies, AI is being deployed at a ferocious pace. Copilots are embedded in operational workflows. Automations are accelerating reviews, analysis, and customer interactions. On paper, adoption looks strong.

What is far less clear is whether the business is actually better because of it.

AI performance is typically reported through usage minutes, adoption curves, or time-saved estimates. These metrics feel reassuring, but they don’t survive contact with real operations. Quality problems surface later as rework, escalations, audit findings, or customer complaints. At that point, it’s hard to backtrace whether AI improved the system—or whether it simply shifted cost and responsibility downstream.

Marketing has seen this before. AI is now repeating it at enterprise scale.

Incrementality Is the Only Measurement That Holds

Incrementality forces a harder question than most AI reporting ever does.

The issue is not whether people are using AI or whether things feel faster in the moment. The real question is whether work done with AI produces different outcomes than the same work done without it. That’s where most measurement collapses.

In practice, this means lining up AI-assisted work against a real baseline and measuring what changes in:

  • Throughput
  • Cycle time
  • Error rates
  • Downstream quality (rework, escalations, defects, compliance findings, customer outcomes)

It also means controlling what you can (demand, staffing, case mix) and accepting that some early “wins” disappear once you measure properly.

None of this is mathematically complex. It’s just uncomfortable.
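To make that concrete, here is a minimal sketch of the core calculation: the incremental delta between AI-assisted work and a matched non-AI baseline. The function name, field values, and metrics are illustrative assumptions, not from any specific deployment; a real analysis would also control for demand, staffing, and case mix as noted above.

```python
# Hypothetical sketch: AI-assisted work vs. a matched non-AI baseline.
# All names and numbers here are illustrative, not real deployment data.
from statistics import mean

def incremental_delta(baseline, treated):
    """Raw difference in means: treated minus baseline.
    Negative is an improvement for cost-like metrics (time, errors)."""
    return mean(treated) - mean(baseline)

# Cycle times (hours) for the same task type, matched for case mix.
baseline_cycle = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4]
ai_cycle       = [3.1, 3.9, 2.8, 4.6, 3.3, 3.5]

# Error rates per 100 cases, counting downstream rework and escalations.
baseline_errors = [6.0, 5.5, 7.2, 6.8]
ai_errors       = [5.8, 6.1, 7.0, 6.5]

print(f"Cycle-time delta: {incremental_delta(baseline_cycle, ai_cycle):+.2f} h")
print(f"Error-rate delta: {incremental_delta(baseline_errors, ai_errors):+.2f} per 100")
```

Note what this toy comparison surfaces: a clear cycle-time win alongside an error-rate delta close to zero. That is exactly the kind of "win" that looks strong in a time-saved dashboard and ambiguous under incrementality.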

The Real Bottleneck Is How Work Is Defined

Most enterprises struggle to measure AI incrementality for a basic reason: the work itself isn’t stable or predictable.

Workflows vary by team. Execution often hinges on tacit judgment calls that shift with whoever happens to do the work. Changes frequently happen without any update to the shared understanding of how the work is supposed to be done.

When processes are fluid:

  • there is no clean control group,
  • there is no reliable baseline,
  • and there is no agreed definition of “done right.”

This is why AI pilots can look successful at face value, but scaled rollouts feel ambiguous. The organization can’t determine what actually changed (or didn’t).

Don’t blame model capability. This is the real constraint on AI ROI today.

Why the Best Teams Are Slowing AI Down

The most advanced organizations do something counterintuitive: they slow deployment down.

Instead of broad rollouts followed by retrospective dashboards, they treat AI changes as experiments:

  1. Define the work explicitly
  2. Introduce AI in narrow slices
  3. Measure impact incrementally against a baseline
  4. Expand only when the delta stays positive

This mirrors how marketing matured into a performance discipline.
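The "expand only when the delta stays positive" rule in step 4 can be sketched as a simple gate. The function name, the window size, and the threshold are illustrative assumptions; real teams would tune these and add statistical checks before expanding a rollout.

```python
# Hypothetical sketch of the expand-only-on-positive-delta rule (step 4).
# Names (should_expand, MIN_DELTA, window) are illustrative assumptions.

MIN_DELTA = 0.0  # expand only while measured improvement stays positive

def should_expand(deltas, window=3):
    """Expand only if the last `window` measured deltas vs. baseline
    are all above the threshold; otherwise hold and re-measure."""
    recent = deltas[-window:]
    return len(recent) == window and all(d > MIN_DELTA for d in recent)

# Each entry: measured improvement over baseline for one narrow slice.
measured = [0.12, 0.08, 0.10]
print("Expand rollout" if should_expand(measured) else "Hold and re-measure")
```

The design choice worth noting is the window: requiring several consecutive positive readings guards against expanding on a single lucky slice, which is the rollout equivalent of a pilot that only looked successful.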

Why You Should Care

AI does not need more enthusiasm or sexier adoption metrics.

It demands the same rigor marketing earned the difficult way.

Incrementality isn’t being repurposed. It’s being rediscovered—because AI makes it unavoidable for every business function it touches. Organizations that internalize this principle scale AI with confidence. Organizations that mistake movement for progress will keep scaling activity instead of impact.

Find and measure AI use cases in your enterprise.

Fluency is the fastest way to get real-time insights into your operations.
