Shipping Faster Is Not a Strategy. It's What You Do When You Don't Have One.

The efficiency model captured AI before the innovators got a turn.

Nobody asked what to build. That's the thing. AI arrived, and the default corporate instinct was to ask "how do we do the same things faster?" Not "what should we be building that we aren't?" Not "do we understand our customers any differently than we did before?"

Those are the two questions that actually matter. And in most companies, nobody is asking them. Not because they're stupid. Because the system isn't designed to ask them. The system is designed to make existing things cheaper, and AI just made that system very, very good at its job.

The Drucker Line Nobody Reads Properly

Peter Drucker wrote something in 1954 that most corporations have spent seventy years treating as a motivational poster: the business enterprise, he said, "has two — and only two — basic functions: marketing and innovation. Marketing and innovation produce results; all the rest are costs."

Two functions. Not one. Not "innovation, plus whatever the growth team is doing." Marketing and innovation. The creation of new value and the understanding of who needs it. Drucker saw them as inseparable, two sides of the same act of discovery.

Not revenue operations. Not sprint planning. Not the quarterly business review where fourteen people spend ninety minutes agreeing with whatever the most senior person in the room said first.

Cost.

Now. We have the most powerful discovery tool in human history sitting on every desk in every corporation on the planet. A tool that could transform both how we build and how we understand who we're building for. And the question that decides who wins and who disappears over the next decade is breathtakingly simple.

Who gets to point it?

The Productivity Heist

Here's what actually happens when you give a development team AI.

A developer who needed two weeks to build a feature can now do it in three days. That's real. That's measurable. That shows up on a dashboard beautifully.

Now here's what happens when you give a marketing team AI.

The copywriting and design that used to take two weeks can now be drafted in an afternoon. More content. More variations. More A/B tests. More ads in more channels in less time than ever before.

Both sound like progress. Ask yourself: who captures the value?

Not the developer. The developer doesn't get to spend the saved time exploring new product ideas, prototyping something speculative, or thinking about what the product should be instead of what it is. Of course not. That would be innovation. Innovation is messy. Innovation doesn't have a Jira ticket.

Not the marketer. The marketer doesn't get to spend the saved time asking why customers actually leave, or whether the positioning is wrong, or what segment nobody's serving. That would be discovery. Discovery doesn't have a content calendar.

What happens instead is this. The manager, who is structurally a consumer of other people's output, now gets to demand overnight turnarounds on features that used to require a sprint and campaign variations that used to require a week. They can request three design options instead of one. Three email drafts instead of one. They can say "just redo it" without political consequence, because the cost of rework has collapsed.

The developer works just as hard. The marketer works just as hard. Possibly harder.

But the manager's forecasts become more accurate. Their timelines shrink. Their status reports look increasingly like the work of a strategic genius. The productivity gain flows upward, silently and predictably, to the person who commissioned the work. Not the person who did it.

AI makes the developer faster. AI makes the marketer faster. The speed gets consumed by the person above. Output goes up. Leverage stays exactly where it was.
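
Written out, the flow looks something like this. A stylised sketch with invented names and numbers, not telemetry from any real team:

// A stylised sketch of the heist. Every name and number is invented.
const developer = { daysSaved: 7, discoveryDays: 0 };
const marketer  = { daysSaved: 9, discoveryDays: 0 };
const manager   = { dashboardDelta: 0, reworkCost: "high" };

for (const maker of [developer, marketer]) {
    maker.discoveryDays += 0;                  // none of the saving returns as discovery
    manager.dashboardDelta += maker.daysSaved; // it surfaces one level up instead
}
manager.reworkCost = "negligible";             // "just redo it" is now free to say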

This isn't a bug. This is the system working precisely as designed.

The Gravitational Problem

Now ask the obvious question. Who decides where AI gets deployed in a corporation?

Not who should decide. Who does.

In most companies, that decision sits with the people running what you might call the mechanistic efficiency model. A system designed to measure, predict, control, and optimise. Reduce variance. Hit the forecast. Shrink the timeline. Eliminate surprise. Not every manager operates this way. Some understand that the most valuable output of a quarter might be a question nobody thought to ask before. But those people rarely control the AI budget.

The efficiency model controls the AI budget.

And it does exactly what you'd expect with it. It uses AI to ship features faster and produce campaigns faster. AI-as-execution-tool makes the efficiency machine measurably, demonstrably, presentably better at being an efficiency machine.

But AI-for-innovation? Using it to explore what to build next? That makes the efficiency machine uncomfortable.

And AI-for-real-marketing? Using it to discover unserved needs, to understand customers in ways a survey never could, to question the entire go-to-market thesis? That makes the efficiency machine terrified.

Think about why. Discovery, in product and in market alike, produces ambiguous results. It generates ten options where you used to have one. It demands judgment, real judgment, the kind you can't defend with a methodology or present as a confidence interval. Discovery is, by its nature, hostile to the kind of predictability that the mechanistic model was built to produce.

So the system develops a gravitational pull. Not a conspiracy. That would be easier to fix. An incentive structure. The efficiency model captures AI for execution, and the discovery model gets quietly, persistently starved. In both innovation and marketing.

Nobody decides this. Nobody signs a memo. It just happens. The way water finds the drain.

// Stylised, but it runs: the incentive structure, written as code.
let model = "efficiency", discovery = 0;
const features = { shipped: 0 }, campaigns = { produced: 0 };
const ai = { deploy: (target) => target };  // stands in for any real deployment

while (model === "efficiency") {
    ai.deploy("execution");   // every new capability goes to execution
    features.shipped++;       // the output counters climb
    campaigns.produced++;
    discovery--;              // the only counter that falls
}

There's no break condition. Not in the current architecture.

The McNamara Machine

Robert McNamara ran the Vietnam War on metrics. Body counts, sortie rates, territory percentages. Every number looked excellent. The dashboards — had they existed — would have been beautiful.

The war was a catastrophe.

Because the things that mattered couldn't be measured, and the things that could be measured didn't matter. McNamara had built a system that was extraordinarily good at optimising the wrong objective. And the system felt like it was working, because every metric confirmed it.

AI in the modern corporation is building McNamara machines at extraordinary speed.

Sprint velocity is up. Cycle times are down. Deployment frequency has never been higher. Content output is through the roof. The marketing team is producing more campaigns per quarter than ever before. The engineering team is shipping like a well-oiled factory.

And almost nobody is asking whether they're building the right product. Or talking to the right customer. Or solving the right problem.
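
Put that dashboard in code and the hole is visible at a glance. The fields and values below are invented; the shape is the point:

// The McNamara machine's dashboard. Every field and value is invented.
const dashboard = {
    sprintVelocity: "up",
    cycleTime: "down",
    deploymentFrequency: "record high",
    campaignsPerQuarter: "record high",
    rightProduct: undefined,   // never instrumented
    rightCustomer: undefined,  // not measured, so not managed
    rightProblem: undefined,   // not on anyone's dashboard
};
console.log(dashboard);        // everything that prints reads like progress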

The corporation becomes extraordinarily efficient at doing the wrong things. And it feels like progress. That's the dangerous part. The metrics confirm it. The dashboards celebrate it. The quarterly review applauds it. The only people who notice are the ones closest to the customer and the product, marketing and innovation, the very people whose voices are structurally subordinate to the people holding the spreadsheets.

Read that again. Then look at your last board deck. Count how many slides are about what you shipped versus how many are about what you discovered. Count how many are about campaigns produced versus customers understood.

If the ratio makes you uncomfortable, it should.

The Drucker Test

So here's a test. For the CEO.

Look at where your AI investment is actually going. Not the press releases. Not the pilot programme with the impressive name. The daily reality. Where the tool is running, who's using it, and whose objectives it's optimising.

Because that last part is the question nobody asks. AI can sit inside your engineering team and still serve the efficiency model. AI can sit inside your marketing team and still serve the efficiency model. If the purpose of the deployment is to make forecasts tighter, timelines shorter, and dashboards greener, it doesn't matter which department it lives in. It's optimising for predictability. Not discovery.
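
A toy audit is enough to run the test. The deployments below are hypothetical, invented for illustration; the only column that matters is the purpose column:

// A toy version of the Drucker Test. The deployments are hypothetical.
const deployments = [
    { tool: "code assistant",        team: "engineering", purpose: "execution" },
    { tool: "ad-copy generator",     team: "marketing",   purpose: "execution" },
    { tool: "interview synthesiser", team: "marketing",   purpose: "discovery" },
];

const count = (p) => deployments.filter((d) => d.purpose === p).length;
console.log(`execution ${count("execution")} : discovery ${count("discovery")}`);
// The team column is a distraction. The purpose column is the test.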

Is AI making your innovators more creative, or your efficiency model more comfortable?

Is it helping you create new value, or just deliver existing value faster?

Is it generating new questions — or just faster answers to the old ones?

The companies that win the next decade won't be the ones that deployed the most AI. They'll be the ones that deployed it for the right purpose. Not to optimise what Drucker called "costs." To power the only two functions he said actually matter.

The companies that get this wrong will look up one day: dashboards gleaming, sprints predictable, content calendars full, costs optimised to perfection. And they'll wonder why the market left without them. Because somewhere, in a company they haven't heard of yet, someone pointed AI at the other two functions. The ones that matter after the hype cycle ends and the market starts keeping score.

If your AI strategy's biggest achievement is making the same things faster, you don't have an AI strategy. You have an automation budget with a fancy name.