You Gave Every Manager an Einstein. They Asked Him to Do Laundry.
The most powerful cognitive tool in history arrived. The decision about what to do with it was left to the people with the least incentive to use it well.
Imagine you could give every household in the country a personal Einstein. Not a poster of Einstein. Not a book by Einstein. The actual man — general relativity, quantum mechanics, the whole package — sitting at your kitchen table, ready to work.
What would happen?
I'll tell you exactly what would happen. Day one: fascination. Day two: deep conversation about the nature of the universe. Day three: you'd ask him to fix the Wi-Fi.
By the end of the week, the most brilliant mind in human history would be doing your laundry.
This is not a moral failure. It's a psychological inevitability. Human beings don't use new resources to explore new possibilities. They use new resources to eliminate existing annoyances. Always have. Always will. Loss aversion beats curiosity every single time.
Now. Scale this up. Move it from the household to the corporation. Make it worse.
The Manager's Incentive
Here is the structural problem nobody's talking about.
In every company on earth right now, the decision about how to use AI sits with managers. Not with the people doing the work. Not with the people who understand the craft. Not with the people who might use it to discover something new. With managers.
And managers have one perfectly rational incentive: make their own jobs easier.
Not better. Not more innovative. Easier.
So what does a manager do when handed the most powerful cognitive tool in human history? The same thing you'd do with Einstein at your kitchen table. They use it to eliminate the annoyance directly beneath them — which, in a corporate context, is people.
The first instinct is never "what new thing could we create?" The first instinct is "whose job can this replace?"
This isn't malice. It's structure. The manager who eliminates three headcount looks like a hero on the quarterly review. The manager who says "I spent six months exploring what AI could create that we've never imagined" looks like someone who wasted six months.
The incentive system doesn't reward discovery. It rewards elimination. So elimination is what you get.
Jeeves and the Artificial Intelligence
P.G. Wodehouse understood this dynamic perfectly — a century before anyone coined the phrase "digital transformation."
Jeeves is, by any reasonable measure, the most intelligent man in London. Strategic genius. Encyclopaedic knowledge. A mind that could have run the Foreign Office or restructured the Bank of England before lunch.
What does Bertie Wooster use him for?
Ironing shirts. Mixing drinks. And — when things get truly desperate — extracting him from the clutches of terrifying aunts.
Jeeves is a valet first. A mixologist second. A genius third. Always has been. Even though he's the smart one.
This is your AI deployment. Jeeves could revolutionise the household. Restructure the Wooster finances. Identify opportunities Bertie can't even conceptualise. But nobody asks him to, because the shirts need pressing and there's a more immediate crisis involving an aunt.
And so the most powerful intelligence in the room spends its days on laundry.
The Five-Year Plan
Let me tell you something about the USSR that most people get wrong.
The Soviet Union didn't fail because it lacked technology. The Soviets put the first satellite in orbit. The first human in space. They built nuclear weapons and intercontinental ballistic missiles, and developed some of the most advanced metallurgy and aerospace engineering the world had ever seen. Their mathematicians were world-class. Their physicists were extraordinary.
The technology wasn't the problem. The allocation of the technology was the problem.
Every technological decision was routed through central planners whose primary metric was output efficiency. Produce more steel with fewer workers. More grain with fewer tractors. More missiles with fewer factories. The goal was always optimisation — doing the existing thing more cheaply.
And here's what never got prioritised: discovery.
The Soviet system couldn't invent the personal computer. Not because it lacked the technical capability — it had the engineers, the materials science, the mathematics. It couldn't invent the personal computer because no central planner would ever have approved it. The personal computer doesn't optimise an existing process. It creates an entirely new category of human activity. And new categories of human activity don't appear on a Five-Year Plan.
Read that again. Then look at your AI strategy.
Every corporate AI initiative I've seen in the last two years follows the same pattern. Manager identifies a cost centre. Manager applies AI to reduce or eliminate that cost. Manager presents savings to the board. Board applauds. Repeat.
At no point does anyone ask: "What could this technology create that has never existed before?"
The entire corporate AI playbook is a cost optimisation exercise. And cost optimisation is precisely the philosophy that collapsed the Soviet Union — not the political philosophy, but the engineering doctrine: the belief that the purpose of technology is to do existing things more cheaply.
When your only question is "how do we reduce cost?" you will never discover anything. You will trim and trim and trim until one day you look up and realise the company down the road — the one that was asking "what could we build?" — has left you behind so completely that no amount of trimming can close the gap.
The Soviet Union didn't lose the Cold War to a more efficient country. It lost to a country that invented things the Soviet system couldn't even imagine wanting.
That's what capitalism looks like when it works. Not the cost-cutting part. Not the headcount reduction. The discovery part. The part where someone invents something nobody planned for, and it changes everything.
And right now, at the exact moment when the most powerful discovery tool in history has arrived, the corporate world has looked at it and built a Five-Year Plan.
The Onion
Here's the part that should genuinely alarm you.
The manager who uses AI to eliminate the team below them has just automated away their own justification for existing.
Think about it. If the team's work can be done by AI, and the manager's job was to manage the team — what exactly is the manager for now?
I watched this happen recently. A mid-level director at a legal services company — let's call him David — led a team of eight content specialists. David championed an AI initiative. Presentations were made. The board was impressed. Within six months, the team was down to two people and a collection of prompts. David was promoted. He gave a talk at an internal summit about "leading through transformation."
Three months later, upper management noticed something. David's two remaining reports were mostly managing AI workflows. A VP asked the question, quiet and devastating: "If the AI is doing the work, and two people are managing the AI… what is David for?"
And so the peeling begins.
Layer one uses AI to eliminate the workers. Layer two looks down and sees a manager with no team, managing an AI that doesn't need managing. Layer two eliminates layer one. Layer three looks down at layer two and sees the same picture. Layer three eliminates layer two.
Peel. Peel. Peel.
Each layer automates the one below it and, in doing so, presents its own throat to the layer above. It's an organisational death spiral dressed up as innovation. And at no point during the peeling does anyone stop and ask: "Wait — are we building anything? Or are we just consuming ourselves?"
This is what happens when the only metric is cost reduction. The system eats itself. Each layer is rational. Each decision makes sense in isolation. And the cumulative result is an organisation that has optimised itself into a very efficient nothing.
Some companies will survive this. Some won't. And the ones that don't won't fail because they lacked the most advanced technology ever created. They'll fail because they had the most advanced technology ever created and used it to cannibalise themselves.
That, by any honest definition, is a form of structural corporate corruption. Not the kind with brown envelopes and offshore accounts. The kind where every individual actor is behaving rationally within a system that is producing catastrophically irrational outcomes. The manager isn't evil. The manager is responding to incentives. The incentives are the problem.
Getting Out of the Bureau
Here's how I think about the next five years.
Every company has to go through the cost optimisation phase. You have to. AI will reduce costs, and if you don't capture those savings, your competitors will, and you'll be funding their advantage. So yes — do the efficiency work. Cut what needs cutting.
But here's where the race is won or lost: how long you stay in that mode.
Because cost optimisation is seductive. It feels productive. Costs are dropping. The board is happy. Every quarter looks better than the last. You could run this playbook for years and every dashboard in your organisation would glow green.
But you're not building. You're just becoming a more efficient version of something that already exists.
Think of the Soviet bureau again. The planners in Moscow weren't idiots. Every report they filed showed improvement. More output per unit of input. Targets met. Quotas exceeded. And they never once looked up to notice that a couple of young men in a garage in California were building something that would make the entire concept of a Five-Year Plan irrelevant.
It's already happening. While most law firms are using AI to automate document review — classic cost optimisation — a handful are using it to analyse patterns across thousands of cases and predict judicial outcomes before filing. They're not doing the old thing cheaper. They're offering a service that couldn't exist before the technology arrived. Their competitors are still celebrating how much they've cut from the paralegal budget.
The winners in the next decade won't be the companies that were most efficient at cutting. They'll be the ones who moved through the efficiency phase fastest and started asking what Einstein could do if you let him think.
And the companies that never make that shift? They'll look up one day, dashboards gleaming, costs optimised to perfection, and wonder why the market left without them.
The Question
I don't have a neat solution for this. The answer has to come from inside each organisation, because every industry, every company, every competitive situation is different.
But I have a question. And if you're running a company right now — or advising one, or investing in one — it's the only question that matters.
Look at your AI budget. Every initiative. Every project. Every dollar.
How much of it is pointed at cost optimisation?
And how much of it is pointed at discovery?
If the answer is "almost all of it is cost optimisation" — and for most companies, it is — then you need to understand what you're actually running.
You're not running a transformation.
You're running a Five-Year Plan.
And somewhere, in a company you haven't heard of yet, someone just asked Einstein a proper question. He's stopped doing the laundry. He's thinking.
You'll know the name of that company soon enough. The question is whether it's yours.