The Meeting After the Meeting Is a Bug, Not a Feature
Zoom Is Notepad. Nobody's Building Excel.
Last Tuesday, a project manager named Sarah read an AI-generated summary of a meeting she wasn't in. The summary said the team had "agreed to deliver the API integration by March 15." She created a Jira ticket. Assigned it. Set the deadline.
Nobody in that meeting agreed to March 15.
What actually happened was this: the tech lead said, "Yeah, I think we can probably do that" — while looking at his shoes, in a tone that everyone in the room understood to mean "absolutely not, but I don't want to have this fight right now."
The AI transcript captured the words. It missed the shoes. It missed the tone. It missed the three people who heard "probably" and understood it as "no." The summary flattened all of that into a confident action item. Sarah, working from a photocopy of a photocopy, turned an unresolved disagreement into a committed deadline.
Two days later, the tech lead saw the Jira ticket. Three people replied with conflicting context. Someone suggested "we should probably jump on a quick call to align on this."
Another meeting. To clarify the output of the first meeting.
The Corporate For Loop
This isn't a management failure. It's a tooling problem. And once you see the pattern, you'll see it everywhere.
A transcription tool captures every word and none of the delivery. An AI summarises the transcript, stripping context a second time. A PM who wasn't in the room reads the summary and creates a ticket. The ticket spawns an email chain among people working from third-hand context. The responses conflict.
Another meeting gets scheduled.
while (context.isLost()) {
    scheduleMeeting();
}
There's no break condition. Every meeting generates a lossy transcript, which generates a lossier summary, which generates ambiguous action items, which generate another meeting. The loop runs until someone makes an executive decision out of exhaustion or the deadline passes and the question becomes moot.
Let me say that plainly. The current generation of AI meeting tools is not reducing meetings. It is increasing them — by systematically stripping context from conversations and forcing humans to reconvene to put it back.
We are holding meetings to recover from the side effects of the tools we use to record meetings.
What the Fix Actually Looks Like
The fix is obvious. Build a tool that captures decisions, not words. One that knows what kind of meeting it's in. One that knows the difference between "yeah, sure" spoken with conviction and "yeah, sure" spoken while looking at shoes.
The technology exists. Right now. Today.
But nobody's assembled it. So let me describe what it would be.
Every successful text-based business tool answered one question first: what kind of interaction is this? Then it imposed the appropriate structure. Salesforce doesn't give you a blank text field and say "good luck managing your pipeline." It knows what a deal is. It knows what a stage is. It knows what a follow-up looks like.
Zoom gives you a blank rectangle. A sales call and a post-mortem incident review get the same empty window.
That's the whole diagnosis.
Here's what happens when you fix it. Three meeting types. Every company runs all three every week.
The Sales Call. The system knows this is a sales call before it starts — it read the calendar invite, identified the external domain, checked the CRM record. The interface isn't a blank rectangle. It's showing the deal stage, the last three interactions, the open questions from the previous call. AI is listening — not for transcription, but for commitments. When the prospect says "we'd need to see a pilot by end of Q2," that's captured as a structured commitment, tagged to the deal, with a follow-up generated in real time. When the call ends, the CRM is already updated. Not summarised. Structured. No PM needed. No email chain. No for loop.
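To make "captured as a structured commitment" concrete, here is a minimal sketch of what that capture could look like. Everything in it is hypothetical: the `Commitment` shape, the `extractCommitments` helper, and the crude phrase matching standing in for what a real system would do with an intent model.

```typescript
// Sketch of "commitments, not transcripts". All names are hypothetical;
// a production system would use an LLM for intent detection, not a regex.

interface Commitment {
  speaker: string;
  text: string;        // the verbatim utterance
  deliverable: string; // what was asked for or promised
  due: string | null;  // deadline, if one was stated
  dealId: string;      // CRM record this call is tagged to
}

// Naive phrase-based detector, standing in for a real intent model.
function extractCommitments(
  utterances: { speaker: string; text: string }[],
  dealId: string
): Commitment[] {
  const pattern = /we(?:'d| would)? need to see (.+?) by (.+?)(?:\.|$)/i;
  const out: Commitment[] = [];
  for (const u of utterances) {
    const m = u.text.match(pattern);
    if (m) {
      out.push({
        speaker: u.speaker,
        text: u.text,
        deliverable: m[1],
        due: m[2],
        dealId,
      });
    }
  }
  return out;
}
```

Fed the prospect's line from above, this produces a record ready to write straight into the deal, not a paragraph for a human to re-parse: the structure is the output, the transcript is just the input.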
The Weekly Standup. The system pre-loads last week's action items. AI tracks each item against what people actually say. Did someone say the API integration is "almost done" for the third consecutive week? The system flags it — not as a transcript note, but as a pattern. Something that requires a decision, not another week of "almost done." When the standup ends, the board is updated. The standup did something instead of just happening.
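The "third consecutive week of almost done" flag is simple enough to sketch. The names and the phrase list below are assumptions for illustration; real detection would come from a language model, but the pattern logic is the point: the system tracks streaks across meetings, which no single transcript can show.

```typescript
// Hypothetical sketch: flag an action item reported with a stalling
// phrase for N consecutive standups.

interface StandupReport {
  week: number;
  itemId: string;
  status: string; // what the owner actually said
}

const STALL_PHRASES = ["almost done", "nearly there", "should land this week"];

// Returns ids of items stalled for at least `threshold` consecutive weeks.
function flagStalledItems(reports: StandupReport[], threshold = 3): string[] {
  const streaks = new Map<string, number>();
  const flagged = new Set<string>();
  const byWeek = [...reports].sort((a, b) => a.week - b.week);
  for (const r of byWeek) {
    const stalled = STALL_PHRASES.some(p =>
      r.status.toLowerCase().includes(p)
    );
    const streak = stalled ? (streaks.get(r.itemId) ?? 0) + 1 : 0;
    streaks.set(r.itemId, streak);
    if (streak >= threshold) flagged.add(r.itemId);
  }
  return [...flagged];
}
```

Two weeks of "almost done" is noise; three is a pattern that gets escalated to a decision.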
The Customer Complaint. The system surfaces the customer's history, their tier, their last five tickets. AI is reading tone — not just words, but how the words are being said. When the customer says "this is the third time I've called about this," the system knows whether that's true — and if it is, it escalates before the call ends. The follow-up isn't a summary email. It's a structured incident with a severity level, an owner, and a deadline.
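The escalation step can also be sketched. Assume, hypothetically, a ticket history pulled up before the call and a claimed contact count detected from the customer's words; the rule is then just a check against the record.

```typescript
// Hypothetical escalation rule: if the caller's claim of repeat contact
// is backed by the ticket history, raise severity before the call ends.

interface Ticket {
  id: string;
  issue: string;
  openedAt: string;
}

interface Escalation {
  severity: "low" | "high";
  owner: string;
  reason: string;
}

function checkRepeatContact(
  history: Ticket[],
  issue: string,
  claimedCount: number, // e.g. "third time I've called" -> 3
  onCallOwner: string
): Escalation | null {
  const matching = history.filter(t => t.issue === issue);
  // The current call counts as one contact on top of the ticket history.
  if (matching.length + 1 >= claimedCount) {
    return {
      severity: "high",
      owner: onCallOwner,
      reason: `${matching.length + 1} contacts about "${issue}"`,
    };
  }
  return null; // claim not supported by the record; handle normally
}
```

The output is the structured incident the paragraph describes: a severity, an owner, a reason, created while the customer is still on the line.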
Every component I just described exists today. Tone analysis. Commitment detection. Calendar-to-CRM matching. Pattern recognition across recurring meetings. Real-time structured updates. The individual pieces are all here.
What's missing is the assembly.
Not a transcription tool with AI bolted on. A fundamentally new application where video is the input layer and structured business logic is the processing engine.
AI isn't the feature. AI is the runtime.
Why Nobody's Built It
Because the value is illegible.
Every technology purchase passes through a spreadsheet. Cost savings are legible — "we'll eliminate $400,000 in developer hours" fits in a cell. Innovation is illegible — "this will enable a category of interaction that doesn't currently exist" doesn't fit in a cell. There's no number. Nobody can track it next quarter.
By definition, you can't measure the output of something that doesn't exist yet.
The spreadsheet has a deeper problem. It demands a chess prediction from a poker situation. Chess is complete information — every piece visible, every move calculable. Technology adoption is poker. You don't know what second-order behaviours will emerge. Nobody who built Slack predicted it would kill internal email. Nobody who built the iPhone predicted it would destroy the compact camera industry.
If you'd asked for a spreadsheet predicting those outcomes before the tool was built, every one of them would have been rejected.
So transformative technologies get approved for incremental uses. Because incremental uses are the only ones that survive the approval process. The internet became a way to send memos without envelopes. The cloud became a way to reduce server room costs. And AI is becoming a way to write the same CRUD applications with fewer developers.
Everyone is pointing a future-facing technology backwards — at the architecture they inherited.
We are using AI to mass-produce Notepad. When we could be using it to invent Salesforce.
Who Builds This?
Not Zoom. Zoom comes from a communications background — they built a better phone call and they'll keep optimising the phone call. Not Salesforce. Salesforce comes from a database background — they'll keep adding video features to their CRM, which is the equivalent of putting a webcam on a spreadsheet.
The pattern from history is clear. Salesforce wasn't built by a database company — it was built by someone who left Oracle because he understood the relationship was the unit of value, not the record. Slack wasn't built by an email company — it was built by a games company that accidentally discovered their internal communication tool was more valuable than the game.
The transformative platform gets built by someone who has the problem, not someone who has the technology.
So who has this problem? Who sits in meetings watching context evaporate, watching decisions get lost, watching the for loop spin? Not video engineers. Not AI researchers. It's operations people. Solutions architects. The person who understands that a meeting isn't a conversation — it's a node in a business process, and the outputs need to feed structured systems downstream.
The Snake Charmer Problem
Here's the part that decides who wins.
A snake charmer isn't controlling the snake. The snake can't even hear the music. What the charmer understands is the snake's behaviour — how it responds to movement, to proximity, to rhythm. The audience thinks the music is doing the work. The charmer knows the music is irrelevant.
Every AI meeting tool being built right now is building a louder flute. Better transcription. Better summarisation. Better NLP. More music.
But the moat — the thing that makes one product win and the others become features — is reading the room. Understanding that "yeah, probably" means "no" when spoken by a British engineer and means "yes, enthusiastically" when spoken by an American sales rep. Recognising that "let's take this offline" means the decision has already been made and you weren't consulted. Seeing that people in meetings are not primarily exchanging information — they're managing status, avoiding conflict, performing competence, hedging against blame, and occasionally, almost accidentally, making decisions.
The hard problem isn't technical. It's the interpretation layer.
The founding team that cracks it will look unusual: a workflow architect who thinks in business processes, someone who understands human behaviour at least as well as they understand code, and an AI engineer. In that order of importance.
Sarah is still running that for loop. Another summary landed in her inbox this morning. Another meeting is being scheduled right now to clarify what the last meeting decided.
The tools captured every word. They just missed everything that mattered.
Build the tool that reads the room, not the transcript.