Walmart: try { self-checkout } catch { arrest the user }

Why blaming users is the laziest form of engineering — and the most profitable.

$48 of Christmas Lights, Bread, and Cereal

Lesleigh Nurse went to Walmart with her husband and three children in November 2016. Her self-checkout scanner malfunctioned and froze. A Walmart associate came over and helped. Her husband paid $122 for groceries.

As the family left, an asset protection manager stopped her. Accused her of not paying for eleven items worth $48.

She was arrested. Mugshot taken. Four hours in county lockup. Her children were berated by Walmart employees. When they went back to school, the other children knew.

The other children always know.

Criminal charges were eventually dismissed. No one from Walmart showed up to court. But the demand letters kept arriving — a Florida law firm wanting $200 to make it “go away.”

At trial, the judge criticised Walmart for intentionally destroying security camera footage that could have exonerated her. A jury awarded Nurse $2.1 million in punitive damages.

Walmart filed a motion calling the damages excessive.

The Catch Block That Called the Police

Nurse was not an anomaly. She was a metric.

Expert testimony revealed that, over a two-year period, Walmart accused approximately 1.4 million people of criminal theft through its self-checkout systems. Collected more than $300 million through civil demand letters. A “Corrective Education Company” deployed in 2,000 stores offered people a choice: pay $400 for a mandatory online course, or face prosecution. Sign-up rate: ninety per cent.

Let me say that plainly. The self-checkout system wasn’t broken. It was working exactly as designed. The $300 million in demand letters wasn’t a side effect of a flawed system — it was a revenue line. The catch block didn’t just blame the user. It invoiced the user.

Now let me explain how it worked in terms any developer will understand instantly.

Every engineer knows what a try-catch block is. You wrap risky operations in a try. When something fails — and something will fail — execution falls into the catch. And every engineer knows, without being taught, without debating it, whose responsibility that catch block is.

It’s yours. The engineer’s. The system’s.

When your code throws an exception, you don’t phone the user and say “why did you make my code fail?” You don’t file a police report against the caller. You handle the failure gracefully. You log it, recover, offer a fallback. That’s literally what catch blocks exist for — because systems fail and the system is responsible for handling its own failures.
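As a sketch of that contract (hypothetical names throughout — `fetchPrice` and `scanItem` are stand-ins, not any real API), a catch block that owns its failure logs for the engineer, recovers with a fallback, and never blames the caller:

```typescript
// Hypothetical sketch: the catch block owns the failure.
type ScanResult =
  | { ok: true; price: number }
  | { ok: false; needsAssist: true };

// Stand-in for any risky operation (a barcode lookup, a network call).
function fetchPrice(barcode: string): number {
  if (barcode.length === 0) {
    // A system failure, not a user crime.
    throw new Error("scanner returned empty barcode");
  }
  return 4.99;
}

function scanItem(barcode: string): ScanResult {
  try {
    return { ok: true, price: fetchPrice(barcode) };
  } catch (err) {
    // Log for *us* to debug. The caller did nothing wrong.
    console.error("scan failed:", (err as Error).message);
    // Recover with a fallback: flag the item for a human assist.
    return { ok: false, needsAssist: true };
  }
}
```

The fallback here is the design decision: an unreadable barcode routes to help, not to accusation.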

Here’s what Walmart did. They took the same barcodes, the same packaging that’s notoriously difficult to scan, the same checkout flow — and replaced the trained cashier who does this eight hours a day with a tired parent, three children in the cart, trying to get through Saturday errands.

When the system inevitably produced errors — dirty scanner, package that wouldn’t read, item the customer thought had registered but hadn’t — these were, in any reasonable engineering framework, system exceptions. Predictable, reproducible failure modes that should have fallen into a catch block the system owned.

Instead, Walmart passed the catch to the user. Not just passed it — criminalised it. Every unscanned item became evidence of criminal intent. The burden of the system’s failure was exported entirely to the person who had no power to fix the system, no access to the code, no ability to debug a dirty scanner or a package the barcode reader couldn’t parse.

This is the equivalent of writing a catch block that sues the caller.

But Most of Us Aren’t Walmart

Here’s where I want to be honest, because this piece isn’t really about Walmart. Walmart wrote a catch block that called the police, and we all agree that’s monstrous. But the architecture of what they did isn’t monstrous. It’s mundane. It’s the same thing we do every time we throw an error message at a user for something our system should have handled.

The only difference is that Walmart had the audacity to do it with handcuffs instead of a red validation message.

Once you see the pattern, you can classify every catch block you’ve ever written into one of three categories.

The Malicious Catch. This is Walmart. The system fails, and the catch is designed to extract value from the user’s confusion. The demand letters, the “corrective education” fees, the civil recovery programmes — these aren’t engineering failures. They’re business models built on top of engineering failures. Most of us will never write one of these. But we should be able to recognise them, because they always wear the same disguise: a system that treats ambiguity as guilt.

The Lazy Catch. This is most of us, most of the time.

Your API returns 500 Internal Server Error with an empty body. The frontend catches it and displays “Something went wrong, please try again.” The user tries again. And again. And again. Seventeen times, with increasing desperation. The error is on your side. Trying again will never fix it. But the user doesn’t know that, because your catch block told them to try again instead of telling them the truth — which is that you failed and there’s nothing they can do about it.

That’s the lazy catch in its purest form. The system broke. The catch block pretended the user could fix it.
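The difference between the lazy catch and an honest one is a few lines. A sketch, with made-up message strings; the point is only that 5xx and 4xx deserve different advice:

```typescript
// Hypothetical sketch: map an HTTP status to an honest message.
// 5xx means *we* failed; telling the user to retry is a lie.
function errorMessage(status: number): string {
  if (status >= 500) {
    return "Our server failed. Retrying won't help; we've been notified.";
  }
  if (status === 429) {
    // The one case where "try again" is actually true.
    return "Too many requests. Please wait a moment and try again.";
  }
  if (status >= 400) {
    return "We couldn't understand that request. Check the input and try again.";
  }
  return "OK";
}
```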

But the lazy catch has subtler forms too. The phone number field that rejects (555) 123-4567 because it expects 5551234567 — three lines of regex could parse any reasonable format, but nobody wrote them. The form that vaporises twelve minutes of work on a single validation failure — caching was a solved problem before most of us were born, but nobody implemented it. The password field that blocks paste — actively sabotaging the single most secure method of entering credentials, because somewhere a product manager thought paste-blocking prevents… something.
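The phone field, at least, really is a small fix. A sketch (`normalizePhone` is a hypothetical helper, assuming ten-digit numbers):

```typescript
// Hypothetical sketch: accept any reasonable phone format by
// stripping everything that isn't a digit, then validating length.
function normalizePhone(input: string): string | null {
  const digits = input.replace(/\D/g, ""); // "(555) 123-4567" -> "5551234567"
  return digits.length === 10 ? digits : null; // null = a genuine error
}
```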

None of these are malicious. They’re lazy. But here’s the thing about lazy catch blocks at scale: they produce the same aggregate harm as malicious ones. The harm is just distributed so thinly across so many users that nobody notices.

That is the genius of the lazy catch. The cruelty is spread so thin that no single person bears enough of it to revolt.

CAPTCHAs make you prove to a machine that you’re not a machine by performing a task machines are now better at — and your unpaid labour trains Google’s self-driving car AI. Every modern login flow is like being handed a coconut when all you wanted was a banana. Every time. You peel the husk, crack the shell, dig through the fibre — just to get to the thing you came for. Each individual encounter feels too trivial to complain about. That’s what makes it so durable.


The Legitimate Catch. The user typed their own email address wrong. No system on earth can read their mind. This is a genuine user error, and a clear error message is the right response.

These exist. But they are far rarer than we pretend. Most of what we file under “user error” is actually “system error we couldn’t be bothered to handle.” The honest question every engineer should ask when writing an error message is: could my system have handled this? If a human being could look at the input and figure out what was intended, your system could too. You just chose not to.
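A sketch of that principle, with a hypothetical `parseQuantity` helper: try the interpretations a human would try before declaring user error:

```typescript
// Hypothetical sketch: before rejecting input as "user error",
// attempt the readings a human would attempt.
function parseQuantity(input: string): number | null {
  const cleaned = input.trim().toLowerCase();
  const words: Record<string, number> = { one: 1, two: 2, three: 3 };
  if (cleaned in words) return words[cleaned]; // " two " -> 2
  const n = Number(cleaned.replace(",", "."));  // "1,5" -> 1.5
  // Only when no human reading works is it a genuine user error.
  return cleaned !== "" && Number.isFinite(n) ? n : null;
}
```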

Why Lazy Catches Survive

This is the part that actually matters, and it’s not because engineers are bad at their jobs.

The cost of a lazy catch block is borne entirely by the user. The cost of fixing it is borne entirely by the engineer. And there is no feedback loop connecting the two.

Think about that for a moment. In what other engineering discipline do we tolerate a complete disconnect between the person who experiences the failure and the person responsible for fixing it?

If a bridge sways, the structural engineer hears about it. If a car brakes poorly, the manufacturer’s warranty division hears about it. But when your phone number field wastes forty seconds of someone’s life, nobody hears about it. The user sighs, reformats their input, and moves on. There is no Jira ticket. There is no incident report. There is no feedback signal of any kind.

The error is invisible to the system that produced it.

This is why lazy catches survive code reviews. This is why they survive QA. This is why they survive in production for years. Because by every measurement the team takes, validation is working. Malformed input is being caught. Error rates are within acceptable bounds. The dashboard is green.

Sound familiar? Walmart’s dashboards said shrinkage was down and the demand letters were generating revenue. By its own metrics, everything was working. Your dashboards say validation is catching bad input. By your metrics, everything is working too.

Same blindness. Same cause. You’re measuring system performance from the system’s perspective, never from the user’s.

And here’s why this isn’t just a philosophical problem — it’s a structural one. When your catch block’s foundational assumption is “the user got it wrong,” there is only one direction your system can evolve. More enforcement of the same frame. You can’t iterate toward a gentler approach. You can’t A/B test a different assumption. The catch block has defined the user as the source of the exception, and the only tool left is escalation.

Walmart’s version of escalation: after losing a landmark jury trial, after a court called their programme textbook extortion, they spent $500 million on more surveillance, more AI, more cameras, more controlled exit lanes that look like airport security checkpoints. In some stores they reserved self-checkout for paid subscribers — a paid tier for the machines that replaced free cashiers. Each layer of enforcement added cost, added friction, added resentment — which produced more of the behaviour they were trying to suppress, which demanded more enforcement. A feedback loop with no stable state.

Your version of escalation: after enough users complain about the login flow, you add a CAPTCHA. After the CAPTCHA frustrates people, you add a rate limiter. After the rate limiter locks out legitimate users, you add an account recovery flow that takes three days and requires a photo of a government ID. Same spiral. Same architecture. Smaller stakes, identical structure.

In a handful of stores, Walmart removed self-checkout entirely. Local police reported that theft-related calls dropped dramatically.

The try block was the problem all along. But the frame wouldn’t let them see it.

Own Your Catch Block

Every exception your system throws has two possible owners: you, or the person standing in front of the screen. And that choice — made a thousand times a day, in a thousand unremarkable lines of code — is a design decision with consequences that compound.

When a password policy produces Post-it notes on monitors, the policy failed.
When a self-checkout produces “theft,” the checkout failed.
When a 500 produces “try again,” the catch block lied.

The system failed. Not the user.

Think about Lesleigh Nurse’s children watching their mother get arrested because a barcode scanner misfired and a system decided that ambiguity equals guilt. Somewhere in a corporate office, the person who designed that system presumably went home and slept soundly, because the dashboard said everything was working.

And the dashboard wasn’t wrong. Everything was working. The arrests were working. The demand letters were working. The revenue was working.

The only thing that wasn’t working was the thing nobody measured: what the system was doing to the people standing in front of it.

Your system probably isn’t arresting anyone. But ask yourself — honestly — whether you’ve ever measured what your catch blocks are doing to the people who trigger them. Whether you even could measure it, with the dashboards you have.

If you can’t, then you and Walmart have more in common than you’d like to admit. Not in cruelty. Not in intent. But in architecture. You’re both measuring the system from inside the system, and calling it green.

The best systems are not the most sophisticated. They are the ones that assume the person in front of them is trying to do the right thing — because statistically, overwhelmingly, they are — and that when something goes wrong, the first place to look is the code, not the caller.

That’s the job. It has always been the job.