Designing for Oops
Why human error isn't a flaw, it's a signal
We tend to treat mistakes as personal failures: lapses in discipline, focus, or intelligence. But anyone who has ever sent a text message to the wrong person, walked into a room and forgotten why, or turned a key the wrong way knows: human error isn’t an exception. It’s the rule.
The real issue isn’t that humans make mistakes. The issue is that most of our systems pretend we don’t.
If we want safer healthcare, friendlier devices, and less chaos in daily life, we need to understand why errors happen and how smart design can keep them from spiraling into disasters.
Here’s a simple framework for thinking about human error, inspired by Don Norman’s book The Design of Everyday Things, and why the healthcare system desperately needs to pay attention.
Why Error Happens: The Human Brain Isn’t a Machine
People forget. They get distracted. They rely on habits. They make assumptions. This isn’t a moral failing, it’s cognitive reality.
Most environments, however, are built as if humans are flawless executors: “Just pay attention!”, “Just remember!”, “Just double-check!”
But “just” is doing a lot of heavy lifting there. Any system that depends on perfect memory, perfect attention, or perfect calm is already flawed. Human error isn’t random; it’s predictable. And if it’s predictable, it can be designed for.
Slips vs. Mistakes: Two Types of Human Error
Understanding the difference between slips and mistakes matters, because each requires a different solution.
Slips: Right intention, wrong execution.
You meant to turn the lock clockwise, but went the other way. You meant to grab your glasses, but picked up your sunglasses. You meant to click “Save”, but hit “Delete”.
Slips are errors of attention and action. They happen when the environment doesn’t provide enough feedback or clarity.
Mistakes: Wrong intention from the start.
You thought the meeting was at 2 p.m., but it was at 1. You assumed a button did one thing, but it did another.
Mistakes are errors in mental models: the underlying understanding of how something works.
Slips need better design. Mistakes need better understanding.
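To make “slips need better design” concrete, here is a minimal sketch in Python of the classic fix for the Save/Delete slip above: make the destructive action reversible instead of asking the user to be more careful. The TrashBin class and the 30-second undo window are illustrative choices, not anything from Norman’s book.

```python
import time


class TrashBin:
    """Design for slips: deletion is recoverable, so a mis-click isn't a disaster."""

    UNDO_WINDOW_SECONDS = 30  # arbitrary illustrative choice

    def __init__(self) -> None:
        # name -> (content, time it was deleted)
        self._trash: dict[str, tuple[str, float]] = {}

    def delete(self, name: str, content: str) -> None:
        # Instead of destroying the document, park it where it can be recovered.
        self._trash[name] = (content, time.time())

    def undo(self, name: str) -> str | None:
        # A slip caught within the window costs one click, not a lost document.
        if name not in self._trash:
            return None
        content, deleted_at = self._trash[name]
        if time.time() - deleted_at <= self.UNDO_WINDOW_SECONDS:
            del self._trash[name]
            return content
        return None


trash = TrashBin()
trash.delete("report.txt", "Quarterly numbers...")
recovered = trash.undo("report.txt")  # the wrong click is reversible
```

Undo doesn’t prevent the slip; it removes its cost, which is usually the cheaper and more humane fix than demanding perfect attention.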
Social and Institutional Pressures
Even when we notice an error, we often stay quiet. Why? Because errors carry a social cost. People fear embarrassment, discipline, or reputational damage.
Workers hide mistakes so they don’t look incompetent.
Professionals worry reporting errors will end careers.
Institutions bury problems to avoid liability or scandal.
When error becomes something shameful, people stop talking about it. When they stop talking, the system loses the very information it needs to improve. Silence is the enemy of safety.
Reporting Error: When Admitting “Oops” Becomes a Superpower
Some industries have already learned this lesson; aviation is the standout. In the United States, NASA (the National Aeronautics and Space Administration) created a voluntary, confidential reporting system that lets pilots report their own mistakes without fear of punishment. Once a report is processed, NASA removes the identifying details. The goal is learning, not blame.
This single design choice, treating error reports as valuable data, transformed flying into one of the safest activities humans do. Imagine that mindset everywhere else: errors aren’t confessions. They’re clues.
Detecting Error: Catching the Problem Before It Explodes
Toyota offers a masterclass in error detection. Their concept of Jidoka encourages any worker on the assembly line to pull the andon cord when something seems off. Production stops. The team gathers. They ask “Why?” again and again until the root cause emerges.
No shame. No hiding. No “just be more careful next time.”
It’s an institutional acknowledgement that errors should be caught early, ideally before the defective part moves any further.
Hospitals and healthcare systems, by contrast, often operate with the cultural equivalent of “don’t pull the cord unless you’re absolutely sure”. In a high-pressure environment, that hesitation is costly.
Designing for Error: Making the Wrong Thing Hard and the Right Thing Obvious
If reporting and detecting error are reactive, designing for error is proactive. This is the world of poka-yoke: error-proofing. The idea is to create systems that make mistakes difficult or impossible. You see it everywhere:
A microwave won’t start unless the door is closed.
A car will make a sound if you haven’t fastened a seatbelt.
A USB-C connector works either way up, and a polarized plug fits only one way.
These designs keep humans from needing to be perfect. They replace vigilance with structure. At home, tiny design tweaks, such as a dedicated hook or bowl by the door for keys, do more for reliability than “trying harder” ever will.
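In software, the same interlock idea shows up as code that refuses to enter a dangerous state in the first place. Here is a minimal sketch modeled on the microwave example above; the Microwave class and its method names are hypothetical, not any real appliance API.

```python
from enum import Enum, auto


class Door(Enum):
    OPEN = auto()
    CLOSED = auto()


class Microwave:
    """Poka-yoke sketch: the interlock makes 'heating with the door open' impossible."""

    def __init__(self) -> None:
        self.door = Door.OPEN
        self.running = False

    def close_door(self) -> None:
        self.door = Door.CLOSED

    def open_door(self) -> None:
        self.door = Door.OPEN
        self.running = False  # opening the door always cuts the power, no vigilance needed

    def start(self) -> None:
        # The interlock: starting simply isn't possible until the door is closed.
        if self.door is not Door.CLOSED:
            raise RuntimeError("Close the door before starting.")
        self.running = True


oven = Microwave()
oven.close_door()
oven.start()      # runs
oven.open_door()  # heating stops immediately; it can't be left on by accident
```

The point isn’t the error message; it’s that the unsafe combination of states has no code path leading to it, which is exactly what a physical interlock does.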
The Big Question: Why Not Medicine?
Healthcare is one of the most complex systems humans have built and also one of the least forgiving of mistakes. Yet the stakes couldn’t be higher.
The medical field faces every barrier discussed above: fear of lawsuits; fear of blame; institutional concerns about reputation; hierarchical cultures that discourage speaking up; and environments that demand superhuman vigilance eleven hours into a shift.
But if aviation can set up nonpunitive reporting systems, and manufacturing can empower workers to halt production, and consumer products can use poka-yoke to prevent predictable slips, why hasn’t medicine embraced these same principles? We already know how to build safer systems, so the real question is:
What would it take to finally apply these principles where they matter most: in the systems that care for human lives?

