When “Normal” Becomes Dangerous: Three Lessons from a Simple, Fatal Mistake
- mikemason100
- Oct 13

In March 2023, a Beech Baron 58P crashed just after takeoff from Lubbock Executive Airpark, Texas. The aircraft lost power in its left engine and rolled inverted before impacting the ground. The pilot was killed.
The investigation found something startlingly simple at the heart of the tragedy: the fuel selector for the left engine was in the OFF position. That meant the engine wasn’t getting any fuel. With one engine producing thrust and the other dead, the aircraft rolled and became uncontrollable.
Investigators also found the pilot's mobile phone data. He had been on a call for more than 12 minutes, much of that time while he was conducting his pre-flight walkaround. The probable cause: "the pilot's failure to ensure a proper fuel selector position before takeoff."
This is of course true. At the same time, it’s also the kind of conclusion that stops learning dead in its tracks. “He should have paid more attention.” “He shouldn’t have been on his phone.” “He should have kept control.” All true. All unhelpful if we actually want to prevent the next accident.
The report lists no recommendations. It offers no discussion of systemic factors, no design insights, no organisational learning. Just a statement of human failure. Yet beneath this apparently simple mistake lie three far more important lessons, for aviation and for business alike, and they are what this post explores.
1. When “Normal” Becomes the Real Hazard
The pilot was seen walking around the aircraft, phone to his ear. The behaviour was unusual enough for a witness to note it, but not to intervene. That suggests it wasn't shocking; it had become normal enough to pass as acceptable behaviour.
When behaviour that should be exceptional quietly becomes accepted, we’ve entered the territory of normalisation of deviance. Over time, as shortcuts and distractions don’t immediately lead to disaster, they become routine. “I’ve done it before and nothing went wrong” becomes its own kind of logic.
In aviation, that logic can be deadly; in business, it can still have serious consequences. We skip the review step because "we're behind schedule." We run a meeting without notes because "everyone knows what to do." We make critical decisions half-distracted because "that's just how busy things are." Each deviation feels harmless until one day it isn't. The crucial thing to realise is that normalisation of deviance is just that: normal. Recognising it and doing something about it is what matters.
Leaders need to ask: What does normal look like here? Has efficiency or familiarity allowed risky shortcuts to creep in? If the norm includes distraction, fatigue, or unchecked assumptions, then we’re already setting ourselves up for failure. Culture isn’t defined by what’s written in manuals. It’s defined by what people do when no one’s watching and what others, or even ourselves, quietly accept.
2. Humans Are Human. Design for It
It’s easy to say, “The pilot was distracted.” It’s harder, and more valuable, to ask, “How could we design this system so distraction doesn’t have catastrophic consequences?” We are all prone to distraction. Phones ring, conversations happen, brains wander. Telling people to “pay more attention” is as ineffective as telling them not to blink.
In this case, there was no mechanical safeguard to prevent a takeoff with a fuel selector in the OFF position. There was no interlock, no alert, no secondary verification process. A single human oversight cascaded into an unrecoverable emergency. That's a design problem, not just a human one.
The same principle applies in business and leadership. If success depends on everyone being 100 percent attentive, 100 percent of the time, your system is already fragile. Humans get tired, interrupted, and distracted. Robust systems assume that and build checks, redundancies, and team behaviours to catch what individuals miss.
In aviation, that might mean tactile detents or warning systems for fuel selectors. In business, it might mean structured peer review, automated verification, or deliberate pauses before go-live decisions.
Don’t design for perfect people. Design for real ones.
3. The Language of Blame Prevents Learning
At the end of the report, the NTSB includes its standard disclaimer:
“The NTSB does not assign fault or blame for an accident or incident…”
Earlier in the report, it states:
“The probable cause was the pilot’s failure to ensure a proper fuel selector position before takeoff.”
This contradiction speaks volumes. Saying you don’t assign blame, and then assigning it anyway, sends a message about how we still often talk about failure.
It’s the same in many organisations. A project collapses and the post-mortem reads, “The team leader failed to manage the risk.” A data breach occurs and the report says, “The employee failed to follow policy.” In both cases, we might be factually correct, but we’ve stopped learning the moment we’ve found someone to fault.
Blame satisfies the need for closure. It feels neat and complete. It also shuts down curiosity, which is the very thing required for improvement. Once we say "human error," we stop asking why the error was so easy to make and so hard to catch.
If we want real learning, we need better language. Instead of “failure to,” try:
“The system allowed…”
“The process lacked…”
“Conditions made it likely that…”
Words mean worlds. The way we use them shapes our judgement, and with it whether we punish or prevent.
Connecting the Dots: The Business Parallels
This accident reads like a parable for organisational life.
Normalisation of risk: “We’ve always done it this way.”
Fragile systems: Success depends on individuals never slipping.
Blame over learning: Reports that name people, not conditions.
Each element can be found in boardrooms, hospitals, and project teams as easily as on an airfield.
So how do we change that?
Make normal visible. Regularly audit routines and processes, not just outcomes. Ask where people have adapted procedures and why. If adaptations make sense, incorporate them. If they create hidden risk, fix them.
Engineer resilience. Assume distraction will happen. Build in cross-checks, redundancy, and automation that catches human lapses early.
Change your post-mortem language. When things go wrong, start from curiosity, not culpability. Ask “what made this possible?” instead of “who did this?”
Whether it’s a cockpit or a company, failure rarely comes from a single moment of inattention. It comes from systems that allow small lapses to pass unnoticed until they matter most.
Final Thoughts
The tragedy near Lubbock wasn’t about a phone call. It was about a system that depended on a single perfect moment of attention to keep an aircraft safe.
When we treat human performance as flawless, we guarantee future failure. When we study why normal makes sense, and when we design for imperfection, we move from blame to learning, and from luck to resilience.
In aviation, that mindset saves lives. In business, it can save organisations.
-----------------------------------------------------------------------------------------------------------------------------------------------------

Mike Mason and Sam Gladman are the co-founders of On Target, a leadership and team development company that brings elite fighter pilot expertise into the corporate world. With decades of combined experience in high-performance aviation, they specialise in translating critical skills such as communication, decision-making, and teamwork into practical tools for business. Through immersive training and cutting-edge simulation, Mike and Sam help teams build trust, improve performance, and thrive under pressure—just like the best flight crews in the world.
If you'd like to learn more about how On Target can help your team, contact Mike and Sam at info@ontargetteaming.com


