
Were We Lucky, or Were We Good? What an F-35 Crash Reveals About Decision-Making When the Playbook Runs Out

  • Writer: mikemason100
  • Feb 2
  • 5 min read

F-35 on takeoff. Image courtesy of 354th Fighter Wing Public Affairs

On 28 January 2025, a USAF F-35A was destroyed during operations at Eielson Air Force Base, Alaska. The pilot ejected safely. There was no hostile action and no single, obvious technical failure that immediately explains what happened.


Instead, the Accident Investigation Board (AIB) describes a complex, unfamiliar problem involving landing gear abnormalities, contaminated hydraulic fluid, frozen components, and misleading aircraft indications. This eventually resulted in the aircraft transitioning into an “on-ground” flight control law while airborne, rendering it uncontrollable.


The report identifies ice contamination within the landing gear system as the primary cause. Alongside this, it places significant emphasis on human decision-making, judgement, and failure to anticipate consequences.


As with many investigations, the report is detailed, technical, and authoritative. And yet, as a learning tool, it falls short. Not because the facts are wrong, but because the analysis often stops just as the most useful questions begin.


If investigations don’t help us understand why capable, professional people made reasonable decisions in an unfamiliar situation, then recurrence prevention remains largely theoretical.


What Happened (In Simple Terms)

Shortly after take-off, the pilot’s wingman observed an abnormal nose landing gear door position. The pilot extended the landing gear and noted that the nose wheel was canted approximately 17 degrees to the left.


The pilot followed the applicable Pilot Checklist Procedures (PCLs); however, none addressed this specific scenario. A coordinated discussion followed, involving the pilot, the wingman, the Supervisor of Flying (SOF), operations leadership, and Lockheed Martin engineers.


Over an extended period, the team worked through possible recovery options using all available information. The agreed plan involved conducting touch-and-go landings to mechanically re-centre the nose wheel.


What was not known at the time was that water-contaminated hydraulic fluid was freezing within the main landing gear struts during flight. This prevented full extension of the gear and caused multiple Weight-On-Wheels (WoW) sensors to incorrectly indicate the aircraft was on the ground while airborne.


Following a subsequent touch-and-go, the aircraft transitioned into an on-ground flight control law in the air. Control was immediately lost and the pilot ejected.
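To make the trap concrete, here is a deliberately simplified sketch in Python. This is emphatically not the F-35's actual control logic; the majority-voting scheme and every name below are invented purely to illustrate why several sensors agreeing is not the same as several sensors being right:

# Toy model only: NOT the F-35's real logic. All names are invented.
from dataclasses import dataclass

@dataclass
class WowSensor:
    on_ground: bool  # what this Weight-On-Wheels sensor currently reports

def select_control_law(sensors: list[WowSensor]) -> str:
    """Majority vote: if most sensors report weight on wheels,
    treat the aircraft as being on the ground."""
    ground_votes = sum(s.on_ground for s in sensors)
    return "on-ground" if ground_votes > len(sensors) / 2 else "in-flight"

# Independent failures are handled well: one spurious sensor is outvoted.
print(select_control_law([WowSensor(False), WowSensor(False), WowSensor(True)]))
# -> in-flight

# A common cause (ice affecting several struts at once) biases the
# sensors the same way, so the vote is unanimous and confidently wrong.
print(select_control_law([WowSensor(True), WowSensor(True), WowSensor(True)]))
# -> on-ground, selected while the aircraft is airborne

The design insight is that redundancy protects against independent failures; it offers little protection when a single cause, such as frozen hydraulic fluid, pushes every channel in the same direction.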


Notably, the report also documents a similar nose gear anomaly during a flight approximately one week later. That aircraft landed safely, and the pilot was unaware that any abnormal condition existed.


Which raises an uncomfortable but important question:

Was the successful outcome evidence of good decision-making, or simply good luck?


The Problem with “Decision-Making” as a Cause

The AIB identifies crew decision-making as a significant contributing factor.

This may be factually defensible, but it is not especially useful as a lesson.


The report itself acknowledges that:

  • The scenario had not previously been encountered in the F-35 fleet

  • No checklist existed for the problem

  • The pilot and supporting team used all available resources

  • Decisions were made deliberately, not impulsively


Yet the analysis frequently leans on counterfactual language:

  • should have

  • could have

  • failed to


These statements describe outcomes, not understanding. They tell us what didn’t happen, but not why the chosen actions made sense at the time. When outcomes are known, alternative paths become obvious. Before the outcome, they rarely are.


Running Out of Checklist

This accident sits squarely at the edge of procedural coverage. The PCLs worked exactly as designed, until they didn’t. Beyond that point, the crew and supporting team were forced to reason from first principles, collaborate, and improvise.


This is not a failure of discipline. It is the reality of operating in complex systems. When investigations criticise decision-making in these moments without acknowledging the absence of procedural guidance, they risk reinforcing the illusion that better compliance alone would have prevented the accident.


Better learning question: How do we support good decision-making when the playbook runs out?


The Maintenance Circular That Matters… After the Fact

The report references a Lockheed Martin maintenance circular issued approximately eight months prior to the accident. This document highlighted potential WoW sensor issues that could affect aircraft controllability.


The implication is that, had this circular been considered during recovery planning, a different course of action might have been chosen. This deserves scrutiny.


If the circular was critical, why was it not immediately available, salient, or actively referenced during a novel emergency involving multiple experts?


This is not primarily a decision-making problem. It is an information distribution and prioritisation problem. High-risk organisations generate vast quantities of guidance, bulletins, and technical notices. If critical information is not surfaced at the moment it matters, it effectively does not exist.
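As a thought experiment, here is a hypothetical sketch of what "surfacing" could look like: bulletins indexed by the systems they affect and pushed automatically to whoever is handling a fault in those systems. Every identifier and data structure below is invented for illustration, not drawn from any real system:

# Hypothetical sketch: push guidance to the moment of decision.
BULLETINS = [
    {
        "id": "BULLETIN-A",  # invented identifier
        "systems": {"landing_gear", "wow_sensors"},
        "summary": "WoW sensor anomalies may affect aircraft controllability",
    },
    {
        "id": "BULLETIN-B",
        "systems": {"fuel"},
        "summary": "Fuel probe calibration drift",
    },
]

def bulletins_for_fault(affected_systems: set[str]) -> list[dict]:
    """Return every bulletin touching a system involved in the active
    fault, so guidance reaches decision-makers without being asked for."""
    return [b for b in BULLETINS if b["systems"] & affected_systems]

# During a landing gear emergency, the relevant circular surfaces itself:
for b in bulletins_for_fault({"landing_gear", "wow_sensors"}):
    print(b["id"], "-", b["summary"])
# -> BULLETIN-A - WoW sensor anomalies may affect aircraft controllability

The point is not the code; it is the design choice it embodies: push relevant guidance to the moment of decision, rather than relying on recall under pressure.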


Blaming people for not recalling or applying an eight-month-old document under pressure does little to improve future outcomes.


Better learning question: How does important safety information reliably reach decision-makers when it is needed most?


Were We Lucky, or Were We Good?

The contrast between this accident and the subsequent uneventful landing is telling.

In one case, the aircraft became uncontrollable. In the other, a similar issue went unnoticed and the flight ended safely.


The difference was not intent, professionalism, or effort. It was conditions, timing, and incomplete understanding. When success and failure hinge on luck rather than control, that should concern us more than reassure us.


Culture, Oversight, and Normalisation of Deviance

The report identifies broader issues relating to:

  • Hydraulic fluid handling and contamination controls

  • Hazardous materials oversight

  • Documentation and continuity within maintenance processes


There is no indication that anyone believed they were operating unsafely. That is precisely the problem. Small deviations, when nothing goes wrong, gradually become normal. Over time, the system drifts away from its original safety margins without anyone noticing.


This is not misconduct. It is normal organisational behaviour under pressure. Waiting for accidents to reveal these weaknesses guarantees that learning will always arrive too late.


Why This Matters Beyond Aviation

The same patterns appear repeatedly in business investigations:

  • “They failed to escalate.”

  • “They didn’t follow the process.”

  • “They should have challenged the decision.”


These conclusions feel corrective but rarely change anything. If we want fewer failures, we must design systems where:

  • Critical information is visible

  • Decision-makers are supported under uncertainty

  • Oversight is realistic, not theoretical

  • Success is examined as carefully as failure


Blame feels decisive. Learning is harder. Only one reduces recurrence.


Conclusion: Designing for the Reality of Work

The F-35 crash at Eielson was not the result of careless people making poor choices. It was the outcome of capable professionals confronting a problem their system was not designed to handle.


If we want safer outcomes, in aviation or business, we must move beyond judging decisions with hindsight and start designing environments that support good judgement in real time.


The goal is not perfect decisions.


It is systems that make bad outcomes harder and luck unnecessary.

-----------------------------------------------------------------------------------------------------------------------------------------------------

On Target co-founders Mike Mason and Sam Gladman

Mike Mason and Sam Gladman are the co-founders of On Target, a leadership and team development company that brings elite fighter pilot expertise into the corporate world. With decades of combined experience in high-performance aviation, they specialise in translating critical skills such as communication, decision-making, and teamwork into practical tools for business. Through immersive training and cutting-edge simulation, Mike and Sam help teams build trust, improve performance, and thrive under pressure—just like the best flight crews in the world.


If you'd like to learn more about how On Target can help your team, contact Mike and Sam at info@ontargetteaming.com.
