
"We've Always Done it That Way"

  • Writer: mikemason100
  • Mar 16
  • 6 min read
KC-135 at Tinker AFB after an overpressurisation accident. Image courtesy of the USAF

Lessons from the KC-135 maintenance explosion

In 1999, a KC-135 tanker was undergoing depot-level maintenance at Tinker Air Force Base in Oklahoma. As part of the maintenance process, technicians needed to conduct a fuselage pressurisation test, a routine procedure designed to simulate the conditions the aircraft experiences during flight and confirm that the structure and sealing systems can safely withstand those loads.


During the test, the pressure inside the aircraft continued to rise and eventually the fuselage failed catastrophically. The structure ruptured violently, tearing open a large section of the aircraft and effectively destroying it. Fortunately, no one was killed, although the aircraft itself was a total loss.


Whenever incidents like this occur, the search for explanations begins almost immediately. Investigations, reports, and post-incident commentary often try to identify the moment when someone “got it wrong,” because locating a specific action or decision can create the impression that the problem has been understood and resolved.


The danger with this approach is that it can stop the learning process before it has really begun.


The phrase that appears after many accidents

While reading about this event, one detail stood out. An engineer involved in the testing had reportedly been using a homemade pressure gauge rather than the issued equipment, and when asked about it the explanation was familiar:


“We’ve always done it that way.”


This phrase appears in countless accident reports across many industries. It is often presented as a simple explanation for why something unsafe occurred, usually accompanied by language suggesting that someone failed to follow the correct process.


A previous post about this accident on LinkedIn used exactly that sort of wording, describing how individuals failed to follow procedures and failed to use the proper equipment. Language like this feels decisive and clear. It also tends to close down curiosity rather quickly.


If we are interested in learning rather than judging, the phrase “we’ve always done it that way” should trigger a series of questions rather than a conclusion.


The questions that rarely get asked

The presence of a homemade gauge raises a number of possibilities, many of which remain unexplored when the discussion stops at procedural non-compliance. One possibility is that the engineer was simply ignoring the correct equipment and procedures, which would support the familiar narrative that safety problems arise when individuals choose not to follow the rules.


Another possibility is that the homemade gauge actually provided advantages over the issued equipment. It may have been easier to read, quicker to install, or more accurate for the specific task being performed. Engineers and technicians frequently adapt tools in order to make their work more effective, particularly in environments where the official tools were designed for a slightly different purpose.


A further avenue to explore is that the procedure itself was unclear or poorly written. Maintenance documentation often evolves over time, and it is not uncommon for frontline personnel to develop local practices that make the task easier to accomplish in the real environment.


There is also the possibility that the official process existed on paper yet was difficult to follow in practice due to time pressures, coordination issues, or the physical constraints of the workspace.


None of these possibilities excuse unsafe outcomes. They do, however, remind us that work as imagined in procedures and work as performed in reality are rarely identical. Without understanding the reasons behind the local practice, we are left with a very incomplete picture of the system that produced the accident.


The story we like to tell ourselves

When investigations conclude that someone did not follow a documented procedure, the organisational response often becomes quite predictable. The conclusion is framed around compliance, and the solution involves reinforcing the need to follow the rules more carefully in the future.


This is relatively easy to do and, from a distance, can appear logical. From the perspective of learning, it can be deeply unsatisfying. People who work inside complex organisations generally believe they are doing the right thing at the time. Their actions make sense within the environment they are operating in, which includes the tools available, the information provided, the expectations of their peers, and the practical realities of getting the job done.


When a technician says “we’ve always done it that way,” they are usually describing a practice that has developed over time and has been accepted by the surrounding system. That practice may even have been visible to supervisors, trainers, and other teams for years without attracting attention.


If the practice truly represented a dangerous deviation, the more interesting question becomes why the organisation allowed it to persist.


Normal work creates the conditions for failure

Accidents rarely arise from a single unusual action. They emerge from patterns of normal work that gradually create hidden vulnerabilities. In this case, investigators determined that the aircraft’s pressure relief valves had been secured closed during earlier maintenance work, removing an important safety defence that would normally prevent the aircraft from being over-pressurised during testing.


The presence of a homemade gauge adds another layer to the story, yet the limited information available leaves significant gaps in our understanding. The relationship between the gauge, the procedures, and the configuration of the aircraft is not entirely clear, which makes it difficult to determine how the various elements of the system interacted.


What can be said with some confidence is that several different conditions aligned during the test, and those conditions had been created over time by a range of perfectly ordinary activities.


The engineers performing the test were not trying to destroy an aircraft. They were conducting a routine maintenance task using the tools, information, and practices that existed within their environment. From their perspective, the situation likely appeared familiar and manageable.


The role of leadership and culture

The real opportunity for learning lies not in identifying the last action before the fuselage failed, but in examining the organisational environment that allowed these conditions to develop.


Leaders often emphasise the importance of following procedures, and there is good reason for doing so. Procedures capture hard-earned knowledge and provide structure for complex operations. At the same time, experienced organisations recognise that procedures cannot perfectly capture the complexity of real work. Teams will inevitably adapt tools and methods in order to cope with the demands of the environment, and those adaptations can either strengthen or weaken safety depending on how well they are understood.


When local practices such as homemade tools or informal workarounds begin to appear, they should be treated as valuable signals about how work is actually being performed. Those signals offer an opportunity to ask whether the official system is supporting the people who must operate within it.


Ignoring those signals allows small gaps between “work as imagined” and “work as done” to grow larger over time.


Final thought

The phrase “we’ve always done it that way” often appears within accident investigations as though it provides a complete explanation for what went wrong.


In reality, it should be the beginning of a much more interesting set of questions:


Why did the practice develop in the first place?

What problem was it solving for the people doing the work?

Was the official process practical in the real environment?

Did leaders and supervisors know about the adaptation?

Was the system designed in a way that made the safer path the easiest one to follow?


Without exploring these questions, we risk replacing genuine learning with a simple story about individuals who failed to follow the rules.


The KC-135 that ruptured in the hangar was not destroyed by a single decision made on the day of the test. The conditions for the event were shaped over time by the way the organisation designed its processes, tools, and communication between teams.

Understanding those conditions is far more valuable than identifying someone who “failed to” do the right thing.


The goal is not to prove that people made mistakes. The goal is to understand how normal work can quietly build a path to an unexpected outcome.

-----------------------------------------------------------------------------------------------------------------------------------------------------

On Target co-founders Mike Mason and Sam Gladman

Mike Mason and Sam Gladman are the co-founders of On Target, a leadership and team development company that brings elite fighter pilot expertise into the corporate world. With decades of combined experience in high-performance aviation, they specialise in translating critical skills such as communication, decision-making, and teamwork into practical tools for business. Through immersive training and cutting-edge simulation, Mike and Sam help teams build trust, improve performance, and thrive under pressure—just like the best flight crews in the world.


If you'd like to learn more about how On Target can help your team, contact Mike and Sam at info@ontargetteaming.com.



