
When the System Can’t See the Risk: Lessons From the LaGuardia Runway Collision

  • Mike Mason
  • 1 day ago
  • 6 min read
The Air Canada Express Jet, the morning after the collision. Image courtesy of Reuters.

It all starts routinely. It’s March 22, 2026, and at 23:35, Air Canada Flight 8646 is cleared to land on Runway 4 at LaGuardia Airport, New York. It’s on about a five-mile final on a normal descent profile.


At roughly the same time, fire vehicles are responding to an unrelated emergency. Their route will take them across that same runway. They call the tower. The first call is blocked and they try again. This is nothing unusual.


When the aircraft is about a mile and a half from touchdown, the controller asks which vehicle needs to cross. Truck 1 requests the crossing, and about 20 seconds later the clearance is given: “Truck 1… cross Runway 4.”


The aircraft is now about a quarter of a mile on final. Truck 1 reads the clearance back and starts moving. The aircraft continues the approach and very soon after, it crosses the threshold. The truck is still approaching the runway.


At 23:37:12, the controller calls: “Stop.”


This call is not ignored. It’s just late. At 23:37:17, the aircraft touches down and the truck crosses the hold short line.


At 23:37:20: “Truck 1 stop.”


Now, catastrophically, both are committed. The truck enters the runway and moments later, they collide.


On paper at least, the system was working. Air traffic control was active. Procedures were being followed and multiple operations were being coordinated in a normal but complex, high-tempo environment.


And yet, tragically, two people died and forty more were injured.


This wasn’t the result of someone making a single catastrophic mistake. It was the result of a system that allowed a risk to exist that it could not properly see, track or manage, and that distinction matters.


When the system can’t see the risk, it can’t manage it. As a result, it is forced, inadvertently, to accept it.


1. When the System Doesn’t See It, It Doesn’t Exist

There’s nothing unusual about a vehicle being on a runway. There’s nothing unusual about aircraft movements being tightly sequenced. There’s nothing unusual about controllers managing multiple moving parts at once.


However, when these elements come together, the outcome can be far less predictable than expected. In many cases, ground vehicles don’t present to the system in the same way as aircraft. They don’t show up on the same displays, they don’t generate the same alerts, and they rely heavily on voice communication to stay integrated into the picture.


That can create a gap in Situation Awareness. It is rarely a dramatic one, and it hardly ever stands out immediately. But a gap in which the system is no longer fully aware of everything matters.


As Situation Awareness degrades, the risk doesn’t disappear. It just becomes less visible. And once it’s harder to see, it’s no longer being managed in any meaningful way.


We see the same thing in business. Different teams working off different data, different systems, different assumptions. The information still exists, but it’s fragmented and much harder to interpret accurately. Risks are present, but they aren't visible to everyone who needs to see them.


2. When Communication Feels Like Understanding

At the centre of this event is communication. Instructions were passed. Associated acknowledgements were passed back and operations continued.


If you only scratch the surface, this looks like control. But it isn’t, because communication is not the same as shared understanding. That is especially true under pressure, when multiple things are happening at once and people start filling the gaps with assumptions. Systems start to drift. Calls are made but not fully processed. Pieces of information are missed or only partially heard. A mental model forms that no longer reflects reality, but still feels coherent to the person holding it.


From that point on, people are not making poor decisions. They are usually making reasonable decisions that are based on the wrong picture. And that’s a very different problem.


It’s also one that shows up everywhere outside aviation. Teams believe they are aligned because something has been said, or sent, or acknowledged. In reality, they are operating with subtly different understandings of what is happening and what really matters.


By the time the gap becomes visible, it may already be too late to close it cleanly.


3. When Complexity Creeps Past Control

Make no mistake, LaGuardia is not a simple environment: very high traffic density, tight spacing and constant pressure to keep things moving. The controllers are balancing multiple streams of activity, making continuous adjustments to keep everything flowing. They are, without doubt, experts in their field. They are very good at what they do, and the vast majority of the time, what they do works.


Paradoxically, that level of individual ability can make the system dangerous. Systems that work most of the time don’t feel fragile. They feel efficient.


As complexity increases, with more movements, more dependencies and more coordination, the margin reduces. Nothing usually breaks immediately. There’s no clear tipping point. It’s just a gradual shift in which the system works harder and harder to stay in control, and that shift is extremely easy to miss.


Looking in from the outside, performance still looks good. From the inside, the system is running closer to the edge than anyone is comfortable admitting, or perhaps even knowing, as people become more results-focussed and the pressure builds.


We see this in organisations all the time. Growth, scale and efficiency drive complexity up. Systems, processes and structure don’t always keep pace. Work becomes more interconnected, more time-sensitive, more dependent on coordination and everything still gets delivered.


Which reinforces the belief that everything is fine. Until it isn’t.


4. When Parts of the System Know, But the System Doesn’t Move

Concerns had already been raised. There were signals (more obvious with hindsight). There had been conversations. There was certainly awareness that parts of the system were operating closer to the edge than they should have been.


Perhaps because we're often focused on outcomes, nothing fundamentally changed. Beyond what I wrote earlier, this perhaps isn’t a failure to identify risk. It’s more a failure to respond to it. And it’s a common organisational weakness.


Organisations are often quite good at spotting issues locally, individually. People raise concerns. Teams see friction. Data highlights patterns.


But unless those signals are joined up and acted on at a system level, they don’t lead to change; they just become part of the background noise. Left unchecked over time, that has the potential to create a dangerous narrative:


“If it was that serious, something would have been done.”


In reality, the opposite is often true. The longer a risk exists without consequence, the more normalised it becomes.


Final thought

As with so many things we talk and write about, this wasn’t a failure of individuals doing their jobs badly. A system allowed risk to sit outside its own awareness and relied on communication without ensuring shared understanding. It continued to operate as complexity increased, and didn’t act on signals it had already seen.


None of that is unusual and that’s the far bigger problem.


The most dangerous systems are not the ones that are obviously broken. They’re the ones that look like they’re working. Right up until the moment they don’t.


If you’re a leader…

The useful questions aren’t about this event. They’re about your system.


Where are you relying on information you can’t fully see?

Where does communication get mistaken for understanding?

Where is complexity increasing faster than control?

What risks are already known, but not acted on?


Remember, when the system can’t see the risk…

…it can’t manage it.



-----------------------------------------------------------------------------------------------------------------------------------------------------

On Target Co-Founders: Mike Mason and Sam Gladman

Mike Mason and Sam Gladman are the co-founders of On Target, a leadership and team development company that brings elite fighter pilot expertise into the corporate world. With decades of combined experience in high-performance aviation, they specialise in translating critical skills such as communication, decision-making, and teamwork into practical tools for business. Through immersive training and cutting-edge simulation, Mike and Sam help teams build trust, improve performance, and thrive under pressure—just like the best flight crews in the world.


If you'd like to learn more about how On Target can help your team, contact Mike and Sam at info@ontargetteaming.com.
