What a problem solver can learn from the unfolding 737 Max story


Apr 4, 2019

Over the past five months, two Boeing 737 Max airliners have crashed, killing 346 people. Although the story is still unfolding and the investigation is underway, preliminary evidence suggests that design flaws caused the crashes. Here’s a recap of what we know as of April 4, 2019, and reflections on what these crashes can teach us about solving complex problems.



What we know about the crashes:


  • Boeing fitted the Max, the latest version of its 737 airliner, with bigger engines to increase fuel efficiency. The bigger engines changed the plane’s balance. To compensate, the MCAS (Maneuvering Characteristics Augmentation System) flight-control system automatically pushes the nose of the plane down when it senses that a stall is imminent. (A simplified sketch of this kind of trigger rule follows this list.)

  • Two 737 Max airliners have crashed within five months of each other, and the MCAS is under scrutiny as a possible cause of both crashes.

  • The MCAS receives its angle-of-attack data from a single sensor. That sensor is known to have fed it false information in at least one of the two crashes (Lion Air).
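

To make the recap concrete, here is a deliberately simplified sketch, in Python, of the kind of trigger rule described above. The threshold, the function name, and the returned commands are illustrative assumptions, not Boeing’s actual MCAS logic.

    # Hypothetical, simplified stall-protection trigger. The threshold
    # and the returned commands are assumptions for illustration only.
    STALL_AOA_THRESHOLD_DEG = 14.0

    def mcas_like_command(angle_of_attack_deg: float) -> str:
        """Command nose-down trim when the sensed angle of attack
        suggests an imminent stall; otherwise do nothing."""
        if angle_of_attack_deg > STALL_AOA_THRESHOLD_DEG:
            return "nose-down trim"
        return "no action"

    print(mcas_like_command(5.0))   # normal flight -> "no action"
    print(mcas_like_command(40.0))  # faulty high reading -> "nose-down trim"

Note how a single sensor stuck at a high reading is enough to make this rule push the nose down in normal flight; that vulnerability comes up again below.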



What we can learn from the crashes:


First, I want to acknowledge the tragedy of these two accidents and their effect on hundreds of families. My thoughts go to them as they cope with irreparable losses.


Also, I want to make it clear that I do not blame Boeing or any other company or person for the crashes. In this article, I simply react to an incomplete body of evidence and present preliminary observations, which I will update as new evidence surfaces.


From a problem-solving standpoint, the Boeing 737 Max accidents offer valuable lessons:



Fix the cause, not the symptom. Reports indicate that the 737 Max is aerodynamically unstable because its new, bigger engines have changed its balance. To address the issue, Boeing adjusted the airplane’s software to correct impending stalls quickly. In doing so, however, it addressed the issue’s symptom, not its cause. Addressing symptoms rather than causes can be an effective problem-solving approach, but only if you have a foolproof way to treat the symptom. The lesson: For high-stakes problems, either stay far away from danger or ensure that your last line of defense cannot fail.


Being MECE isn’t always desirable. “MECE” is an acronym for “mutually exclusive and collectively exhaustive.” MECE thinking structures how we look at a complex system by ensuring that we account for everything exactly once: no overlaps and no gaps. In many settings, MECE thinking is desirable and rightfully celebrated, but it has limitations, and the 737 Max accidents illustrate one of them: the MCAS’s reliance on a single sensor, as opposed to redundant ones, may have contributed to the fatal crashes. Sometimes redundancy is desirable: with lives at stake, we want the key components of the plane’s systems to overlap rather than be mutually exclusive. The lesson: Deliberately assess whether a system should be MECE; when it shouldn’t be, make it deliberately not MECE. A minimal sketch of sensor redundancy follows.
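

To illustrate the point about redundancy, here is a minimal sketch of median voting across three angle-of-attack sensors. The function name, thresholds, and fault-handling policy are my assumptions for illustration, not a real avionics design.

    from statistics import median

    def fused_angle_of_attack(readings_deg, max_spread_deg=10.0):
        """Fuse redundant angle-of-attack readings by median voting.

        With three or more sensors, the median masks a single faulty
        reading, which a single-sensor design cannot do. If the sensors
        disagree too much, declare a fault instead of acting on bad data.
        """
        if len(readings_deg) < 3:
            raise ValueError("need >= 3 sensors to out-vote a single fault")
        if max(readings_deg) - min(readings_deg) > max_spread_deg:
            raise RuntimeError("sensor disagreement: flag a fault, take no action")
        return median(readings_deg)

    print(fused_angle_of_attack([4.8, 5.1, 5.0]))  # healthy sensors -> 5.0
    # One stuck sensor, e.g. [4.8, 5.1, 40.0], now raises a fault instead
    # of silently commanding nose-down trim.

With a single sensor, the stuck reading feeds straight into the control logic; with three, it is out-voted or flagged.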


Get someone to check your biases. Pilots received little to no training on the new system. It appears that an erroneous MCAS activation demands swift corrective action before it puts the plane into an unrecoverable dive; by some accounts, the pilots had only 40 seconds to see the issue and respond. It’s surprising that a third party or regulator didn’t oppose Boeing’s request to keep training minimal. It is notoriously difficult to identify one’s own blind spots. The lesson: Engage an independent third party to check your blind spots, give them the latitude to speak freely, and take their criticism as an opportunity to improve.


Update your thinking. The investigation isn’t over. Preliminary evidence points to the MCAS as the root cause of the accidents, but we cannot and should not conclude definitively that this is the case at this time. Nor should we conclude that Boeing’s behavior caused the crashes until the investigation closes. The lesson: Don’t jump to conclusions, and don’t fail to revise your thinking as new evidence surfaces. A Bayesian approach, in which you update your belief in each hypothesis as evidence accumulates, can help; a worked example follows.
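

As a concrete, entirely hypothetical illustration of that Bayesian approach, the sketch below starts from a prior belief that a design flaw is the root cause and revises it as two pieces of evidence arrive. The probabilities are invented for illustration; they are not estimates about the actual investigation.

    def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
        """Return P(H | E) given P(H), P(E | H), and P(E | not H)."""
        numerator = p_evidence_given_h * prior
        marginal = numerator + p_evidence_given_not_h * (1.0 - prior)
        return numerator / marginal

    # Invented numbers: a 30% prior that a design flaw is the root cause,
    # updated with two pieces of evidence, each given as
    # (P(evidence | flaw), P(evidence | no flaw)).
    belief = 0.30
    for p_e_h, p_e_not_h in [(0.9, 0.3), (0.8, 0.4)]:
        belief = bayes_update(belief, p_e_h, p_e_not_h)
        print(f"updated belief: {belief:.2f}")  # 0.56, then 0.72

The point is not the particular numbers but the discipline: each new piece of evidence moves the belief incrementally rather than flipping it to certainty.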

Understanding what caused the crashes will not repair the tragic losses that many have suffered. However, I hope that extracting lessons from these crashes can help us think and behave in ways that systematically make us better at avoiding similar circumstances.