An FSI Commentary
Editor's Note: While the Lion Air 610 investigation continues, it is important to remember that hindsight bias is no substitute for understanding potential combination system failure events. We welcome reader responses to Captain Malmquist's commentary.
On October 29, 2018, Lion Air Flight 610, a Boeing 737 Max 8, crashed shortly after takeoff from Soekarno-Hatta International Airport in Jakarta, en route to Depati Amir Airport. The aircraft was brand new, having been in service for only two months.
The day prior to the accident flight, maintenance had been performed due to an airspeed and altitude indication problem and an illuminated elevator feel differential pressure light. There were also reported problems with the angle of attack sensing system. In signing the items off, terms familiar to most pilots were used, such as "test on ground ok" and "test on ground satisfied."
In the immediate aftermath of the accident, Boeing issued a bulletin, quickly adopted by the FAA as an airworthiness directive, instructing crews to follow the Runaway Stabilizer non-normal checklist. The bulletin stated that, in the event of erroneous angle of attack data, the stabilizer may trim nose down in increments lasting up to ten seconds.
The runaway trim exercise is commonly taught in simulators. In the Boeing, it is a simple matter to counter the trim movement with control column input and then shut off the stabilizer trim switches, removing electrical power from the system. In these simulator events, just one system, the stabilizer trim, has gone wrong: the motor has gotten "stuck" in the "on" position and is driving the stabilizer continuously, so the problem is obvious. Just remove the power. The Boeing bulletin, however, points to something a bit different. Here we can have a faulty sensor input that misleads a computer system into doing the wrong thing. It adds trim in increments, so there is not the continuous motion one would expect from a faulty trim system. This was not something that could affect earlier models of the B-737; the feature that adds nose-down trim appears to be new to the Max 8 version. It also now appears, based on recent reports, that few pilots flying the aircraft were aware that it had an angle of attack protection feature built into its trim system.
We do not yet know what other systems might be implicated here. Many systems are connected to the angle of attack sensor, including flight instruments and warning systems. If, as reported, the pilots were not aware of the system's functionality, it would not be surprising to find that they reacted incorrectly to it.
We will learn more as the investigation progresses. In the meantime, there have already been multiple articles and comments indicating that the issue here was pilot competency. I would like to remind all pilots that these types of accidents are rarely due to lack of pilot skill. It is all too easy, as pilots, to think “I would not have done that.” The response seems obvious with the clarity of hindsight bias. Part of this arrogance stems from the way we train.
In the simulator, the events we see are fairly simplistic. They rarely involve one system failing in a subtle way and cascading into multiple other systems. In 1974, a Northwest Airlines Boeing 727 crashed after stalling when its pitot tubes became blocked by ice. Many wondered how an experienced flight crew could fail to recognize that they were stalling. After all, it was well known that one outcome of iced pitot tubes is that the indicated airspeed increases during a climb, the pitot system behaving much like an altimeter. The rising indicated speeds led the crew to pull the nose up to try to prevent an overspeed. Shouldn't the stall have been obvious? As outlined in the NTSB report, the Board suspected that the elevator feel pitot tubes were also blocked. That creates a situation where the airspeed indication is increasing while the control forces act as if the airplane is flying faster as well, all mixed with overspeed warnings and stall warnings. A very confusing situation.
We do not yet know all that the Lion Air crew had to contend with. We do know that these combination failure events can lead to a situation that is very difficult to sort out. We also know it is hard for a pilot to handle a situation where the airplane is doing something outside of what they know to be possible. Instead of making this another "I would not have done that" conversation, use it to start delving into your aircraft systems. Knowledge is power. In this case, knowledge can keep you alive.
Captain Shem Malmquist is a veteran 777 captain and accident investigator. He is coauthor of Angle of Attack: Air France 447 and the Future of Aviation Safety and teaches an online high-altitude flying course with Beyond Risk Management and Flight Safety Information.
I totally agree with Shem: when you get outside known territory, deep knowledge, especially of your machine, can be the key to survival. The concept of "need to know" is a trap. A simpler model of the world may be cheaper, but it is neither faster nor better. You do not know what will turn out to be "necessary." And you will not necessarily use this extra knowledge directly: as a catalyst, in interaction with experience, it builds a meta-knowledge, a kind of operational wisdom that allows you to say: no, that does not exist.