By Captain Shem Malmquist
AN FSI COMMENTARY
We make a number of assumptions about automation: the good, the bad, and the problems. I believe it is time to put some of these to rest if we are to actually prevent future accidents. It is time for a new paradigm in how we think about automation and the types of problems that result from it. In this article I will challenge a number of assumptions that have been stated so often they are now accepted as fact. Software design has absolutely led to accidents, just perhaps not in the way most people think. Many (most?) of these designed-in risk factors have been missed entirely, even after accidents. This article will highlight one such risk factor and offer a solution to it.
While the October 29, 2018, Lion Air 610 accident investigation runs its course, the release of the FAA Emergency Airworthiness Directive (AD) immediately following the crash has opened up valuable new discussions on the role of automation. The industry is being forced to recognize that when modern airplanes crash, the problem is not necessarily a lack of airmanship, commonly referred to as “automation dependency,” but rather the opaqueness of the actions and logic of the automation itself. Perhaps it is time to revisit commonly held assumptions such as “automation dependency,” which essentially blames the pilot and implies pilots are complacent, and instead look at the assumptions underlying the automatic systems themselves.
The AD issued in the aftermath of the Lion Air 610 accident highlighted how the loss of a sensor for an advanced system can create very difficult scenarios. Consider the 2008 case of Qantas 72 (an Airbus A330). Here a faulty angle of attack (AoA) sensor led to the aircraft envelope protection (limit) feature attempting to prevent what the computer’s process model saw as a stall condition by rapidly lowering the aircraft pitch. Similarly, on November 5th, 2014, a Lufthansa A321 experienced a wild ride following a physical problem with the AoA probes. In another event, on August 1st, 2005, a Boeing 777 experienced extreme pitch gyrations as a result of an erroneous angle of attack sensor, as reported by the Australian Transport Safety Bureau (ATSB)[i]. None of these were related to pilot competency in hand-flying. In fact, all three would have been much worse if pilots had not been on board to save the day.
The focus on “stick and rudder” skills and the worry about automation dependency have been repeated so often that we accept them at face value. We emphasize the need to hand-fly more. Now, don’t get me wrong. I love to hand-fly, and will often hand-fly the airplane below RVSM (reduced vertical separation minimum) airspace if the workload permits. The regulations generally limit my ability to hand-fly in RVSM airspace (flight level 290 and above). As I don’t want to overload my first officer, I will couple it up when it’s busy, which is generally IMC, or when operating in complex environments (ATC procedures, metric altimetry, etc.). However, as much as I enjoy hand-flying, is it really helping me to handle things when they go wrong? I am not so sure.
First of all, I am flying the airplane in a normal state. The B-777, like other fly-by-wire (FBW) airplanes, has very consistent handling qualities. The pilot does not have to adjust for differences due to changes in CG, gross weight, flap settings, density altitude, q-factor, and a multitude of other factors that affect the way an airplane responds. FBW takes care of all that. It makes the airplane really easy to fly – as long as it’s working. Problems such as an erroneous AoA signal can unexpectedly put the airplane into a degraded state. The handling qualities are going to be different, and, depending on the mode, the system may no longer be compensating for all those differences previously discussed. The pilot will have to do it, but is the pilot equipped to handle that, PLUS now having to hand-fly in a “complex environment”? What about those newer pilots who have little or no experience hand-flying at the higher altitudes?
The issue here is that arguing about pilots lacking the skills to handle the aircraft when the automation fails misses the point. Accidents are not occurring due to a lack of pilot skill, and certainly not at any greater rate than they ever have. At the same time, we have an argument rooted in a similar set of misconceptions, but this time from some pilots, who argue that they need to be able to exceed airplane limits to “save the day.” This debate about envelope limiting vs. envelope protection is mostly an “Airbus vs. Boeing” debate. Both sides are wrong.
As most pilots know, Airbus FBW airplanes utilize “envelope limiting” while Boeing FBW airplanes utilize “envelope protection.” Many anti-Airbus pilots will argue that they want to be able to exceed a limit in an emergency. I am not going to enter that debate directly, except to point out that in 30 years of FBW Airbus operations with hard limits, I know of no accident that could have been avoided had the pilots been allowed to exceed those limits. There are, however, several known cases where the hard limits prevented an accident.
Some will point to events such as the June 26, 1988, Air France A320 accident at Habsheim. A careful analysis of that event shows that allowing the pilots to pull the pitch right into a stall (the airplane was on the edge of a stall, with the limit preventing it from going further) would only have produced a very momentary “bump” in altitude, followed rapidly by a steep sink, at a higher pitch attitude and sink rate, into the trees. Not a great outcome, and it certainly would not have prevented the crash that actually took place. More thrust was what was needed here. The story is similar in other events.
Of course, on the other side, there have been problems as a result of envelope limiting: the Qantas 72 event previously mentioned is one example, as is the Lufthansa A321, and there have been others. So the problem with “hard limits” on flight controls is not so much that they prevent pilots from exceeding them, but that the system can take an action, due to erroneous data or a missed assumption, that the pilot cannot override without taking unusual steps. However, going back to the “hard limit” debate, we know that some pilots have been quick to want to exceed g-load, bank, pitch, or angle of attack limits even though there is no evidence to support the need to do so. It is interesting to consider, then, that as far as I know, nobody has ever expressed concern about the digital electronic controls we use on all modern jet engines. Here is another example of conventional wisdom missing a larger potential issue.
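The core hazard described above can be pictured in a few lines of code. The following is a deliberately toy sketch, not any real aircraft's logic: the threshold value, function names, and probe readings are all invented for illustration. It shows how a protection keyed to a single erroneous sensor acts on bad data, and how simple median voting across redundant probes rejects a lone outlier (real systems use far more elaborate monitoring, and events like Qantas 72 show that filtering can still be defeated).

```python
from statistics import median

STALL_AOA_DEG = 15.0  # illustrative threshold, not any real aircraft's value

def protection_fires(aoa_deg: float) -> bool:
    """Toy envelope-protection trigger: fires (commands nose-down) above the threshold."""
    return aoa_deg > STALL_AOA_DEG

# One faulty probe reads 50 degrees while the aircraft is near a normal ~3 degrees.
probes = [3.1, 50.0, 2.9]

# Acting on the single bad value fires the protection spuriously:
print(protection_fires(probes[1]))       # -> True (erroneous activation)

# Median voting across the redundant probes rejects the single outlier:
print(protection_fires(median(probes)))  # -> False
```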
Whether we refer to it as FADEC (Full Authority Digital Engine Control), EEC (Electronic Engine Control) or any other name, these systems limit the engines to maximum rated thrust. Apply firewall power (throttle against the stops) and the system will automatically limit it to maximum rated thrust, with some small exceptions. In the older engines with mechanical fuel control units we had to watch the throttle advancement to ensure we did not exceed any limitation, but it was also possible, in most circumstances, to shove the throttles forward and obtain 15% more thrust than the engine was rated for, or even more. Sure, that meant the engines might need to be inspected, or even trashed, but that thrust was available. Given the choice between hitting the ground or burning up the engines, I think all pilots would take the latter! Why has this issue not been raised?
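The limiting behavior described above amounts to a clamp. Here is a minimal, hypothetical sketch (the 1:1 lever-to-thrust mapping and the percentage figures are illustrative assumptions, not any actual FADEC implementation): the lever position maps to a thrust demand, and the controller caps the command at rated thrust no matter how far the levers are pushed.

```python
def fadec_thrust_command(throttle_pct: float, rated_max_pct: float = 100.0) -> float:
    """Hypothetical, simplified FADEC-style limiter.

    Maps throttle lever position (values past 100 represent 'against the
    stops') to commanded thrust, clamped at the engine's rated maximum.
    """
    demanded = throttle_pct  # simplistic 1:1 lever-to-thrust map, for illustration only
    return min(demanded, rated_max_pct)

# Firewalling the throttles past the stops still yields only rated thrust:
print(fadec_thrust_command(115.0))  # -> 100.0
```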
How many accidents could have been prevented had the engine’s controller allowed the pilot to exceed the limitation? I must add a caveat that many factors are in play here, including spool-up time. If the engines were not able to reach the maximum rated thrust in the time prior to the accident, they would not have been able to reach a higher thrust level either. With that said, examples worth investigating include:
Habsheim. While the issue was not the pitch limit, as I discussed, 15% more thrust might well have saved the day.
Asiana at San Francisco (2013). Adding 15% more thrust there may have been just enough to miss that 13-foot seawall. That’s right, just 13 feet, and in fact they probably needed less than half that.
American Airlines going into Cali (1995). The report stated that retracting the speed brakes would likely have prevented the accident. Would more thrust have been available at that altitude? Was there adequate spool-up time?
You may be able to think of many more examples that are better than these, but it is possible that quite a few accidents were the result of a design decision to create software that was more focused on extending engine life than on saving an airplane in an extreme situation. To reiterate, I have not done any performance analysis on any of these. Those who worked performance for these accidents should have the data on a spreadsheet, and it would not be hard to calculate. It might turn out that these three accidents would still have occurred regardless of the availability of extra thrust. Focusing on that would miss the point. Rather, the point is that there clearly are times when extra thrust would be a good thing, even at the cost of an engine. Examples are EGPWS escape, windshear escape, late recognition of impending CFIT, and many more cases.
How might we design this? I would suggest looking at the MD-11, with its “FADEC bar”. It is a mechanical stop that, with an intentional, extra-forceful push, allows the throttles to move a bit further, so that a higher “throttle resolver angle” is fed into the electronic controller. The engineers were thinking correctly when they designed it. Unfortunately, the most it can do is revert the FADEC to an “alternate” mode, which essentially means that it is no longer relying on actual temperature and pressure, but on a “default” setting.
Pushing through the FADEC bar will never result in a decrease in thrust, but could potentially increase thrust by as much as 10%. The key word here is “could”, because depending on the actual conditions, thrust may already be as high as it will get. I am proposing a system like the “FADEC bar” that allows us to truly increase thrust, beyond the engine design limits. An extra 15% or more, perhaps much more. I want the ability to intentionally (and only with conscious action) push the engines well above the design limits, risking catastrophic damage – as not doing so will likely destroy the engines along with the rest of the airplane anyway. What about the engine acceleration (spool-up) profile? Could that also be modified to allow for more rapid acceleration under dire circumstances? While my inclination is that the spool-up time is probably a physical limitation, it does not hurt to ask the question.
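The proposal above can be sketched as a guarded override on the thrust clamp. This is a hypothetical design illustration, not a real system: the parameter names and the 115% emergency ceiling are assumptions chosen to match the figures discussed in the article. Normal operation clamps at rated thrust; only a deliberate, separate pilot action raises the ceiling, accepting possible engine damage.

```python
def thrust_with_override(throttle_pct: float,
                         override_engaged: bool,
                         rated_max_pct: float = 100.0,
                         emergency_max_pct: float = 115.0) -> float:
    """Hypothetical 'FADEC bar'-style emergency thrust override.

    The ceiling stays at rated thrust unless the pilot has taken a
    conscious, distinct action (override_engaged), which raises it
    to an emergency limit beyond the engine design rating.
    """
    ceiling = emergency_max_pct if override_engaged else rated_max_pct
    return min(throttle_pct, ceiling)

# The same firewall input, two outcomes depending on the deliberate override:
print(thrust_with_override(120.0, override_engaged=False))  # -> 100.0
print(thrust_with_override(120.0, override_engaged=True))   # -> 115.0
```

Keeping the override behind a distinct physical action (a breakout force, a guarded switch) preserves the everyday benefit of the limiter while returning final authority to the pilot in extremis.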
There has never been a better time to start thinking about improvements to design that would afford pilots more control when they actually need it. Operating right up to the limit on angle of attack and stopping it there is an excellent use of automation where a human is just not going to be able to gain more performance, but adding in a limit that only protects the design limits is a different story. I’d also like to hear from pilots that do not like “hard limits.” Are you satisfied with engine systems that limit you artificially?
This article was originally published in the Curt Lewis Flight Safety Information newsletter. http://curt-lewis.com/category/newsletter/. I highly recommend subscribing to it.
[i] ATSB report number 200503722