Certification and limits

My previous article discussing limits on electronic engine controls elicited a number of very interesting responses that went in several different directions.  I thought I might share some of these with you, along with some additional thoughts.  Unless specifically authorized, I will not include the names or affiliations of any of the commentators.

The general concept I proposed was to eliminate the artificial limitations we have imposed on pilots’ ability to operate their aircraft.  In that article, I highlighted the engines, which are limited to rated thrust in most circumstances.  This is not just true for jet engines; I have received several reports from pilots with stories of reciprocating engines being limited, as well as turboprops.  Limiting inadvertent exceedances is a very good idea, as it protects engine life.  What is being proposed is a way to exceed those limits, in a controlled manner, in a dire emergency.  Almost all pilots liked the concept, although I must say the following from Graham Hamilton (HKG) was probably the best response!

Yes, spot on! As a man who once went very close to ‘cooking the engine’ in avoiding a baboon on take-off in Tanzania, there is definitely a point where your knuckles need to go through the panel and pour as much fuel on the fire as you can – this might shorten or even ultimately destroy an engine, but it might be saving the aircraft in doing so.

…anecdotally: ‘Engines can be replaced, that’s what insurance is for – no amount of insurance can replace a human being.’

One comment concerned the issue of pushing up the power with an engine inoperative and perhaps running into the limits of controllability.  There are two thoughts on this.  First, in future designs (and actually already done on some current production aircraft), the engine control could be integrated with the flight control system to limit thrust as required to maintain control.  Second, for most aircraft it would fall to a training issue.  One pilot suggested a three-tiered approach, where thrust would first just slightly exceed normal values, but if there really was no other option, there would be a way to increase thrust even more.  A system as simple as a breakable safety wire could be employed here.  Historically there have been such mechanisms for military aircraft, and many corporate aircraft have a setup similar to the MD-11, allowing for some measured increase depending on circumstances.
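
To make the three-tiered idea above a bit more concrete, here is a minimal sketch, in Python, of how a control law might select a thrust ceiling.  The tier names, the percentages and the “guard broken” flag are hypothetical illustrations of the concept, not any actual or certified engine-control design.

# Hypothetical sketch of a tiered thrust-limit selector. The tier values,
# names and guard logic are illustrative only, not a certified design.
from dataclasses import dataclass

@dataclass
class ThrustTiers:
    rated: float        # normal certified limit (1.00 = rated thrust)
    contingency: float  # slight exceedance, e.g. 1.05 x rated
    emergency: float    # "break the safety wire" tier, short of certain failure

def select_thrust_ceiling(tiers: ThrustTiers,
                          pilot_commands_exceedance: bool,
                          emergency_guard_broken: bool) -> float:
    """Return the thrust ceiling the engine control software will honor."""
    if not pilot_commands_exceedance:
        return tiers.rated          # day-to-day operation: rated thrust only
    if not emergency_guard_broken:
        return tiers.contingency    # tier two: modest, deliberate exceedance
    return tiers.emergency          # tier three: dire emergency only

# Example with made-up numbers: rated, +5% contingency, +12% emergency ceiling
tiers = ThrustTiers(rated=1.00, contingency=1.05, emergency=1.12)
print(select_thrust_ceiling(tiers, True, False))   # -> 1.05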

It is my opinion that software should still limit thrust at the point where immediate failure is a virtual certainty.  Software can also be used to vary that point as engines age.  Several pilots commented on envelope limiting.  This is a bit more complex and potentially devolves into the “limiting” vs. “protection” debate.  That said, my approach would be similar.  We know the airframe should be good for 150% of the limit g-values, so we might design the flight control system to allow the nominal values to be exceeded, up to the ultimate load limits, when intentionally desired by the pilot (the algorithm could even include factors to reduce the available ultimate load for the age of the aircraft and similar considerations).  The same goes for pitch or roll.  This is far different from aerodynamic limits such as stall, where the software can do a better job of putting us right on that thin performance line and holding us there.  Real physical limitations are a good use for software protections, as opposed to regulatory ones.  So, in some respects the “hard limits” are better, but perhaps not in others.  If it were up to me, I would borrow a bit from both philosophies.  We still need a way to give the pilot quick control if the system is incorrectly sensing an exceedance, however.  The pilots need to be part of the “voting”; specifically, both pilots should be, together.  The reader is reminded that the second pilot is not onboard to be a “backup” for the other pilot, but rather because of the multiplying effect of a shared mental model.  Two people working together are not just twice as good, but many times better.
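
To put the g-limit idea above into rough, concrete terms (and this is only an illustration; the numbers and the derating factor are hypothetical, not any real flight control law), the logic might look something like this in Python:

# Illustrative only: an envelope-limiting law that lets a deliberate pilot
# command exceed the nominal g limit, but never the (age-derated) ultimate load.
def allowed_g_ceiling(limit_load_g: float,
                      pilot_commands_exceedance: bool,
                      age_derate: float = 1.0) -> float:
    """Return the positive-g ceiling the flight control law enforces.

    limit_load_g: certified limit load factor (e.g. 2.5 g for a transport).
    age_derate:   factor <= 1.0 reducing available margin as the airframe ages.
    """
    ultimate_load_g = 1.5 * limit_load_g        # structure good to ~150% of limit load
    if not pilot_commands_exceedance:
        return limit_load_g                      # normal protection boundary
    return min(ultimate_load_g * age_derate, ultimate_load_g)

print(allowed_g_ceiling(2.5, False))                    # 2.5 g, normal operation
print(allowed_g_ceiling(2.5, True, age_derate=0.95))    # about 3.56 g, deliberate exceedance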

Exceeding normal limits, whether due to potentially bad data or a perceived need to “save the airplane”, should be a very deliberate action, and specifically a dual-pilot action.  The pilot flying should have to confirm with the other pilot, who then activates the ability to exceed a value.  Yes, there might be some corner case where a pilot is incapacitated (or happens not to be available at that moment), but the risk of loss of control in general is certainly higher and, while it should be analyzed, it is my belief that leaving the ability to exceed normal values to a single person would add more risk than it reduces.  While procedurally this is something that could be (and often is) done in any event through pilot announcements of intentions and pilot-monitoring confirmation, there are too many cases where airplanes have exceeded limits unintentionally while both pilots were attending to something else.  A designed-in limiting feature is clearly supported by the data.  Just having heavier control forces to exceed a value does not seem to be enough when it comes to pitch and roll; we still see unintentional excursions beyond those values.  Adrenalin, perhaps?  There may be many reasons, but that is not the point – the fact remains that it is happening.
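
For what it is worth, that confirmation step could be as simple as a software gate.  The sketch below, in Python, is purely hypothetical (the class, the time window and the call names are mine, not any actual avionics interface), but it shows the intent: the pilot flying requests the exceedance, and nothing is armed until the other pilot confirms within a short window.

# Hypothetical two-pilot "arming" gate for exceeding a protected value.
import time

class ExceedanceGate:
    def __init__(self, confirm_window_s: float = 15.0):
        self.confirm_window_s = confirm_window_s   # time allowed for the second pilot to confirm
        self._request_time = None
        self.armed = False

    def request(self) -> None:
        """Pilot flying announces and requests the exceedance."""
        self._request_time = time.monotonic()

    def confirm(self) -> bool:
        """Pilot monitoring confirms; the gate arms only within the window."""
        if self._request_time is not None and \
           time.monotonic() - self._request_time <= self.confirm_window_s:
            self.armed = True
        return self.armed

gate = ExceedanceGate()
gate.request()   # PF: "Overriding the limit"
gate.confirm()   # PM: "Confirmed" -> gate.armed is now True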

This brings me to the final aspect of the comments: the use of automation and the concept of automation dependency.  It is my experience that we do not have a problem with pilot skills.  Focusing on that will likely not change the accident rates.  As I said in my previous article, it is my belief that pilot skill is no more of a factor in accidents now than it has ever been.  Rather, we have a different problem, or perhaps a set of problems.  One is that we have a situation of “too many cooks”, but it is worse than that: as one pilot said, it is not just “too many cooks”, but that the pilot is often not even aware that there are other cooks!

There is more.  In ASIAS data we see events that are not related to the pilot’s skill or background.  Experienced pilots with strong military tactical and civilian backgrounds still find themselves in situations that conventional wisdom would say “only happen to inexperienced or weak aviators”.  No, there is something else going on here, something that has, so far, defied conventional wisdom.  Part of it is related to automation surprise, with the automation doing something different from what was expected.  However, in many incidents the automation is doing exactly what it is designed to do.  It appears that the pilots became distracted by the task of trying to get the automation to do what they wanted the system to do.

This problem will need a new approach, one that has not yet been identified.  The conventional wisdom is falling short.  I cannot now locate the citation, but the situation reminds me of reading Dr. Richard Feynman’s explanation of why he believed most Nobel prizes were awarded to people who had done their seminal work and discoveries in their 20s.  As I recall, Feynman believed that they had not yet become “set in their ways”, so to speak.  They did not just accept things as true, but were willing to challenge assumptions.  We need that now.  Perhaps we should turn to those researchers in their 20s who have not become set in their ways of thinking?

Finally, as reported in the Wall Street Journal and other publications concerning the Lion Air accident and MCAS, there is quite a bit of difference in training manuals and guidance between operators and between countries: “At least one European 737 MAX operator said it was briefed on the MCAS system before delivery of the plane. In regulatory documents, Brazilian authorities identified MCAS as a system that differs on the 737 MAX from earlier versions.”[i]   It is interesting that the Brazilian authorities took a different stance.  While I have no way of knowing how that came about, it occurs to me that, in general, it would be beneficial for each country to take a more critical view of aircraft design features.  Too many states defer to the FAA and EASA and essentially just rubber-stamp their findings.  As good as the FAA and EASA are, the diversity of perspectives gained from higher scrutiny by agencies around the world, each bringing its own cultural perspective to the problem, might just capture things that would otherwise be missed.  Now that I have gotten all the OEMs mad at me, I should add that the benefit of this would be reduced liability.  A bit more pain on the front end, but the pay-off is potentially huge.

Prevention is the vaccine – reaction is the antibiotic.  We need both.  This applies to all aspects of this problem.

 

[i] “After Lion Air Crash, Regulators Examine Differences in Training Manuals; U.S. airlines’ manuals for new 737 models offer inconsistent or confusing details of a second automated antistall feature, critics say,” Wall Street Journal, 2018
