A New Approach: Thoughts on Automation


There is not a new transport airplane built today that does not rely heavily on computerized systems. Automation has been the "name of the game" for several decades now, with each new generation adding more of it. Pilots are now accustomed to operating aircraft that contain a great deal of automation. This has led to well-known phrases such as "the children of the magenta line," referring to pilots who focus on simply following the automation, and to more technical terms such as "automation dependency" and similar concepts. We all know what these terms mean, and we see the problem all the time, but is it really what we think it is? Is the problem really a lack of basic stick-and-rudder skills, or is something else afoot? Certainly, nobody would argue that stick-and-rudder skills do not weaken as automation is used, but is that really what leads "automation dependent" pilots into loss-of-control events?

The first aspect I will discuss is what I see actually occurring. I do not see pilots who are unable to fly the airplane. What I do see is pilots who keep messing with the automation in an attempt to "fix it" so it does what they think they want. In that process they get distracted, lose focus, and end up in unexpected situations. The key here is strict discipline. The pilot flying needs to focus on flying the airplane. The pilot monitoring needs to ensure the pilot flying is doing what they are supposed to do. My suggestion would be to teach both pilots to focus only on the aircraft's path during any dynamic situation. Any change to configuration, turns, initial climbs and descents, altitude capture, and the like should involve both pilots focusing on the flight instruments. Further, if something is not going as expected, immediately degrade the automation to a point where both pilots know with certainty what it will do next. That may mean turning off all of the automation.

That brings us to the second issue, which, although not a new one, was really highlighted by the Boeing MAX accidents. Pilots need to have a fundamental idea of what computers are actually doing, and what they are not doing. We tend to think of the computers integrated into our aircraft as just another hardware component: either it does its job or it has failed. Computers are different, but nobody is talking about it.

It might seem helpful to anthropomorphize the computer slightly to get a better sense of the situation, but that quickly becomes convoluted. Computers are quite simple "creatures," though. They are hooked up to sensors to "sense" the factors that the people who designed them deemed important. If the designer was a bit short of imagination then the computer might be missing vital information in some scenario that was beyond what the designer imagined. Assuming the data is all being collected as designed, it goes into the computer, which uses a "process model" to decide what actions to take. The programmer has attached the output of the computer to the various things the computer is, literally, controlling. Unlike a living organism, a computer is totally unable to deviate from its programming. It cannot come up with a new or novel solution; it simply follows its instructions: "I know xyz, and based on the values of xyz I perform abc," and that is all. There is no nuance here.
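To make that concrete, here is a deliberately minimal sketch in Python of what a process model amounts to. Everything in it, the sensor names, the 12-degree limit, the trivial rule, is an invented assumption for illustration, not any real avionics logic; the point is only the shape: read the inputs the designer chose, apply the rules the programmer wrote, drive the outputs.

```python
# Toy illustration only: the sensor names, the 12-degree limit, and the
# outputs are invented for this example, not real avionics logic.

def read_sensors():
    # The computer only "sees" what the designer chose to wire in.
    return {"airspeed_kts": 250.0, "aoa_deg": 4.2}

def process_model(inputs, pilot_pitch_cmd_deg):
    # The fixed rules: "I know xyz; based on the values of xyz I perform abc."
    if inputs["aoa_deg"] > 12.0:                    # designer-chosen limit
        return {"pitch_cmd_deg": -2.0}              # rule: command nose down
    return {"pitch_cmd_deg": pilot_pitch_cmd_deg}   # otherwise pass through

def send_to_actuators(outputs):
    print(f"elevator command: {outputs['pitch_cmd_deg']} deg")

def control_loop(pilot_pitch_cmd_deg):
    # Sense, decide, act. The computer cannot step outside these rules.
    send_to_actuators(process_model(read_sensors(), pilot_pitch_cmd_deg))

control_loop(1.5)   # prints "elevator command: 1.5 deg"
```

Nothing outside those rules can ever happen; if a scenario was not written into the rules, the computer has no opinion about it at all.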

So what happens if the data coming in is flawed? What does the computer do? Exactly what it was programmed to do, no more and no less. If the designer anticipated the data problem, the computer should do something rational; for instance, it might stop all further actions and alert the pilot that it cannot do its job. If the designer did not anticipate it, however, there is no way for the pilot, in real time, to be certain of what it might do. Sure, anyone with knowledge of the software could look at the code and the data and tell you what it would do in that scenario, but that is of little help when a new scenario is discovered mid-flight.
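Extending that toy sketch, and again with invented names and values, the difference between an anticipated and an unanticipated data problem is simply whether the designer wrote a check for it. A value the designer never imagined, plausible-looking but wrong, sails straight through the same rules.

```python
# Toy illustration only: the validity check and the fault values are assumptions.

def process_model(inputs, pilot_pitch_cmd_deg):
    aoa = inputs["aoa_deg"]

    # Anticipated fault: the designer wrote an explicit check, so the computer
    # can do something rational, i.e. stand down and tell the pilot.
    if aoa is None:                       # sensor flagged itself as failed
        return {"pitch_cmd_deg": pilot_pitch_cmd_deg, "alert": "AOA FAIL"}

    # Unanticipated fault: a frozen vane reading 25 degrees passes every check
    # the designer thought of, and the rule fires as if the value were true.
    if aoa > 12.0:
        return {"pitch_cmd_deg": -2.0, "alert": None}

    return {"pitch_cmd_deg": pilot_pitch_cmd_deg, "alert": None}

print(process_model({"aoa_deg": None}, 1.5))   # anticipated: alerts and stands down
print(process_model({"aoa_deg": 25.0}, 1.5))   # unanticipated: quietly commands nose down
```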

Now recall the data inputs to an aircraft system, say a flight control computer. It needs airspeed, angle of attack, flight control positions, and flap positions; it might also need CG, g-loading (Nz), Mach number, and more. It takes that information in and, based on the commands given to it by the pilot, runs it through the process model and then sends outputs to the control surfaces it commands. These can include elevators, ailerons, spoilers, rudders, flaps, slats, trim, and so on. The output depends on the inputs coupled with the programming that forms the process model. OK, hold that thought for a moment.
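Written out as data, and with field names, units, and a stand-in rule that are only assumptions for the example, that flight control computer is just a function from those inputs plus the pilot's command to a set of surface commands. Every field that is wrong corrupts what comes out.

```python
# Illustrative data shapes only; the field names, units, and the trivial
# stand-in rule are assumptions, not any particular aircraft's design.
from dataclasses import dataclass

@dataclass
class FlightControlInputs:
    airspeed_kts: float
    mach: float
    aoa_deg: float
    nz_g: float               # normal load factor
    cg_percent_mac: float
    flap_position_deg: float
    pilot_pitch_cmd: float    # column / side-stick input

@dataclass
class SurfaceCommands:
    elevator_deg: float
    stabilizer_trim_units: float
    spoiler_deg: float

def flight_control_computer(inputs: FlightControlInputs) -> SurfaceCommands:
    # The process model lives here: output = f(inputs, programming).
    # A deliberately trivial stand-in rule, just to show the shape.
    return SurfaceCommands(
        elevator_deg=0.5 * inputs.pilot_pitch_cmd,
        stabilizer_trim_units=0.0,
        spoiler_deg=0.0,
    )
```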

What is your procedure if something happens on takeoff that is not in the books? No QRH, or at least no immediate action items. Let's say a sensor failure: a loss of angle of attack data, a loss of g-load sensing, or even the computer being unable to read a flight control position. You have some sort of fault indication (maybe) right after V1. What would you do? Most procedures have you do something like the following: continue the takeoff, positive rate, gear up, get to a safe altitude, clean it up, then troubleshoot (maybe), or perhaps just continue to the destination.

All nice, except for one little problem. That process model is now flawed due to bad data. The computer is unable to "know" the correct actions. As soon as you change anything, you might get an unexpected outcome, because the computer mixes the new information about what you have changed with the bad data, and all of that goes into the software for a new "decision" about what output to produce. This is, in a nutshell, what occurred in the MAX accidents. The pilots retracted the flaps and BANG, MCAS activated: a change coupled with a bad input in a scenario not anticipated in the design.
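As a crude illustration of that pattern, and emphatically not the actual MCAS implementation (all names, conditions, and numbers below are assumptions), consider a rule that is armed by one input, flaps retracted, and driven by another, angle of attack. The bad data has been there all along; the pilot's configuration change is what lets the rule fire.

```python
# Crude sketch of "a change coupled with a bad input"; NOT Boeing's MCAS code.
# Every name, condition, and value here is an assumption for illustration.

def pitch_augmentation(flaps_retracted: bool, aoa_deg: float) -> float:
    """Return a nose-down stabilizer trim command (units); 0.0 means do nothing."""
    # The rule is only armed in one configuration...
    if flaps_retracted and aoa_deg > 14.0:
        return 2.5            # ...and commands trim when AoA looks too high
    return 0.0

bad_aoa = 24.0                # a failed vane stuck at a high value

# Flaps extended: the bad data is already present, but the rule is not armed.
print(pitch_augmentation(False, bad_aoa))   # 0.0, the airplane seems fine

# The pilot retracts the flaps: same bad data, and now the rule fires.
print(pitch_augmentation(True, bad_aoa))    # 2.5, unexpected nose-down trim
```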

MCAS is not the only "gremlin" of this kind; there are others lurking. For example, something as seemingly innocuous as the pilot giving the computer commands in a way the designer did not anticipate can yield unexpected outcomes. So what can a pilot do? One thought is to consider the way computers work. If the airplane is flying OK, it may be worth considering not changing anything: no change to the configuration, no change to anything within the pilot's control. Just keep it flying, and when any change is made, be prepared for an unexpected outcome.

In a legacy airplane this was not a problem. Changing the flaps or the landing gear was unlikely to entirely change the way the airplane handled, and changing the flaps would not suddenly trigger secondary systems in unexpected ways. That is no longer true. There is simply no way to train pilots to understand all the possible ways every system might react to every circumstance; arguably, even the designers have not thought of all of them. So I would argue that our procedures are not keeping up with the changes to our aircraft architecture. Until they catch up, pilots need a much better understanding of how computers work and how they interact with the world around them.


About Shem Malmquist FRAeS

B-777 Captain. Air Safety and Accident Investigator. Previous experience includes Flight Operations management, Assistant Chief Pilot, Line Check Airman, ALPA Aircraft Technical and Engineering Chairman, Aircraft Performance and Designs Committee MEC Chair, Charting and Instrument Procedures Committee, Group Leader of the Commercial Aviation Safety Team (CAST) Joint Safety Implementation Team on Loss of Control, Human Factors and Automation, and CAST-JSIT Aircraft State Awareness. Fellow of the Royal Aeronautical Society, full member of ISASI, AIAA, IEEE, HFES, FSF, AFA, and the Resilience Engineering Association. I am available for consulting, speaking, or providing training seminars to your organization. Please contact me at https://malmquistsafety.com/ for inquiries.

6 Responses to A New Approach: Thoughts on Automation

  1. To some extent, you invert the reasoning… the computers, or rather the interface of the computers, should have a greater understanding of pilot behaviour: expected, anticipated, reactive and so forth. Pilots are technicians of sorts and, like all people, have mental models that correspond to how technology operates, responds and reacts in a given scenario. Should the technology fail to correspond to the impressions and expectations of those mental models, the reasoning within the pilot has, accordingly, failed to integrate. The Boeing MCAS reflects all of this. The technology did not coordinate its responses with the mental models the pilots had of it in either accident scenario. Design was the prevalent concern within the reactions of the pilots, not the pilots' failure to understand the technology.

    • We definitely need to design with a better understanding of what humans will do and for that I advocate STPA (from STAMP) as the best path forward. However, pilots also need to have a solid understanding of how computers work. There is no way that we can teach pilots about every system out there and all the sub-systems, so creating a general framework of understanding is a better path forward.

  2. Steve Burchill says:

    It does not follow that teaching pilots about computers in a solid sense, as you advocate, undermines the role of interface/interaction design and the perspective informing human behaviour. There are other ways of training pilots about automation than the inner mind of the hardware/software designers. Fundamentally, again, I am not sure that mid-flight a pilot's requisite knowledge of the mechanics will principally alter any action or reaction and the consequent outcome. What is required is the understanding that the coordinated expectations of the mechanics deliver the expected outcome. It is similar to a Formula 1 driver necessarily having to understand a gasket (do they still exist?) and its nuts and bolts, construction and material formula, including its metallurgical complexity, in order to understand that it is the cause of the breakdown mid-race. It will not inform the driver's decision about the next hairpin bend in any particular way. STAMP (STPA) belongs to a broad range of human-centred methodologies which have a heightened profile at the moment. They are useful currently because they can be learned and assimilated by a range of aviation personnel. They can be employed as stand-alone instruments and also integrated with other techniques according to discipline.

    • Regarding pilot knowledge, that is true. That is why I am advocating a more global method that does not require specific knowledge of the nuances to prevent a bad outcome. As for STPA, it is far more than just a human centred methodology. In fact, its roots are in system theory that was specifically created to manage computer-controlled systems and complex system architecture.

  3. Larry D'Oench says:

    “If the designer was a bit short of imagination then the computer might be missing vital information in some scenario that was beyond what the designer imagined.”

    I agree with Shem that designers, and we pilots as well, can be short of imagination. Case in point: when I flew MD-11s I never hand-flew above the mid-20s because the plane was relatively unstable (the pilot's technical term would be "squirrely"). Sadly, at the time it didn't occur to me that I should practice high-altitude hand flying in case of autopilot failure. I was short of imagination. So, in those idle moments, think about the unthinkable as well as the likely and the possible.

Comments are closed.