Just turn it off!


Several years ago my company had a number of senior pilots transitioning from traditional “round dial” airplanes, such as the B-727 and DC-10, as those airplanes were being retired from the fleet. The pilots were moving directly into the left seat of advanced fourth-generation glass cockpits, and quite a few did not take to the transition easily. It was a big jump, although to be fair, the DC-10, and even the B-727, were far more advanced than what was flying in the 1960s, with much more sophisticated autopilots and flight directors. Both airplanes were certificated for Category III autolands, for example.

I was teaching at the time as a Line Check Airman, so I would take students for the line operating experience portion of their training. For those outside the airline industry, this comes after the conclusion of the student’s ground training, which includes systems, simulators, a type ride, simulator “line training” and the rest. So, technically speaking, my “student” was completely qualified and type rated in the airplane, and just needed some actual line experience. Recall that these were already very experienced pilots, many of whom had been flying large wide-body airplanes around the world for years. One would think it a simple matter to integrate the skills they had just learned in the simulator with their previous experience.

However, most of these experienced captains would be visibly nervous as they approached the real-airplane portion of the training. Perhaps it is the realization that this is where the “rubber meets the pavement”; in other words, it is the real thing now. In a simulator a person can mess up and the consequences are not that large. The real airplane is something else entirely, and there is no “freezing the simulator” and no “repeats” possible. Here a small screw-up would mean loss of face, but a larger one could mean a real violation or worse. Being nervous is no way to learn. In addition, I was often given students who were having trouble: they had already had to repeat some portion of their training or even had “busted out” of previous line training. These students were doubly nervous.

Recall that these were pilots who had no problem just flying the machine. They had lots of experience and had passed many checkrides. It was the automation that made them nervous. So many modes and different ways to accomplish tasks, and so many ways to mess it up! While computers added a lot of functionality, they had also greatly complicated things. Each additional feature requires more knowledge to understand what it does and how to use it, and, as with most everything, the context in which it is used changes how we apply it. Mix in different human responses, different ways and timing of activating features, the environment, and the design of the procedures, and, as Yoda might say, “confusing it gets”! This is a textbook example of complexity.

So I developed a routine. After a bit of small talk to establish where the student was mentally, based on experience, background, what had occurred in previous training, and so on, I would tell them, “Look, all that matters is that you know for sure what the airplane is doing. I don’t care if that requires turning off all the automation and just hand-flying, or downgrading it. If you are not 100% sure what the system will do next, put it in a mode where you are.” The stress would visibly vanish from their face. I would add, “Figuring out how to maximize the use of the automation is not what we will do while flying the airplane. I only care that you comply with the clearances and the aircraft limitations and are safe; we can discuss how best to use the automation on the ground.” I should add that in my experience, these discussions are best held over an adult beverage!

In every case my students passed their training and went on to do a fine job in the airplane.  By the end of our time together they were no longer uncomfortable with the automation. They had added it incrementally, while maintaining the base of “actually flying the airplane”.  They knew how to get it to do what they needed, but more importantly, they knew when to disconnect it.  During my time as an instructor I had a 100% success rate for pilots continuing to the line.

In the spring of 2017, as I discussed in a previous article, I presented a paper with MIT research professor Dr. John Thomas on a re-analysis of the Asiana 214 accident in San Francisco using the MIT STAMP method developed by Dr. Nancy Leveson. In addition to uncovering many more causal factors over the course of the work, I found that there were several automation modes that were not even in the flight manuals for the airplane, at least the manuals typically provided to the crews. Furthermore, none of the interviewed B-777 pilots (including training pilots) understood every automation mode. There are just too many different combinations. This is not unique to Boeing; I believe it is probable that the same is true for all the advanced-flight-deck airplanes currently operating.

The thing that jumps out at me from all this is that there is only one way for a pilot to be sure what the airplane will do, and that is to turn off all the automation, autopilot and autothrottles, and just fly the airplane. Even then, in a fly-by-wire airplane there are aspects they will not be aware of, but turning off the automation will resolve most of the disconnect between their perceptions and what the airplane will do.

What we see, as recently described in an article by Ross Detwiler in Aviation Week’s Business and Commercial Aviation publication, is that instead of turning it off or even downgrading to a lower mode, pilots attempt to “fix” the automation when it is not doing what they want. I believe he is correct that pilots then tend to focus on that to the exclusion of the “big picture,” and if both pilots get into that mode, particularly near the ground, the outcome is usually bad.

A big part of the reason for this, I believe, is that we assume the airplane is not behaving the way we expect because of something we did, i.e., a programming error or a wrong selection. To be sure, that is often true, but trying to fix it “on the fly” (literally) puts us into a feedback loop. Breaking out of the mode is not unlike the problem of pilot-induced oscillation (PIO), which I discussed in a previous article. It is hard to “let go,” but changing what we are doing is the only way to get out of the feedback cycle. Unlike a PIO, in this case the cycle is with our cognitive process trying to “solve the problem” rather than just abandoning it. Each solution we try provides positive or negative reinforcement, giving us a dopamine shot like a video game or a social media app. As I have told many students, the time to solve the problem is not in the air. Turn off the automation. Do something different. Just as pilots can be trained to exit a PIO, this too can be trained. We just need to do it.

===================================================


About Shem Malmquist FRAeS

B-777 Captain. Air Safety and Accident Investigator. Previous experience includes Flight Operations management, Assistant Chief Pilot, Line Check Airman, ALPA Aircraft Technical and Engineering Chairman, Aircraft Performance and Designs Committee MEC Chair, Charting and Instrument Procedures Committee, Group Leader of the Commercial Aviation Safety Team-Joint Safety Implementation Team (CAST-JSIT) Loss of Control-Human Factors and Automation, and CAST-JSIT Aircraft State Awareness. Fellow of the Royal Aeronautical Society, full Member of ISASI, AIAA, IEEE, HFES, FSF, AFA and the Resilience Engineering Association. I am available for consulting, speaking or providing training seminars to your organization. Please contact me at https://malmquistsafety.com/ for inquiries.

4 Responses to Just turn it off!

  1. The article is based on a sensible point of view. However, the automation issue is not only visible at the interface level. There are two other aspects not commonly treated in the human-machine interface discussion: manoeuvrability and systems management.
    In the new generation of aircraft, the technology is embedded and it is not possible to disengage it. Even when you fly manually, the computers filter the pilot’s inputs.
    The computers literally fly the airplane, because these aircraft are built for maximum efficiency, which means they are unstable. Only a computer could manage all the variations at high and low speed. During the final approach, especially on Airbus models, where the side-stick is not linked to the other pilot’s, there are three inputs to deal with: gusts, the airplane’s corrections and the pilot’s inputs. This may lead to destabilization, even if you switch off the autopilot or the auto-thrust.
    On the other side, systems management: with the dark-panel philosophy you hardly ever touch a switch, as the overhead panel is based on automatic management of the systems. Once a failure shows up, the pilot can hardly spot where and how the switch should be activated. So complacency arises not only at the flying-skills level but also at the systems-management level.
    As a further reading see also: https://www.intechopen.com/books/automation/automation-in-aviation

    • Antonio,

      Great comment and I fully agree. Thank you for the link to your work also. I alluded to this when I wrote “Even then if they are in a fly-by-wire airplane there are aspects that they will not be aware of…” but decided not to delve too deeply into it in this article, mostly because I was only trying to focus on what pilots can directly control, but what you wrote about is a good topic unto itself.

  2. glintern says:

    Referring to your STAMP paper (or is it CAST, and what is the difference), you make 25 recommendations.

    Did all contribute to the accident, or was there a small number that was important, and the rest just nitpicking?

    How is it possible that a system like this can have 25 things wrong with it and nobody notices until there is an accident? For example, you recommend that we should “Create a mechanism to discover and address deficiencies in training, procedures, and manuals, like the missing and conflicting information about A/T wake-up and mode changes.” How is it possible that in an organized system such as commercial aviation, that is not already being done? Much of what you recommend suggests a systemic problem.

    Would fixing these things make the system safer, or might it make the system less safe? You offer no evidence in that paper that any of these fixes would have the desired effect.

    Some of your recommendations are couched in permissive terms (consider). That suggests that you do not think them conclusive.

    Some of your recommendations touch on the complexities of human cognition that no one fully understands. Why do we forget things? How can we make sure we do not forget the important stuff?

    In summary, application of STAMP can find things in a system you may not have seen before, but it is not at all clear that it can lead you to a safer system.

    • Thank you for your comment. It is important, as I am sure many others may have the same questions about John’s and my paper on Asiana 214. The following is our response:

      > Referring to your STAMP paper (or is it CAST, and what is the difference),

      The difference is between the theoretical model, STAMP, and the tools built on it; CAST (Causal Analysis using STAMP) is one of those tools. I tend to be careful using the CAST acronym in aviation articles because it is often confused with the Commercial Aviation Safety Team, which shares the acronym; I am sorry for any confusion that may have caused. We don’t usually talk about underlying theoretical models, so people are expecting to hear about processes or tools. STAMP is a model of accident causality. CAST is an analysis technique built on that model of causality. STAMP expands the traditional model of accident causality that almost everyone uses, which assumes accidents are caused by system component failures laid out in a chain of events. That model was adequate for older and simpler systems, but it is not enough to explain the types of accidents we are now having in the complex systems we are building today.

      > you make 25 recommendations. Did all contribute to the accident, or was there a small number that was important, and the rest just nitpicking?

      Yes, they all contributed. Every recommendation contains references to the specific causal factors and unsafe control actions (UCAs) that contributed to the accident (and which the recommendation addresses).

      There is a strong tendency to try to reduce the causes to only a few “important ones” (probable cause(s) vs. contributory causes, etc.). There is also a strong bias toward decomposing the accident into separate causes. The problem was not any one of the different factors but that the entire system design did not work together to prevent the accident. There is no difference in importance. The entire safety control structure was ineffective in this case and needs to be redesigned. But this is the systems approach, and people are not trained to think that way, i.e., they assume that the causes are independent.

      > How is it possible that a system like this can have 25 things wrong with it and nobody notices until there is an accident?

      Actually, I bet there are more than 25. We were limited to information that was collected and shared in the public NTSB factual report. Several questions were raised by CAST that were never raised or answered by the NTSB. Accidents tend to be investigated superficially and it’s common for only one or a few root causes to be identified (like pilot error), but unfortunately that kind of superficial investigation overlooks deeper and arguably more important systemic factors. Just look at the causal factors cited by the NTSB for the Asiana crash:

      “The National Transportation Safety Board determines that the probable cause of this accident was the flight crew’s mismanagement of the airplane’s descent during the visual approach, the pilot flying’s unintended deactivation of automatic airspeed control, the flight crew’s inadequate monitoring of airspeed, and the flight crew’s delayed execution of a go-around after they became aware that the airplane was below acceptable glidepath and airspeed tolerances.” [Emphasis mine]

      Their assessment of probable cause is really just a list of symptoms that have resulted from much deeper systemic causes. Take the last bit about this flight crew’s delayed execution of a go-around, for example. If the analysis stops there, what can you do about it? The “cause” has no information about what actually caused the behavior, and it doesn’t help explain why the pilots did what they did. It almost sounds like this particular flight crew was simply inept and should be replaced. But did you know that 96% of pilots do not initiate a go-around even when the approach is not stable? This isn’t a problem with this one particular pilot; it’s a systemic problem with the entire fleet. That statistic and conclusion are not in the NTSB report. Trivial conclusions like pilot error can be very tempting when ad hoc methods are used. What CAST offers is a rigorous methodology to uncover the systemic causes, not just the symptoms. For example, using CAST we found that almost all of the established go-around criteria were met at around 400-500 feet, that there were contradictions in the Pilot Operation Manual about the authority for making go-around decisions, and many other issues. None of these seem to be addressed in the NTSB recommendations.

      > How is it possible that in an organized system such as commercial aviation, that is not already being done? Much of what you recommend suggests a systemic problem.

      Yes, exactly: it’s pretty clear that there is a systemic problem. But if you don’t apply a rigorous methodology like CAST, that might not be so clear. If you only read the NTSB’s conclusions, it appears this was primarily just a couple of bad pilots. That’s one reason these systemic factors aren’t being fixed: they aren’t being identified.

      > Would fixing these things make the system safer, or might it make the system less safe? You offer no evidence in that paper that any of these fixes would have the desired effect.

      That is what the modeling and analysis are all about. There is no mathematical equation, but we can show the causes and effects and make a logical argument about how each of these things contributed to the accident, and, in fact, to many other similar accidents. Currently we don’t fix the systemic causal factors, so we keep having the same accident over and over again. We don’t recognize it because we stop at identifying “symptoms” of the real underlying problems and never identify and fix the true causes.

      We are not sure what more evidence is possible. We found clear contradictions in the Pilot Operation Manual about who makes go-around decisions, for example, and we recommend they be fixed. I don’t see how any reasonable person would argue that they shouldn’t be fixed, or that fixing the contradictions would make the system less safe.

      Another question: what kind of evidence does the NTSB provide that their fixes will have the desired effect (or any other agency in any industry, for that matter)? We certainly provided more evidence than the NTSB did for the Asiana crash. We even reviewed our results with NTSB folks, including some who were involved in the Asiana investigation, and they agreed with our conclusions and were glad someone was able to identify the gaps in the official investigation.

      Another problem we noticed is that a large portion of the NTSB recommendations don’t even trace back to their identified causes, suggesting that they are “fixing” problems that don’t exist. This makes a little more sense once you realize that the NTSB’s identification of causes and recommendations can be a political and very subjective process. Some NTSB folks have told us that one or two of our recommendations were actually proposed within the NTSB but were overruled by the board members. That’s another area where CAST can help: the recommendations are not disjoint from the analysis. Every recommendation specifically targets one or more of the causal factors in CAST, and clear traceability is provided every step of the way.

      > Some of your recommendations are couched in permissive terms (consider). That suggests that you do not think them conclusive.

      Yes, we did use the word “consider” in some places. This is because we did not have enough information to fully assess those recommendations ourselves, but they need to be considered and not overlooked altogether. For example, we found automation behavior involved in this accident that was counterintuitive to the vast majority of pilots (including Asiana instructor pilots, pilots at other airlines, and even FAA test pilots and EASA pilots). This behavior is contrary to the behavior of practically all other aircraft (it’s unique to the B777 and B787), is only triggered by an extremely specific combination of modes, and leads to the automation unexpectedly and automatically disabling a critical safety feature. We spoke to pilots, investigators, and even Boeing engineers, but nobody seemed to know why the automation was designed to disable this safety feature in this specific situation (which occurred in Asiana 214). We were hesitant to make a concrete recommendation without understanding the reason behind Boeing’s design decision. There may be some valid reason for it, but we couldn’t determine what it is. It was such a counterintuitive and critical automation behavior that we did not want it to be overlooked. That’s why we used the word “consider”: it’s really an action item for further investigation. The NTSB never asked these questions during the investigation, so we didn’t have enough information to be more concrete.

      > Some of your recommendations touch on the complexities of human cognition that no one fully understands. Why do we forget things? How can we make sure we do not forget the important stuff?

      I’m not sure we need to bring in every complexity of human cognition to create useful recommendations. First, many of our recommendations were very specific things like:
      – R-11: Provide a clear and consistent definition of go-around responsibilities. Specify who makes go-around decisions, when, and how. Confirm that these responsibilities and procedures are being followed.

      If we weren’t using CAST, we could have said that the pilots simply “forgot” go-around responsibilities. But then what? I suppose you’d start wandering through all the complexities of human cognition and forgetfulness. CAST takes a different approach. It provides a clear framework for understanding human behavior and finding actionable reasons why people do what they do. For example, CAST guided us to explore the pilot’s mental model in this case: what did this pilot believe about go-around procedures at the time? We found incorrect beliefs about who is responsible for making go-around decisions, a misunderstanding of the altitude criteria for a stabilized approach, and so on. Then we used CAST to examine the contextual factors that created those flawed mental models. We found contradictions in the procedures, operating manuals, etc. for the exact issues that the pilots had misunderstood. The answer then becomes simple and obvious: fix the contradictions (and the other problems we found). You don’t need to perform new research on human cognition to find these systemic problems.
