I came across this article today which I had written quite a few years ago. How many years ago? Well, TWA was still in business! I should add that my more recent article, here, is a worthy follow-up if you’ve not read it, as is this one. That said, here is the article, as originally written:
At least once each year we each return to fly the simulator. The “game” in the simulator is fairly predictable. We know what to expect most of the way through – we are more surprised when something has not failed or gone wrong than when it has! Although the simulator ride itself is fairly predictable, that probably does not significantly detract from the experience, and we carry our training to the line. This is largely true because we are really practicing the procedures themselves as much as, or more than, how to fly the airplane.
So, here you are, flying the simulator, and the instructor tells you that windshear has been reported in the area. You are now spring-loaded to go for the recovery, and at the first indication of windshear you go through the escape drill. But how much is this like the real world?
Recently I was discussing this issue with a friend of mine, TWA Captain Steve Holmes. Steve also teaches in the simulator for TWA. It turned out that we had each encountered a dry microburst on approach, and coincidentally both encounters were while flying into the same airport, Salt Lake City (SLC). Another thing that both of us found similar was the insidious nature of the encounter. Our experiences were virtually identical.
Approaching the Salt Lake area there were some buildups, along with visible virga. The conditions at the airport itself looked fine, so we continued the approach. No windshear had been reported. On final the air was smooth, and as we descended on the ILS in virtually unlimited visibility all seemed normal. The microburst did not slam us all at once, unlike what I have usually experienced in the simulator. Instead, I found that we kept getting a little bit slow, so I had to add a bit of power. Not a lot, just a bit, but after a while we had a whole lot of power in. At some point it became clear that things were not right, and a missed approach was executed on short final. The aircraft performance on the go-around was sluggish, to put it mildly. With go-around power we just barely held altitude until near the departure end of the runway.
The point here is that it was not immediately clear that this was a windshear encounter. Captain Holmes and I both felt that it was almost like being “sucked in”, in that everything felt and seemed fine until we were fairly committed to flying through the event. In both of our encounters we felt that we were directly under a newly developing microburst, so we encountered only the decreasing-performance portion. He found it particularly disturbing that it could be so easy to get “sucked” into this type of situation, as he trains crews to avoid and escape from such encounters in the simulator. He is very familiar with the procedures and warning signs.
It might be possible to train windshear differently, but that is not the purpose of this article. It is simply not possible to train for every possible combination of circumstances we are likely to encounter. The responsibility for a safe operation will always rest with the flight crew, and it is our knowledge and experience that keep flying safe.
I am going to shift gears a bit and move into a discussion of resilience engineering. I was able to attend the recent symposium of the Resilience Engineering Association (http://www.resilience-engineering-association.org/) of which I am a member. A common thread of my articles is to increase knowledge so we can increase the ability to respond to unexpected scenarios. Organizations can also exhibit resilient behavior, and can create conditions that enable their “sharp end” personnel to behave in a resilient way as well. So what is resilience?
There continues to be a challenge in balancing compliance and flexibility in order to create an adaptable organization with adaptable components. A large part of the challenge is fully understanding the goal: what a resilient organization actually acts like, or even what resilience is. After digesting a lot of material, a good metaphor occurred to me while watching my 14-year-old perform a leading role in the play “The Lion King”.
In a play the baseline is the story, which is winding towards a known end point (at least known to the players and the director). The story is turned into a script and further broken down into scenes. This is akin to our design of the system – the aircraft, the hub operation, or any other system we are trying to manage. The aircraft, building, etc., become “the set”. Even a computer fills that role, as it is relatively fixed in how it responds. The script is the procedures we follow. The actors work to follow the script, and if all goes right everyone and every component does exactly what we expect; it is “work as imagined”, literally. The director is, of course, the manager(s) or management.
However, in the real world, things do not go right all the time, or even most of the time. A prop or set might not work quite right, or an actor might forget their lines. What do the other actors do? They improvise. They adapt to keep the story moving toward the ending that all the players and the director know it needs to reach. They may fill in complete sections, re-route around obstacles, whatever it takes to make the story work to the ending that they all know. This is despite a portion of the set “getting stuck” and remaining on stage, or perhaps not being there when it should be. This is despite another actor completely missing their lines, their cues, or perhaps even not showing up at all!
I think that is an example of the adaptability we want. It requires being okay with things going off script sometimes, as long as we reach the end of the story and things turn out the way they were intended. It also requires actors who are knowledgeable enough, and have the skill set, to improvise when they need to, pulling the divergence back into the story line of the script. This is true graceful extensibility, described by David Woods as “how a system extends performance, or brings extra adaptive capacity to bear, when surprise events challenge its boundaries”. If done right it is seamless: literally, graceful.
The way to get there is by creating an atmosphere where people know that they are judged not on whether something pushes them off script, but on how well they can keep the story moving towards the end we desire. Nor do we create barriers so rigid that once we break through them we cannot get back. If the control (to use Nancy Leveson’s term) is so tightly coupled into the operation that its loss makes it impossible to control through other means, then we have not done our job. An example might be envelope-protection features on an airplane that are so good that pilots no longer have the skill sets to operate without them. Pilots should not be judged in a simulator on how closely they followed the procedures, but rather on how well the entire play was able to reach the proper conclusion.
Unfortunately, this is diametrically opposed to much of the current push in aviation, towards more compliance, specifically “procedural compliance”. This is setting us up for failure. We are emphasizing staying on script, not improvising as required to get the story to end where we need it to.
I am honored to have been a guest on the 401st podcast for airplanegeeks.com. They are a great group of folks who work hard to get accurate aviation information and news out to the world. I discussed how an aircraft accident is an emergent property that arises out of a complex system, which makes accidents generally impossible to predict using traditional methods. This is also related to the work I am doing with Roger Rapoport on our book “Angle of Attack”, still in work.
Complexity is an interesting topic and not one that many people – even those involved in risk analysis or in the business of using data to make predictions – understand. It was somewhat entertaining when I spoke on this topic in Pasadena a couple of weeks ago that one could pick out the physicists and other scientists in the audience, as their faces lit up when I mentioned complexity. It is not a new topic for physicists! An interesting aspect is that the same issues that lead to the inability to make predictions regarding accidents are, at their core, the same reasons that virtually no political pundits were able to predict that Trump would win the Republican nomination. People laughed but agreed when I pointed out that Trump is an emergent property of a complex system!
Coming back to the podcast, here it is and I encourage a listen! Captain Shem Malmquist on the airplanegeeks.com podcast.
We have become accustomed to the idea that we can do more with less, and flying is no exception. Improvements in technology first allowed airplanes to fly without navigators, as long-range radio navigation, then inertial systems, and most recently GPS coupled with simple computers simplified the task. The navigator’s job consisted of a combination of taking measurements off the stars and the sun with basic calculations of the projected aircraft path considering speed, wind drift and the like. All of this was easily solved with the advent of calculators that could do trigonometry, coupled with systems that could determine the current position; it was thus a simple matter to combine the two. While humans were somewhat removed from the process, the pilots were trained well enough in the principles of navigation that they could see if the calculations were not as they should be, much like a person using a calculator should be able to discern if the answer is far off the mark.
Independently, improvements in basic electronic systems and the simplification of system design allowed for the elimination of the flight engineer. While many people may view this as “automation”, most of the jet transport designs currently operational do not include much in terms of having computers run the systems, with the only notable exception being the McDonnell-Douglas (now Boeing) MD-11. Improvements have been made in alerting systems to notify pilots of problems, although most of these systems are not particularly “smart”: they list problems in the temporal order in which they are triggered, with the exception of the MD-11, which does (to an extent) rank the most critical items at the top of the list. The systems have not so much been automated as improved, such that the engineering design of the systems is so simple they require almost no human action most of the time.
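As a rough illustration of the two alerting philosophies, here is a sketch in Python. The alert names, priority levels and times are invented for illustration and are not actual aircraft messages:

```python
# Two ways to order the same set of alerts. Most designs simply list
# alerts in the temporal order in which they triggered; the MD-11
# approach (to an extent) ranks the most critical items at the top.
# Alert names, levels and times below are invented for illustration.

alerts = [
    {"msg": "FUEL QTY LO", "level": 2, "time": 101},
    {"msg": "ENG 2 FIRE",  "level": 3, "time": 102},
    {"msg": "DOOR AJAR",   "level": 1, "time": 100},
]

temporal = sorted(alerts, key=lambda a: a["time"])    # typical design
ranked   = sorted(alerts, key=lambda a: -a["level"])  # MD-11 style

print([a["msg"] for a in temporal])  # ['DOOR AJAR', 'FUEL QTY LO', 'ENG 2 FIRE']
print([a["msg"] for a in ranked])    # ['ENG 2 FIRE', 'FUEL QTY LO', 'DOOR AJAR']
```

The difference matters most in an emergency: the ranked list puts the fire at the top rather than wherever it happened to fall in time.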
The MD-11 does include features that the other airliners do not, such as automatically reconfiguring systems to work around inoperative components or isolate problems, but it still will reach a point where it defers to the human operator to make a decision. An example is that it will shut down one hydraulic system due to overheat, but not two. The designers considered the decision to take a system beyond a certain point to be too dynamic and dependent on circumstances. What one might need to do mid-ocean is far different than near an airport, for example. Still, as most routine systems are automated, the pilot is put into a monitoring mode. If the system takes an action it is designed to notify the flight crew of that action. In most cases the pilot does not have to take additional action, but just consider the effect of the inoperative components on flight planning. However, like the person with the calculator, it is very important that the pilot has a clear understanding of how the system works and how it should be operated. The automation is not doing anything that the pilot would not be required to do absent the automation, but the reliability of the system could lead a person not to do the mental work needed to understand the system.
Further, as the automation is able to remove the workload of operating a complex system, there was less incentive for the engineers to simplify the system as was done on some other advanced designs such as the Airbus or Boeing models. The MD-11 systems are thus relatively more complex. This allows the system to do beneficial things that the others do not. For example, the MD-11 will sense fuel temperature, and if it reaches a certain point it will move fuel from warmer tanks to colder tanks in order to prevent the fuel from getting so cold that it will no longer flow smoothly. In the B-777, if the fuel temperature does get dangerously cold, the pilots must use a combination of flying faster (increasing the temperature by friction) and descending to a lower (hopefully warmer) altitude. The other side of this is that the complexity can make the system more difficult to understand. There have been cases where the MD-11 system was properly reconfiguring fuel pumps and valves to keep the fuel in close balance, and the pilot, not understanding the nuances and thinking that it was doing the wrong thing, turned the automatic controllers off, resulting in an engine failing due to lack of fuel. The system was smarter than the pilot, and we are back to the person with the calculator who does not know the process well enough to know if the answer is correct or wildly off.
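To make the idea concrete, here is a rough, hypothetical sketch of automatic fuel-temperature management. The tank names, threshold and transfer rule are invented for illustration and are not the actual MD-11 values or logic:

```python
# Hypothetical sketch: if any tank's fuel nears a low-temperature limit,
# blend in fuel from the warmest tank to raise its temperature.
# The threshold and tank data are illustrative, not real values.

TRANSFER_THRESHOLD_C = -34.0  # illustrative trigger temperature

def manage_fuel_temperature(tanks):
    """Return the transfers an automatic controller might command."""
    warmest = max(tanks, key=lambda t: t["temp_c"])
    actions = []
    for tank in tanks:
        if tank is not warmest and tank["temp_c"] <= TRANSFER_THRESHOLD_C:
            actions.append(f"transfer from {warmest['name']} to {tank['name']}")
    return actions

tanks = [
    {"name": "tail",       "temp_c": -36.0},
    {"name": "left main",  "temp_c": -20.0},
    {"name": "right main", "temp_c": -21.0},
]
print(manage_fuel_temperature(tanks))  # ['transfer from left main to tail']
```

The sketch also shows the insidious part: a pilot watching pumps and valves cycle without knowing the underlying rule could easily conclude the system is misbehaving.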
The autopilot systems have improved somewhat, but they are still not a much more complex problem than cruise control in a car. The system essentially looks at what the pilot has commanded it to do and then adjusts the controls to put it there. The main difference is that the pilot can command it to follow an electronic signal from the ground or a path from the navigation system. This functionality is not new, and while new autopilots do a better job, it is not much different from the systems available in the 1960s and 70s. The addition of autoland in the 1970s was also relatively simple, in that it just included the radio altitude in the mix so that the autopilot could adjust the controls to maintain a programmed trajectory. Obviously pilots monitor this very closely, as any failure or bad signal can put the airplane in a dangerous state. The autopilot is not smart enough to discern that things “do not look right”, although some of the more advanced systems do disconnect when certain parameters are exceeded – leaving it to the pilot to “save” the airplane.
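The cruise-control analogy can be sketched as a simple feedback loop. This is an illustrative proportional controller, not any actual autopilot control law; the gain, units and numbers are invented:

```python
# The autopilot idea in miniature: compare commanded state to actual
# state and apply a correction proportional to the error, repeatedly.
# Gain and altitudes are illustrative only.

def autopilot_step(commanded_alt_ft, actual_alt_ft, gain=0.1):
    """Return a climb correction proportional to the altitude error."""
    error = commanded_alt_ft - actual_alt_ft
    return gain * error

alt = 9000.0  # start 1000 ft below the commanded altitude
for _ in range(50):
    alt += autopilot_step(10000.0, alt)
print(round(alt))  # converges close to the commanded 10000 ft
```

An autoland just adds another measured input (radio altitude) to the same kind of loop, and an ILS coupler substitutes a commanded path for the fixed altitude; the principle is unchanged.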
A pilot can also program the entire route, from departure to the approach and landing, prior to taking off. Still, it would not be unlike being able to program your automobile’s cruise control to drive certain speeds over various portions of your route; it is just that this “cruise control” would also have control over your steering. It is really just following a programmed script. The system contains a database of points, or a new point can be created via latitude and longitude, and these are simply entered in the order the pilot wants the system to follow them. Altitudes and airspeeds that the pilot wants the system to follow can also be added to each point in the “flight plan”. Despite the public impression, these systems rely heavily on human input and monitoring, just as a programmable cruise control in your car would. The system cannot anticipate hazards, for example, so the traffic jam, icy road or other surprise would require human intervention.
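Conceptually, the “flight plan” is just an ordered list of points, each with optional altitude and speed constraints. A rough sketch, with invented waypoint names and values:

```python
# A flight plan as a programmed script: waypoints flown in order, each
# optionally carrying altitude and speed constraints. All values invented.

flight_plan = [
    {"fix": "WPT01", "lat": 40.64, "lon": -73.78, "alt_ft": None,  "speed_kt": None},
    {"fix": "WPT02", "lat": 41.30, "lon": -72.90, "alt_ft": 18000, "speed_kt": 310},
    {"fix": "WPT03", "lat": 42.36, "lon": -71.01, "alt_ft": 35000, "speed_kt": 450},
]

for wp in flight_plan:
    constraint = f"at {wp['alt_ft']} ft" if wp["alt_ft"] else "no altitude constraint"
    print(wp["fix"], constraint)
```

Nothing in this structure knows about weather, traffic or terrain; as with the programmable cruise control, the hazards are left entirely to the human.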
There are currently several research projects looking for ways to further improve designs so that aircraft can be flown with one pilot, or no pilots at all. The primary incentive might appear to most to be financial, and perhaps it is, but the promoters of these systems argue that it is to improve safety. They argue that the majority of airplane accidents are the result of human error, and therefore by (eventually) eliminating humans flying airplanes we can achieve safety improvements not possible otherwise. It was with this philosophy that Airbus first started designing limits into what pilots could do in their airplanes. They had found that there was little benefit to allowing pilots to overstress the airplane or exceed certain bank angles or pitch attitudes; pilots had not needed to exceed these conditions to prevent a problem, but rather had only done so inadvertently. Thus, the flight controls were designed not to allow a pilot to do so. Boeing had not initially agreed, but the evidence was overwhelming, and so the newer Boeing aircraft, while not entirely preventing such excursions, do make them much more difficult. To be fair, pilots can also take measures to exceed the limits on the Airbus designs as well.
All of this is not particularly an issue. Designing a system so that one cannot do the wrong thing is far different from designing the decision making out of the system. Simple examples are the automobile lock designed so that the car door will not lock a person out of the car, the system that prevents the car being started unless the brake is applied, or even the system that rings a chime if the lights are left on or a seatbelt is not fastened. All of these prevent errors, but do any of them change the driver’s ability to make a decision? Arguably, they do not. The same is true for the current systems in airplanes, so there is some merit to the concept that better system design can eliminate many types of errors.
The aviation industry has been quite good at this, redesigning airplanes and systems to the point that most of the simple errors can be eliminated. Those errors were often caused by a momentary distraction, an attempt to rush through a procedure, or by poorly written guidance. Through identifying and then eliminating these possibilities we have created a system where the chance of having an accident is now extremely low. The fatal accident rate has steadily been reduced, but in the past few years it appears to have reached a plateau. As system and equipment design has improved, the failure rates have dropped, and that has left just one primary cause of fatal accidents – human error.
The problem with this position is quite simply that it is wrong. As pointed out by leading researchers such as Sidney Dekker, Erik Hollnagel and others, it is not that humans are making errors, but that the remaining gaps are so dependent on human resilience to prevent accidents that those times when humans are not able to do so we view as “error”. Think of it again in terms of the automobile. Anti-lock brakes have certainly saved lives, as have improved signage on roadways, better-designed highways, grooved pavement for high speeds, banked curves, better traffic signals, automobile designs that eliminate blind spots and improve visibility to other drivers, and a myriad of other things, but safe driving still depends on human skill, particularly awareness. We have used technology to eliminate the simpler problems, but the larger ones remain. As the British psychologist Lisanne Bainbridge pointed out, “the designer who tries to eliminate the operator still leaves the operator to do the tasks which the designer cannot think how to automate”.
Now imagine we design systems that eliminate crashes and the like. Take, for example, Google’s self-driving car. Certainly it can drive automatically: motion detectors and a very good navigation system can be supplemented with position updates as it passes roadways. It can avoid obstacles and the like, so what might create a problem for the current system? Have you ever driven down a road and seen an issue that was only potentially a problem? Imagine looking down the road and seeing activity that you recognize as two street gangs facing off against each other. The road is otherwise clear, and to the Google car it is just a normal situation. It does not notice the tell-tale signs of a street gang, or perhaps an angry mob, so it will carry you right into the middle of it. These are the types of issues that are difficult to solve with current technology. These are the types of aspects that humans are still far better at, and an airplane has many more of them than a car does.
In an airplane the set of issues that are difficult to program is larger. Take, for example, the smell of smoke. In the car there is no need to program anything: the occupant pushes the emergency stop and you’re done. In the airplane it is not so simple. First, the system would need some way to detect the smoke, and this too is not simple. Back in the car, if you smell something a bit odd you can go on driving, waiting to see if it manifests into real smoke. In the airplane the first indication might be just a subtle change in odor, and as there are large amounts of air moving through various systems, odors change all the time. However, a review of real events has shown that waiting until it was clearly smoke is sometimes too late to prevent catastrophe. Then there is the issue of what the options are. Flying over the North Atlantic at night, is it better to ditch or to try to make the nearest airport? Is it better to depressurize the airplane and slow the burn rate of the fire (depriving it of oxygen) in order to survive the flying time to the nearest airport, or to dive to a lower altitude so passengers do not run out of oxygen? Can a computer make such a decision better than a human?
Other things that are difficult to program include subtle aspects such as a very small change in sound or vibration. Is that a serious change or not? It takes a lot of experience to discern the difference. Then there are aspects that the engineers who design these systems would themselves need to understand better, such as meteorology.
All of this ignores the issue of power. If something interrupts the electrical power of the airplane and its components, how is the computer flying the airplane to be protected? A good example is a fire burning through power systems. So until computers advance to the point of being at human level and the power issue is resolved, the concept of replacing human pilots with computers is not realistic. Once we reach human-level artificial intelligence there is another set of issues which make much of this entire topic likely moot, as will be discussed at the end of this article.
So what about the idea then of leaving a single human in the cockpit to capture these issues?
Humans are subject to a number of cognitive limitations, and historically it was considered that having more than one pilot improved safety because the second pilot could catch the first pilot’s errors. True, there are plenty of aircraft that operate with a single pilot, everything from military aircraft to very sophisticated private aircraft. Smaller charter-type airlines also operate this way in many cases, so clearly the workload issues could be worked out, at least during routine flights. However, the accident rates for these categories are much higher than would be accepted for mass transportation. Partly this is due to people making mistakes, misperceiving things and the like, but as we create systems to capture these we still see accidents. Why is that?
It is the general view that humans are error prone, and that is what led to the idea of removing humans entirely from the equation in the first place. The generally accepted reason that adding a second pilot improves safety is that a second person will notice these errors and speak up about them. Indeed, that is the basis for the crew resource management (CRM) training instituted starting in the 1980s. The concept was that by training pilots to speak up when they noticed a problem we could solve many issues; underlying this is the assumption that in many cases the second pilot did notice an issue but did not point it out for various reasons. These could include fear, power-distance, not wanting to “make waves”, or even anger. There were perhaps some actual cases of this, and so CRM training became “the fix” for this problem. Simultaneously, equipment became more reliable, the design of procedures became better, and systems were installed that would warn pilots of dangerous conditions, such as approaching terrain, windshear, too steep a bank angle, approaching a runway on the ground, a collision risk and so on. In addition, ground-based systems were installed so that air traffic controllers would also get alarms and be able to shout a warning for many dangerous situations. And accident rates did go down.
So, was it the CRM that led to this improvement? The truth is that the evidence to support such a conclusion is weak at best. That is not to say that CRM is a bad thing, but rather that it may not have made as much impact as its proponents desired. The same can be said of other programs. One example was a program essentially centered on the concept that the more precise a person attempts to be in all areas of their life, the “smaller the target” will be and the less likely they will be to deviate from what they intended to do. Certainly not a harmful concept, but there is no evidence it has any correlation with accident prevention. Other areas of emphasis are easier to justify, such as improved diet, hydration and, of course, mitigating fatigue. Fatigue can definitely be a problem, and people do make more errors when tired, but the entire approach is again based on the assumption that errors are the problem, when the real issue is the loss of resilience, as discussed previously.
That said, obviously a second person can help capture errors, but the real value is that two pilots, if properly trained to work together, create a shared mental model and then act, essentially, as one mind – but one with multiple senses. That also has a multiplier effect on the experience level and the ability to cope with unusual situations. This does not just double the resilience found in one person; it magnifies it. Humans are able to accommodate variability, and two well-trained humans working closely together are better than one. More can be even better, as Captain Al Haynes described after surviving the Sioux City accident.
To make this work, the system would need to be smart enough to anticipate what the human needs. That is something the other pilot does: they are not only able to react, but actually anticipate the needs and actions of the other pilot. Computers are rather limited at this currently, “auto-correct” being a case in point. Would we really want a virtual computer “co-pilot” that reacts the way “auto-correct” does?
The proponents of the concept counter by stating that they can have a person on the ground serving as a “virtual co-pilot”, ready to assist if needed. The person on the ground would attend to multiple flights, under the premise that only one might have an issue at a time. This makes one wonder. Have you attended a virtual meeting? Even under the best of circumstances there are limitations, and subtle cues are missed. Hand gestures, facial expressions and many other nuances would be lost. In reality, the person on the ground would be working as a dispatcher. Dispatchers are already part of the decision team for safe flights at all major airlines, so we are not adding anything particularly new.
So the disadvantage of this scheme would be losing the “shared mental model”, because no matter what sort of data connection there is, the second person on the ground would not actually “be there”. They would not be experiencing the sensations; they would not have “skin in the game”. Even assuming there was some way to transmit some of those aspects, they would still need continuity. Not being completely immersed in what was happening until there was a problem would not be unlike the situation of the Air France 447 captain, who did not come up to the cockpit until after the airplane was stalled. He had almost zero chance of sorting out the issues. If we want to fix that, then the person on the ground would need to be virtually “in the cockpit” for the entire flight, which would mean they could not be virtually in multiple cockpits simultaneously. Of course, once we have done that we have saved nothing.
Finally, there is, of course, the problem of someone who is suicidal or similar. The Germanwings case highlights that issue, and there is no known psychological testing that would ferret out that sort of issue reliably. In sum, it is clear that the impediments to both pilot-less and single pilot transport aircraft are larger than most realize.
So what about artificial intelligence? There is certainly no question that once computers reach human-level cognition they will be able to fly an airplane as well as a pilot can. They would need appropriate sensors to pick up subtle odors, vibrations and sounds, but that is not a difficult problem. Methods could be devised to ensure they stay powered, so that issue is also surmountable.
It is estimated that computers of this level could be operational in the next few years; although others believe it will take longer than that, the overall consensus is that they will be up and running by the end of the century. Is this something pilots should be concerned about?
The answer is “perhaps”, but the real truth is that once human-level intelligence is created, the world will be so completely changed that who flies airplanes is likely not to be a large concern, even for those who currently make their living at it. The reason is that this level of artificial intelligence, referred to as “artificial general intelligence” (AGI), is very unlikely to be like a Hollywood movie.
Current technology includes a lot of what is considered “artificial intelligence”, or AI. This level includes predicting words on a smartphone or tablet, and numerous other applications. Google’s new car is in this regime. These systems are able to “learn” on their own as they “watch” what you do, and so improve their performance. Cool stuff.
It seems to follow that if you create an intelligent enough computer that is learning on its own, at some point it will be roughly equivalent to human level, and since we humans tend to anthropomorphize objects, we think of it as being human-like. Indeed, it would mimic many human traits, as it would be logical to design it to speak, etc. However, as much as it might seem to be human, it would not be. A computer is not even a biological organism. Tim Urban has a very good discussion on this topic, and one of his illustrations considers a spider and a guinea pig. He points out that if one were to make a guinea pig as intelligent as a human it might not be so scary, but a spider with human-level intelligence is very unlikely to be friendly or a good thing, and a computer, not being a biological organism at all, is more different still. Tim points out that a spider is not “good” or “evil” as Hollywood likes to portray things; it is neither, just different, and likewise a computer would be. It reacts to things based on its programming, but once it can self-learn at that level, its motivations are based only on what its job is set to be. A short example of how this could go wrong is illustrated in this SMBC comic.
Humans are social animals, primates with the social structure of termites. We survive as a result of that social aspect. Termites have evolved to “know” that any individual will sacrifice itself to prevent the destruction of the hive. Altruism and self-sacrifice are not things we attribute to insects, but the fact is that termites, bees, ants and other social insects will take actions that in humans we would consider altruistic. By being social animals, humans are able to succeed where non-social animals could not. As a result we have evolved to be social, and our “programming” reflects this. It might be described very roughly as follows:
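A very rough sketch of that programming in code form, with the priority ordering taken from the discussion below; the list itself is only illustrative:

```python
# Evolutionary "programming" as an ordered priority list: the stronger
# the genetic connection, the stronger the drive to protect, even at
# one's own expense. Ordering follows the surrounding text; illustrative only.

protection_priority = [
    "protect your children",                     # the most direct genetic stake
    "protect your spouse and immediate family",
    "protect your relatives",
    "protect your tribe or nation",              # likely shares some of your DNA
    "protect strangers",                         # may or may not occur
]

for rank, rule in enumerate(protection_priority, start=1):
    print(rank, rule)
```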
The last items might or might not occur; many will protect themselves before helping strangers. The programming varies. Very few would not give their life to protect their children, their spouse and immediate family, and we are programmed to keep it in that order. The entire point of all of this is to ensure our genes survive, and it is better to ensure that even a bit of your DNA makes it (you will most likely have relatives in your tribe, nation, etc.). It seems that the stronger the DNA connection (or, in the case of a spouse, the likelihood of ensuring your genes’ survival), the stronger our will to do anything to protect them, even at our own expense. Of course, intrinsic in all of this is procreation itself.
Obviously, a computer has only the traits we have programmed it to have, so as much as it might appear to be human, it really is not.
This leads into the larger concern. Computers today process information millions of times faster than humans but still lack human intelligence. This article is not intended to get into the technical aspects of how our brains are structured, but suffice it to say that the structure of our brains allows for ways of processing information and making connections that computers are not yet capable of. Once they are, though, they will combine those connections with that far faster processing. Connect that to the internet and things happen fast.
Consider a computer with this capability and the ability to learn. It starts as a toddler, but an hour later it has the ability and knowledge of Einstein, and an hour after that it has the combined knowledge of all the great thinkers. Unlike us, it has constant access to ALL of that knowledge and much faster processing. The problems that take us years to solve, or that appear to have no resolution, are likely to be trivial for it.
Is there any doubt that it could rapidly find ways to make itself even faster and smarter? Does it have the ability to adjust its pathways and improve its structure based on its knowledge? If we give it that ability, or it figures out how to do so on its own, then its intelligence can increase even faster. Above AGI is artificial super intelligence, or ASI. In this realm we are looking at a computer that is not just a little more intelligent than us, but millions of times more. This is a system that might realize there is a way to manipulate matter, time or space. It would not be limited to our perceptions of reality. The trouble is that it would be farther above us than we are above an ant, and we might not be relevant to it, or might be just a nuisance.
All of this is such a tremendous game-changer that who is flying our airplanes becomes a somewhat trivial issue. ASI could lead to solving all the problems of humanity, or to the end of humanity. People like Stephen Hawking and Bill Gates are very concerned. Elon Musk is so concerned that he says he spends a third of his waking hours thinking about it. This while running several companies! Hopefully it will turn out well. If it solves all of the problems of humanity, then all of us may be able to live just doing what we want, without any real need for work. If it goes badly, then none of this will matter anyway.
The bottom line here is that we might see a push for single-pilot or even no-pilot airplanes, but if we do, it will be based on a fundamental misunderstanding of what the issues are and where the risks lie. We might automate the basic functions, but that would still leave us vulnerable to the real “corner-point” scenarios that lead to actual accidents. Contrary to popular opinion, most accidents do not follow a simple linear causal chain. It would be safe most of the time, true, but not as safe as the public demands today. It might plateau at the safety levels reached in the 1970s or so. Reaching the higher safety levels now demanded by the public and regulators would require AGI, and once we reach that point the outcome moves in directions that are beyond our ability to reasonably anticipate.
As pilots we need to make decisions based on reality. Anything else can lead to a very bad outcome. But what is reality, and can we even tell the difference between reality and perception?
We live in a probabilistic world. This may seem counterintuitive. There may be an objective reality somewhere, a deterministic reality where an input leads to a very clear output, but even if that is true (and it is not clear that it is), we cannot perceive it as such, because our own senses limit what we can observe.
Is the color depicting the oceanic areas on the map at the top of this page blue? That perception depends on the individual’s color perception. A person who is color-blind might perceive that area differently than someone who is not, but what the color-blind person perceives is reality to them.
Each person differs in their individual perceptions, and thus their concept of reality is tainted by deficiencies in their own sight, hearing, sense of touch or equilibrium. Their perception is further impacted by any biases they might have (as discussed in my previous post here) and by other factors such as fatigue.
Human reaction is based on what we believe is most likely to be true, given what we perceive as filtered through the factors described above. If we are aware of something that might impact our ability to discern objective reality (as the color-blind person described above might be), then we adjust our estimate of the probability that we are correct accordingly.
As pilots we are even more removed from reality than most, because we perceive a world far from the one we evolved to perceive. There are many examples of this. Accelerations distort our reality, and we need instruments to be sure which way is “up.” On landing in a big airplane, we perceive motion based on what our part of the airplane is doing, which can be quite different from what is happening at the landing gear.
Pilots are particularly dependent on the probability of the situation, but must be constantly aware that what they perceive might not be reality. Are our instruments correct? Is that sense of acceleration due to our pitch attitude or to our forward acceleration? Is that apparent increase in height due to local changes, or is the airplane actually changing height? We cannot blindly trust anything that we see; we must weigh all of it based on conditional probabilities. Is X true given Y? Are there other factors that would make it not true? As pilots we make these determinations through experience, and it would be the rare pilot who calculated the probability using something like Bayes’ theorem. Regardless, we constantly need to second-guess our assumptions and reassess as new information comes in. Is there evidence to support what we believe to be true? How reliable is that evidence? Is it actually evidence, or just what we want to believe? These are the kinds of issues that can lead a pilot to press on into bad weather, or with insufficient fuel, simply wanting to believe their own construction of reality. The same is true of a pilot descending into terrain.
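For readers curious what that kind of Bayesian updating looks like on paper, here is a minimal sketch in Python. The scenario and every number in it are made up purely for illustration: a pilot is 99% sure the altimeter is correct, then a cross-check disagrees with it, and Bayes’ theorem tells us how much that single piece of evidence should erode our confidence.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem.

    prior               -- P(hypothesis) before seeing the evidence
    p_evidence_if_true  -- P(evidence | hypothesis is true)
    p_evidence_if_false -- P(evidence | hypothesis is false)
    """
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical numbers: we start 99% confident the altimeter is correct.
# Suppose a cross-check disagrees with it 2% of the time when the altimeter
# is fine, but 90% of the time when it is faulty. The cross-check disagrees:
posterior = bayes_update(prior=0.99, p_evidence_if_true=0.02,
                         p_evidence_if_false=0.90)
print(round(posterior, 3))  # prints 0.688
```

One contrary cross-check drops our confidence from 99% to about 69% — not proof of a failure, but more than enough reason to stop trusting the instrument blindly and look for more evidence, which is exactly the kind of reassessment described above.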
Risk assessment: it is what we do, and pilots are quite good at it, but we need to constantly train our brains to ensure that we are making decisions based on actual evidence, and not on what we believe to be true as a consequence of bias and perceptual limitations.
I am honored to say that the story has now been published and you can click on the image below to view it and watch the video. The URL is also listed below.