Did AI or Poor Design Nearly Crash Lou’s Airliner?

With use cases that go well beyond virtual assistants, artificial intelligence (AI) is no longer the next big thing—it is the current big thing. Believe it or not, amongst the high-profile product announcements at CES 2024—the Consumer Technology Association’s annual trade show in Las Vegas—was an AI-powered air fryer that automatically detects what type of food it is cooking. Users just press start, according to Dean Khormaei, CEO of Chef AI, who told journalists covering the trade show that his company’s product turns even the worst cooks into chefs.

In the past, the hype around new technologies has fizzled after promising revolutionary advances in kitchen appliances. For example, before the Internet toaster became an industry joke, a prototype used to promote the Internet of Things years ago generated headlines that had consumers thinking they would soon be living free of household chores, like the family on TV’s The Jetsons.

But AI isn’t new. It has been around for a long time, and not just as a concept. As Forbes contributor Eli Amdur noted late last year, “If Alan Turing played around with it in the 1940s, Arthur C. Clarke and Isaac Asimov wrote about it in the 1950s, Stanley Kubrick made a film about it in the 1960s, and IBM introduced the first commercial application of Watson in 2013, then AI is nothing new.”

And while AI may not eliminate the need for human chefs anytime soon, investment in AI ventures has increased significantly in recent years thanks to other use cases that have the potential to offer significant ROI. In 2012, according to an Organization for Economic Cooperation and Development (OECD) report highlighted by Amdur, the technology attracted less than 5 per cent of global VC investments. By 2020, it accounted for a 21 per cent share worth an estimated US$75 billion.

Most of this investment funded ventures in the United States and China, where the main focus was on advancing the development of driverless vehicles and other mobility solutions, followed by use cases related to health care, pharmaceuticals, and biotechnology. This, of course, was before commercial and consumer interest in AI-powered virtual assistants exploded in 2023, largely thanks to the popularity of OpenAI’s ChatGPT.

As interest and investment in AI has soared, however, concerns over its deployment have also increased. There are big-picture worries about the so-called Singularity, which ChatGPT calls “a hypothetical point in the future when artificial intelligence (AI) surpasses human intelligence, leading to rapid and exponential advancements in technology.”

Interestingly, ChatGPT fails to mention that the Singularity is also a hypothetical point in the future when Terminator-type scenarios threaten the human race. But regardless of what happens down the road, AI applications are already being trusted to do things that present risks much greater than producing documents with factual errors or burning an air-fried meal.

Simply put, while AI can make our lives easier, its adoption comes with an unknown price, especially when we deploy the technology to automate mission-critical systems. After all, when we partially remove humans from the equation, we run the risk of having less control when things inevitably go wrong.

This article recreates a conversation that originally took place between three academics interested in AI. Early last year, Derrick Neufeld, an information systems professor at Western University’s Ivey Business School, invited Ivan Belik, a management science scholar at the Norwegian School of Economics, and Louis Bielmann, a former airline captain who is now a finance professor at Trinity Western University’s School of Business, to meet for a discussion on the limitations and risks associated with automation.

The discussion had an aviation backdrop, opening with Neufeld inviting Bielmann to describe an incident that occurred years ago when he was a senior first officer with a major European airline. Today, the incident in question—which involved a takeoff that almost ended in disaster—serves as a cautionary tale that vividly illustrates how automated systems, including AI systems, can be a liability rather than an asset in situations unforeseen by the designers.

The Incident

Derrick Neufeld: Lou, thank you for your willingness to meet and have a conversation about AI, systems design, and your experience as an airline pilot. Previously you asked me some questions about the future of artificial intelligence, so I’ve invited a colleague and expert in AI, Dr. Ivan Belik from the Norwegian School of Economics, to join our conversation. Would you mind recounting the events leading up to that serious flight incident?

Lou Bielmann: Thanks, Derrick, I’m excited to hear what you guys have to say about systems design and AI. The event in question took place in the mid-1990s. Our flight was cleared for takeoff from Tokyo’s Narita International Airport and was destined for Zurich, Switzerland. With fuel tanks filled for this long-range flight, and the passenger cabin at near-full capacity, we were taking off at the maximum allowable takeoff weight. It was my leg to fly, while the captain acted as assisting pilot. Our aircraft—a big, three-engine plane—was equipped with auto-flight and flight management systems that were state-of-the-art at the time.

During takeoff, pilots are very attentive to a critical speed called V1, above which a takeoff can no longer be discontinued, as there would not be enough runway length remaining to stop. I don’t recall what our exact V1 speed was for the conditions on that day, but it would have been high—something on the order of 145–155 knots (167–178 mph). With our thrust levers pushed forward for takeoff, the engines spooled up and the plane began accelerating. Everything was normal as we picked up speed, up until the moment the captain called out “V1.” At that instant, the engine fire warning for engine 3 (right wing) started blaring loudly, indicating engine 3 was on fire. The airplane began to shudder violently and yawed sharply to the right, which I had to correct with a strong push on the left rudder pedal. We later learned that there was a long flame, 40 to 50 feet long, trailing from the engine. The worst possible moment for a serious failure is just after accelerating past V1, because it is too late to abort the takeoff but there is still a lot of acceleration that must take place before the airplane can lift off the runway. Our acceleration now had to proceed with one engine producing no thrust, resulting in a sluggish takeoff run that took nearly the full length of the runway to get airborne, followed by a shallow flight path during climb-out.

At this point, I’ll mention that the aircraft’s automated thrust system goes into T/O CLMP (takeoff-clamp) mode on takeoff, which is designed to lock the thrust setting at maximum takeoff power until an altitude of 1,500 feet is reached. Below that altitude, it is meant to be technically and mechanically impossible for the auto-thrust system to command lower thrust. This feature is meant to moderate pilot workload during the critical early phase of flight, but it would soon be the source of a nearly disastrous problem for us.
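
To make the intended behaviour concrete, here is a minimal sketch of a takeoff-clamp rule in Python. Everything about it is an illustrative assumption (the function name, the normalized thrust values, the structure); certified auto-thrust logic is vastly more complex. It simply encodes the invariant Lou describes: below 1,500 feet, the system should be unable to command anything less than takeoff power.

```python
# Illustrative sketch only: not the actual avionics implementation.
TAKEOFF_THRUST = 1.0        # normalized maximum takeoff power
CLAMP_ALTITUDE_FT = 1500    # altitude (AGL) below which thrust stays clamped

def commanded_thrust(altitude_agl_ft: float, autothrust_request: float) -> float:
    """Return the thrust the auto-thrust system is allowed to command.

    Below the clamp altitude, the request is ignored and takeoff power is held;
    above it, the request passes through (bounded to the valid range).
    """
    if altitude_agl_ft < CLAMP_ALTITUDE_FT:
        return TAKEOFF_THRUST  # T/O CLMP: no thrust reduction permitted
    return max(0.0, min(autothrust_request, TAKEOFF_THRUST))

print(commanded_thrust(300, 0.7))    # expected: 1.0 (clamped at takeoff power)
print(commanded_thrust(2000, 0.7))   # expected: 0.7 (clamp released)
```

In the incident described below, the real-world equivalent of this guard failed: thrust on the two good engines dropped below takeoff power while the aircraft was still far below 1,500 feet.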

When we finally achieved VR (rotation speed), I gently pulled back on the yoke, the nose of the airplane lifted, and we were airborne. It was a relief to be off the ground, but our shallow climb angle and the still-burning engine under the right wing were unpleasant, to say the least. As the pilot flying (PF), my attention was 100 per cent focused on flying the airplane. To accurately maintain the correct speed and compass heading required that I scan between the outside world and the flight instruments directly in front of me. The standard procedure for engine failure on takeoff was for the PF to verify a positive rate of climb, and then to call for the pilot not flying (PNF) to do the following: “gear up . . . check takeoff thrust . . . confirm engine-out.” These three items were references to retracting the landing gear, ensuring that takeoff thrust was indeed set to T/O CLMP mode (as should automatically be the case), and confirming to the flight management system that there was an engine-out via a simple button push on the centre pedestal.

The PNF confirmed all three items to me as I called them out, yet there was something dreadfully wrong. The airplane was not climbing. We gained a little altitude initially—maybe two or three hundred feet above ground level—but now we were not achieving even a shallow climb-out. I could feel by the “seat of my pants,” and see on my vertical speed indicator, that we were level and about to start descending. As there was precious little space between us and the obstacles beneath the takeoff path, I had no choice but to trade airspeed for preventing a descent. This trade-off of speed for maintaining altitude put us 7 knots below what should have been our minimum speed. We were now in an uncomfortable balance between ground contact and aerodynamic stall. I was perplexed: having carried out this procedure many times in the simulator, both as a trainee and as a simulator instructor, we had done everything “by the book.” We should have been climbing.
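
As a rough back-of-envelope check on how thin that margin was, the exchange of airspeed for height can be estimated from the kinetic and potential energy involved. The speeds below are assumptions drawn from the 145–155 knot range mentioned earlier, so the result is indicative only: giving up about 7 knots buys on the order of 90 feet of altitude (or arrests an equivalent sink) and nothing more.

```python
# Rough energy-trade estimate; the speeds are assumed, not from the incident report.
KT_TO_MS = 0.5144   # knots to metres per second
M_TO_FT = 3.281     # metres to feet
G = 9.81            # gravitational acceleration, m/s^2

v_before = 152 * KT_TO_MS   # assumed speed before trading speed for height (m/s)
v_after = 145 * KT_TO_MS    # roughly 7 knots slower (m/s)

# Exchanging kinetic energy for potential energy: delta_h = (v1^2 - v2^2) / (2g)
delta_h_m = (v_before**2 - v_after**2) / (2 * G)
print(f"Altitude equivalent of 7 knots: about {delta_h_m * M_TO_FT:.0f} feet")
# Prints roughly 90 feet, which is how little room the trade-off actually bought.
```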

The only thing I could think to do was to run the procedure once more. I called out, “Again, gear up . . . check takeoff thrust . . . confirm engine-out.” This time, when the PNF checked the thrust settings, he found that the thrust on the two good engines was below the takeoff setting. He immediately manually increased the thrust to takeoff power and the airplane, finally, resumed a shallow climb. Even though the PNF had confirmed takeoff thrust the first time, and even though the T/O CLMP should have physically prevented any lower thrust settings, the thrust settings had mysteriously dropped to below the takeoff setting. Once we had climbed through 1,500 feet above ground level, I lowered the nose of the airplane and accelerated to a high enough speed to retract the flaps and slats, after which, according to procedure, I could finally call for the engine fire checklist. At that point, the engine was properly shut down and the fire extinguished. Our next tasks were to ask for a holding pattern and clearance to dump 60 tonnes of fuel to get our weight below the maximum allowable landing weight. Lastly, we needed to prepare for our one-engine-out approach and landing back at Narita. The entire flight took approximately 30 minutes.

“Is it even possible to design systems that can handle as-yet-unknown situations? And where does AI fit in?”

Ivan Belik: What a crazy experience! Did you ever find an explanation for the failure of that T/O CLMP system?

Lou: The auto-thrust manufacturer reported that their system had indeed reduced the thrust on the two good engines, yet it was supposed to be impossible for such a thing to occur below 1,500 feet. While I was relieved that the investigation had found a technical system failure as the cause of the incident, and not “pilot error,” I never did receive a satisfactory explanation for the failure. I’m curious: from your perspectives as systems design and AI experts, what could have led to such a dangerous situation?  

Possibility #1: Poor Systems Design 

Derrick: Here’s one thought from my domain. When designing information systems, two general design approaches have historically been used. The so-called “waterfall” method is an engineering-centric model that relies on pre-specified stages that are rigorously followed one after the other. In contrast, the “agile” approach involves rapid iteration and delivery of user-centric working versions of a product, with constant goal revision along the way. Agile techniques have influenced and borrowed from iterative, non-linear “design thinking” methodologies that have become very popular in recent years. There are pros and cons to different design approaches, but I was reminded by your story that airplane design is sometimes used as an analogy to support the need for waterfall designs—since not many people are willing to fly in early-iteration “design thinking” airplanes! While we don’t know the design processes used for the auto-thrust system, it’s probably safe to assume that some form of traditional waterfall approach was employed. A key drawback of this approach is that if something is missed during the design stage due to insufficient feedback from users, a system sometimes reaches production before critical weaknesses are identified. There are many information systems, failed and in-use, that suffer from this kind of design weakness.

Lou: I’d agree that the final product delivered is not always what pilots would prefer—especially when the manufacturer makes choices driven too heavily by economic considerations. But your description of design approaches raises another question. The systems developed over the past four decades, especially since the advent of the glass cockpit, are very helpful to pilots most of the time. But when abnormal or emergency situations arise, certain systems can quickly degrade to a level where they become a serious liability. I’m not saying that those developments have not contributed immensely to safety—only that most pilots would prefer a systems design philosophy that prioritizes helpfulness during the most difficult situations over a system that generates 20 or 30 aural and visual alerts and failure messages in those situations but does little to help pilots understand the relationships between the failures and their root cause. I also wonder: Is it even possible to design systems that can handle as-yet-unknown situations? And where does AI fit in?

Possibility #2: Insufficiency of AI 

Derrick: That’s a great question, Lou. I think a lot of people assume AI has magical problem-solving capabilities, but that may not be the case. Stanford Professor John McCarthy, who coined the term “artificial intelligence,” defined it quite simply as “the science and engineering of making intelligent machines, especially intelligent computer programs.” McCarthy’s definition mentions intelligence, but nothing about magical abilities or about “Skynet” taking over the world!

Ivan: It’s unfortunate that vague definitions and hype surrounding AI have proliferated. People seem to cope with complex technologies by black-boxing them and treating them as incomprehensible—as futurist Arthur C. Clarke noted, “Any sufficiently advanced technology is indistinguishable from magic.” An aircraft’s auto-thrust system does fit the general definition of AI in that it handles certain decision-making aspects of the flying process to assist the pilot, and thereby mimics intelligence. But the specific task it assists with is routine: to increase, decrease, or maintain thrust levels based on a set of known variables and clearly defined rules. This is exactly where AI shines today—in automating repetitive and time-consuming tasks. It does this using computer algorithms, analytical methods, predictive analytics, and decision-making models in ways that imitate, and in certain special cases outperform, humans.
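
To illustrate what “known variables and clearly defined rules” looks like in practice, consider the toy speed-hold controller below. It is a deliberately simplified sketch with invented parameters, not how any certified auto-thrust system is built, but it shows how a routine task reduces to a handful of explicit rules; nothing in it has to cope with situations its designers never anticipated.

```python
# Toy rule-based controller: hold a target airspeed by nudging thrust.
# Illustrative assumption only; real auto-thrust logic is far more sophisticated.
def adjust_thrust(current_speed_kts: float,
                  target_speed_kts: float,
                  current_thrust: float,
                  step: float = 0.02,
                  deadband_kts: float = 2.0) -> float:
    """Increase, decrease, or maintain normalized thrust (0.0-1.0)."""
    error = target_speed_kts - current_speed_kts
    if error > deadband_kts:          # too slow: add thrust
        return min(1.0, current_thrust + step)
    if error < -deadband_kts:         # too fast: reduce thrust
        return max(0.0, current_thrust - step)
    return current_thrust             # within tolerance: hold

# Example: 10 knots slow at 60 per cent thrust, so the rule nudges thrust up.
print(adjust_thrust(240, 250, 0.60))  # approximately 0.62
```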

Has AI Been Oversold?

Lou: So, AI is not as advanced as people assume?

Ivan: AI techniques are fundamentally dependent on traditional, well-established probabilistic and optimization approaches that are effective for solving well-understood, structured, data-driven problems. Once a problem is discovered and formalized in a clear, computable way, developing an AI solution becomes feasible. This dovetails with Derrick’s comment about “waterfall” design. However, teaching computers to solve poorly understood, unstructured, undiscovered, unanticipated problems that do not yet have a clear formulation is an incredibly challenging task.

Austrian-born robotics expert Hans Moravec made a fascinating observation: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” Solving complex but well-structured problems, such as performing well on intelligence tests, playing chess, or setting auto-thrust under normal conditions, is comparatively easy for AI, since these problems can be solved with traditional probabilistic and optimization methods. But developing AI solutions for undiscovered problems, such as responding to novel jetliner emergencies, is a very different kind of challenge, and one that machines find extremely difficult to handle as well as humans do.

Derrick: Lou, earlier you talked about systems that prioritize helpfulness in difficult and complex situations. Can you expand on that?

Lou: The tragic case of Air France 447 is a fitting example. This was an Airbus A330 that crashed into the Atlantic Ocean en route from Rio de Janeiro to Paris in 2009. The official accident report stated that a “temporary inconsistency between the measured airspeeds, likely following the obstruction of the pitot probes by ice crystals, led in particular to autopilot disconnection.” The incident began with the failure of simple but critical sensors, which then caused the autopilot and auto-thrust systems to disconnect, and the PF to fly the airplane manually, including the adjustment of thrust settings as required. Through a series of erroneous control inputs, the PF soon put the airplane into an aerodynamic stall. According to the accident report, one aspect of the succession of events that led to the accident was “the crew’s failure to diagnose the stall situation and, consequently, the lack of any actions that would have made recovery possible.” The report further added that “the startle effect played a major role in the destabilisation of the flight path and in the two pilots understanding the situation.”

Pilot error certainly played a major role in this accident, but there were other complications. Besides having to suddenly take over from automation that had reached the limit of its capabilities due to the sheer number of failures that followed from the loss of the pitot probes, the pilots were inundated with visual and aural warnings. The single most important of those warnings was the aural “STALL, STALL” warning that sounded time after time for four minutes before the crash. To quote from the accident report, “In an aural environment that was already saturated by the C-chord warning, the possibility that the crew did not identify the stall warning cannot be ruled out.” Portions of the cockpit voice recording reveal that the pilots were overwhelmed and confused:

“We still have the engines! What the hell is happening? I don’t understand what’s happening.” 

“Damn it, I don’t have control of the plane, I don’t have control of the plane at all!” 

Then the following exchange occurred when the captain—who had been out of the cockpit for his rest period in the crew bunk—returned:

“What the hell are you doing?” 

“We’ve totally lost control of the plane. We don’t understand at all. We’ve tried everything.” 

“What do you think? What do you think? What should we do?” 

“Well, I don’t know!”

Apparently the three pilots did not realize they were in a stall, and none of them mentioned the stall warning.

AF 447 is unfortunate evidence that the “ironies of automation” described by Lisanne Bainbridge 40 years ago remain unresolved. Bainbridge observed that when a human operator is relied upon to take over from an automated system that cannot complete a task, the operator faces a harder job than if the automation had never existed in the first place. The automation leaves the operator less proficient, less “in the loop,” and challenged by the sudden shift from monitoring to operating. As she put it, “By taking away the easy parts of the task, automation can make the difficult parts of the human operator’s task more difficult.”

So, I wonder: Can AI contribute to resolving these kinds of ironies of automation?

Derrick: Given what Ivan says about limitations in solving undiscovered or unanticipated problems, inserting “replacement” AI elements into the piloting equation sounds dangerous. A much less risky and more attractive vision for the future of AI in aviation might be to focus on assisting rather than replacing human decision-making—especially when critical sensors fail or when conflicting data is presented.

Ivan: Conventional AI will continue to provide effective solutions for overcoming many challenges, but soon AI in aviation and other areas is likely to be based more on the development of collaborative intelligence. We are talking about the merging of AI and human forces for augmenting each other’s abilities in problem solving.

Collaborative Intelligence 

Lou: The way you describe it, I can imagine AI continuing to make significant contributions—not by “taking over” complex flight operations but by keeping users (pilots) even more engaged and in the loop. The terms you used, such as “collaborative intelligence” and “augmenting abilities,” strike me as the optimal approach.

Ivan: I hesitate to say “optimal,” but I’d agree that we need to challenge the hype, while continuously exploring the benefits and possibilities that AI offers. Learning is a central feature of intelligence, human or artificial. 

“The airline industry became the first ultra-safe industry by diligently learning from its failures—flying on a commercial airliner today is 100 times safer than travelling by car. This is because the industry latches on to every opportunity to improve safety, even when it might lead to only a small step forward in safety.”

Derrick: It seems to me that learning from failure is something that the airline industry has done very successfully. Industry statistics show close to 40 million flights globally in 2019, after years of steady growth, while fatalities relative to distance travelled have been extremely low and trending downward. Is that an accurate characterization, Lou?

Lou: It’s very true. The airline industry became the first ultra-safe industry by diligently learning from its failures—flying on a commercial airliner today is 100 times safer than travelling by car. This is because the industry latches on to every opportunity to improve safety, even when it might lead to only a small step forward in safety. But mistakes and, sadly, tragedies do still occur.

What I’m taking from our discussion is that a human-centred design philosophy, the careful application of AI, and a dogged determination to learn from every failure, or near-failure, will allow us to progress beyond our wildest dreams. It will be fascinating to see how AI plays out in the coming years.

References 

  • Bureau d’Enquêtes et d’Analyses, Final Report on the Accident on 1st June 2009 . . ., July 27, 2012, https://bea.aero/docspa/2009/f-cp090601.en/pdf/f-cp090601.en.pdf.
  • Lisanne Bainbridge, “Ironies of Automation,” Automatica 19, no. 6 (1983): 775‒779, https://doi.org/10.1016/0005-1098(83)90046-8.
  • Arthur C. Clarke, Profiles of the Future (London: Hachette UK, 2013).
  • Michael Haenlein and Andreas Kaplan, “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence,” California Management Review 61, no. 4 (2019), https://doi.org/10.1177/0008125619864925.
  • John McCarthy, What Is Artificial Intelligence? (2007), http://faculty.otterbein.edu/dstucki/inst4200/whatisai.pdf.
  • “Materializing Artificial Intelligence,” Nature Machine Intelligence 2 (2020): 653, https://www.nature.com/articles/s42256-020-00262-2.
  • “Number of Flights Performed by the Global Airline Industry from 2004 to 2022,” Statista, 2022, accessed December 15, 2023, https://www.statista.com/statistics/564769/airline-industry-number-of-flights.
  • “Number of Worldwide Air Traffic Fatalities from 2006 to 2021,” Statista, 2022, accessed December 15, 2023, https://www.statista.com/statistics/263443/worldwide-air-traffic-fatalities.
  • Ian Savage, The Economics of Transportation Safety (Northwestern University Transportation Center, 2013), https://faculty.wcas.northwestern.edu/ipsavage/MosesLecture.pdf.
  • Leigh Thompson and David Schonthal, “The Social Psychology of Design Thinking,” California Management Review 62, no. 2 (2020), https://journals.sagepub.com/doi/10.1177/0008125619897636.
  • James Wilson and Paul R. Daugherty, “Collaborative Intelligence: Humans and AI Are Joining Forces,” Harvard Business Review, July‒August 2018, https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces.

About the Authors

Louis Bielmann is an Assistant Professor of Finance at Trinity Western University’s School of Business. He is currently researching the state of retirement funding in the United States.

Derrick Neufeld is an Associate Professor of Information Systems at the Ivey Business School. His research explores the effects and effectiveness of technology mediation on individuals and organizations.

Ivan Belik is an Associate Professor at the Norwegian School of Economics and is affiliated with the research centres STOP and DIG. Belik’s research is focused on AI and business intelligence.
