The Sensor: Legal Insights into Autonomous Vehicles

PERSPECTIVE

Cleared for Take-Off — Autonomous Technology and Aviation Litigation

Development of Automation in Aviation and Public Perception

Autonomous technology in aviation is not new. The Hewitt-Sperry Automatic Airplane first flew in 1917 and was designed as a pilotless aircraft to deliver explosives during World War I. In 1933 the Winnie Mae circumnavigated the globe in seven days, her pilot assisted by early autopilot technology. From those early flights the aviation industry has propelled itself further and faster: traditional auto flight systems, including autopilot and autothrottle, are now the norm, and cockpit automation is a realistic discussion ranging from completely autonomous UAV systems to pilotless commercial aircraft. Developing “deep learning” or “machine learning” technology may not be holding the industry back, but the public’s perception of aviation automation and discomfort with robotic aircraft may be – factors to consider if faced with litigating an autonomous aviation case before a jury. Arguably there is legitimacy to the public’s concern about rapid advancement in aviation automation. The interface between technology, human factors, and meteorology – all important components of flying – creates a dynamic matrix that is rife with risk. Autonomous technology only increases the perceived uncertainty as to how that risk should be assessed.

An extreme example of how aviation risk and automation intersect occurred one morning in heavy rain, when a twin-engine aircraft steadily descended towards the ground until it impacted a farmer’s field, exploded, and killed all 21 people on board. Rain and turbulence were factors, but not necessarily the primary causes of the accident. The aircraft, a C-23B+ Sherpa, was outfitted with autopilot technology, and the alleged failure of that system was the subject of contentious litigation against the designer and manufacturer of the autopilot (Ferguson v. Bombardier Services Corp., 244 F. App’x 944 (11th Cir. 2007)). Another example of aviation risk and automation, albeit an older one, arose when an autopilot system allegedly malfunctioned during flight, leading to abrupt pitch adjustments and passenger injuries (Nelson v. American Airlines, Inc., 70 Cal. Rptr. 33 (Cal. Ct. App. 1968)). The passengers injured on board that American Airlines flight were, however, comparatively lucky: about one year later, on March 1, 1962, American Airlines Flight One, a 707-123B Astrojet, crashed into Jamaica Bay, Queens, two minutes after takeoff due to a problem with the autopilot rudder servo. As a result of the failure of that autonomous technology, 95 lives were lost.

However, the autopilot accidents described above occurred well before the advent of the processing power and technology available today. AI development in aviation may soon eclipse traditional software development, and this change has opened the possibility of flight control machine learning based on video feeds, GPS, sonar, and gyros. That machine learning, in turn, lends itself to autonomously controlled altitude, attitude, heading, engine performance, and many other aspects of flight with minimal to no pilot input. Flying autonomous taxis and electric vertical take-off and landing (VTOL) vehicles are no longer vague concepts, thanks to the innovative steps taken by pioneering manufacturers and operators. As well, newly available tracking and image recognition algorithms can adapt in real time, allowing UAVs and drones to fly and hold on target even if winds aloft affect their position or obstacles below obstruct their view. Just as humans adapt their decision-making in response to learning, new AI systems in aircraft and UAVs can learn and grow within their host mode of transportation to the point that the system can operate without any pilot input at all. Parallel to the exciting adoption and development of aviation AI is the increasing sophistication of natural language processing, allowing for more continuous communication between human and system, whether the human is on board or providing remote control from the ground.

Auto Flight Technology Failures & Traditional Product Liability Litigation

Although the technology development is new and impressive, the older auto flight systems for manned aircraft were designed so that if the autopilot or autothrottle system failed, a pilot would not be prevented from manually shutting it down. But if the system is completely unmanned, or deep learning AI interferes with the traditional ability to override, what happens next? Lessons from past autopilot litigation, particularly in the United States, can provide valuable guidance for those introducing new autonomous aviation technology as to how courts might assess culpability if something goes terribly wrong.

In Ferguson v. Bombardier Services Corp., described briefly above, representatives of the deceased passengers on board the C-23 Sherpa that crashed in a field one rainy morning brought a claim against the autopilot designer and manufacturer for its purported role in the fatal accident. The plaintiffs alleged that, among other contributing factors, the autopilot system improperly went into “torque limiting mode” (restricting the effectiveness of any pilot input) and that no annunciator was installed to warn the pilot when torque limiting mode engaged. There was also an allegation that the autopilot system was incorrectly installed and that this led to a cable jam that prevented recovery once the aircraft’s dive towards the ground began. However, the plaintiffs’ expert testimony about autopilot defects was excluded at trial. As well, the evidence before the court suggested that the primary causal factors for the loss of control included the pilot’s decision to leave the cockpit to go to the bathroom, which shifted the weight of an aircraft that was already loaded outside its centre of gravity limits, at a time when the aircraft was flying in turbulence and wind shear.

In Nelson v. American Airlines, Inc., also described above, the plaintiffs pursued damages against American Airlines after a passenger was thrown about by a sudden and unexpected movement: the autopilot overcompensated, causing the aircraft to nose down rather than stay level. Although the autopilot was disengaged and manual control resumed after the error, the sudden pitch change was linked to the horizontal stabilizer trim, so passengers in the rear of the aircraft experienced a more severe porpoising motion. The aircraft logbooks indicated that altitude control issues had been identified the previous day; although not serious, the cause of that problem was not known. As a result, a component part of the autopilot was replaced and the equipment tested as a precaution, but there was no flight test between the equipment replacement and the subject flight. At trial the airline was found not liable for the autopilot malfunction, but that decision was overturned on appeal. The appeal court determined that errors may have been made in the installation of the replacement autopilot component and that earlier routine maintenance on the autopilot was either incomplete or improper. Of note, the failure to conduct a test flight in these circumstances was not considered negligent based on the governing regulatory requirements at the time.

Both of these court cases focused on allegations of negligent design, manufacture, installation, and maintenance of the auto flight systems. These should remain live issues for component part manufacturers, suppliers, and integrators of autonomous aviation technology for both manned and unmanned flight. Aircraft maintenance records and logbooks will continue to be scrutinized, as will any autonomous developer’s foresight into how their technology will react to human input and how override systems are incorporated. But as aviation is multi-faceted, even if the design or manufacture of autonomous technology could be construed as a contributing factor to an accident, elements such as human factors, weather, and weight and balance will remain important considerations for the court.

Additionally, manufacturers could face court criticism if they fail to provide adequate guidance on the maintenance of their autonomous technology, or if they fail in their duty to warn of its inherent risks.

“Duty to Warn” and “Duty to Train” on Autonomous Technology

A heightened concern for those developing and adopting autonomous aviation technology may be their common law “duty to warn” of the risks of autonomous technology. Recent investigations of aircraft accidents involving autonomous technology, and the court cases to date, have focused on whether there is a subsidiary “duty to train” pilots and end users.

Courts can determine that a product is defective if a manufacturer fails to include appropriate warnings and instructions for its safe use, maintenance, or upkeep. Aligned with a manufacturer’s duty to warn is the doctrine of educational malpractice, which covers claims founded on an unreasonable or poor quality of education leading to a loss. A “duty to train” as a separate common-law duty has not yet received widespread acceptance. However, the “duty to train” concept may be accepted and broadened by the courts with the development of AI and complex autonomous technology in aviation, given that the end purposes of both training and autonomous technology are aligned: enabling humans and machines to act independently.

One recent product liability case in which training on the pilot-technology interface was challenged involved the crash of a Cirrus SR22 aircraft in Glorvigen v. Cirrus Design Corp., 816 N.W.2d 572, 583 (Minn. 2012). This fatal accident involved a four-seat, single-engine private aircraft flying in marginal VFR weather conditions. A post-crash investigation showed no aircraft or engine problems. Pilot error and spatial disorientation were significant factors, but the plaintiffs also pursued the manufacturer of the SR22 for its alleged failure to fulfill its duty to warn by not providing adequate training on the use of the autopilot (despite the regulations not requiring the manufacturer to offer this type of training). The theory was that if the manufacturer had provided sufficient training for new SR22 owners, including how the autopilot can assist in getting out of poor weather conditions, the spatial disorientation and the accident would not have occurred. The manufacturer agreed that it had a duty to warn of dangers associated with its SR22, but successfully argued on appeal that this duty did not extend to training pilots to proficiently fly the aircraft. Of note, educational malpractice claims are barred in the state where the case was brought. In other jurisdictions, a claim involving a manufacturer’s duty to warn could successfully include allegations of improper training on autonomous technology if an aircraft accident is preceded by problems involving the pilot-technology interface.

Perhaps a more globally well-known allegation of a failure to train follows the July 2013 failed landing of Asiana Airlines Flight 214 at San Francisco International Airport. That morning, as a result of an improper descent, a Boeing 777-200ER collided with the runway, resulting in many injuries and three fatalities. In addition to claims against the airline, claims were commenced against Boeing on the basis that it failed to properly train the pilots on the auto flight systems of the Boeing 777. Among the primary concerns in this accident were that the attempted landing of Flight 214 was conducted by a pilot who had flown a limited number of training flights on the aircraft type, that the supervising pilot was overseeing his first flight in that role, and that the cockpit setup may have added to the confusion as to how to use and interpret the auto flight system during descent. This accident litigation has not yet resulted in a court decision promoting the notion that there is a positive “duty to train” on autonomous technology.

The Current Forecast on Autonomous Technology Litigation in Aviation’s Future

Claims against manufacturers for improper design, manufacture, and installation are expected, although such aviation claims could be more complex and cumbersome to litigate given the sophistication of the systems involved. With the increasing development and adoption of autonomous technology in aircraft systems, governing regulators may face pressure to ensure that operators are properly trained to monitor, diagnose, and maintain the new systems. Where the autonomous technology requires a human interface, there will still be an expectation that pilots keep their manual flying skills up to standard should the auto flight systems fail or be misinterpreted. Further, the governing regulations requiring pilots to keep a proper and constant lookout despite reliance on autonomous technology will likely stand. Because increasingly sophisticated autonomous technology still demands apt flying skills and appropriate pilot vigilance, heightening the intricacies of human interaction, manufacturers should expect significant scrutiny of how they discharge their duty to warn of the inherent dangers of their technology and whether they appropriately trained the end user on the use of their autonomous aviation technology.

By: Katherine Ayre
