Ⅰ. Introduction
Over the past several decades, advancements in aviation automation have significantly enhanced the safety and precision of aircraft operations. Systems such as the auto-flight system, autothrottle (A/T), flight management system (FMS), and flight-envelope protection have strengthened operational efficiency and safety while substantially reducing pilot workload. However, major accident reports consistently point out that although automation reduces human error, it simultaneously introduces new forms of risk—referred to as automation vulnerability. Representative accidents such as Air France Flight 447, Asiana Airlines Flight 214, China Airlines Flight 140, and Aeroflot Flight 593 collectively illustrate that automation failures, mode confusion, overreliance on automation, and degradation of manual-flying skills functioned as common contributing factors. Although each accident had its own technical elements, the pilots’ insufficient understanding of automation, monitoring failures, and delayed recognition of abnormal conditions ultimately led to catastrophic outcomes (BEA, 2012; Flight Safety Foundation, 2013; Kim, 2023).
In particular, when pilots do not clearly understand the operating modes and limitations of automation during abnormal situations, human-factor issues—such as weakened manual-flying proficiency, failures in energy management, deficiencies in crew resource management (CRM), and loss of situational awareness (SA)—can emerge in a compounded manner, potentially resulting in an accident (Kim, 2023).
Although automation systems are designed to reduce pilots’ judgment and control workload, the growing complexity of terminology, functions, and inter-mode interactions has paradoxically increased the cognitive burden on pilots. During automation malfunctions or aircraft upset conditions, pilots have repeatedly failed to maintain sufficient situational awareness, misinterpreted system logic, and responded too late.
Those issues prompted aviation authorities to overhaul pilot-training systems. Since 2014, the International Civil Aviation Organization (ICAO) has recommended a competency-based training (CBT) framework that includes upset prevention and recovery training (UPRT), automation-mode monitoring, and enhanced manual-flying training. The Federal Aviation Administration (FAA) and the European Union Aviation Safety Agency (EASA) actively adopted those recommendations (ICAO, 2014; FAA, 2020).
This study conducts a comprehensive analysis of four representative automation-related accidents to:
1.1.1 Identify common causes and patterns in major automation-related aircraft accidents;
1.1.2 Examine how insufficient mode awareness, reduced situational awareness, and weakened manual-flying capability contributed to accident progression; and
1.1.3 Propose improvement strategies for future pilot-training systems, aircraft-automation design, and organizational safety management.
On July 6, 2013, Asiana Airlines Flight 214, a Boeing 777-200ER, entered a low-energy flight (LEF) state during a visual approach to San Francisco International Airport (SFO) and subsequently struck the seawall. The NTSB identified “inadequate pilot understanding of automation modes and failure to monitor airspeed” as the primary causal factors (NTSB, 2014).
Under specific conditions, the Boeing 777 auto-throttle transitions into HOLD mode, during which it no longer controls airspeed. According to the NTSB report, the pilots were unaware that the auto-throttle system was not actively maintaining speed (NTSB, 2014).
Subtle indications of mode changes on the Flight Mode Annunciator (FMA) have long been criticized as a limitation of automation design. Although the mode transition was displayed on the FMA, the pilots failed to monitor it (NTSB, 2014).
The NTSB found that the pilots had an inadequate understanding of the aircraft’s auto-flight system. Increased dependence on automation contributed to degradation in manual-flying and energy-management skills.
The pilot monitoring (PM) failed to detect the decreasing airspeed, which directly contributed to the accident. Deficiencies in monitoring, callouts, and CRM—combined with limited experience managing automation during visual approaches following the transition to the B777—were noted as contributing factors (NTSB, 2014).
Unlike Airbus aircraft, the Boeing 777 lacked a dedicated low-energy warning system, which the NTSB recommended as a safety enhancement (NTSB, 2014).
Insufficient understanding of automation modes can rapidly exacerbate abnormal situations. During visual approaches, overreliance on automation may increase risk rather than reduce it. Manual-flying skills, energy-management proficiency, and vigilant mode-monitoring capabilities are therefore even more critical in the era of advanced cockpit automation.
On April 26, 1994, China Airlines Flight 140, an Airbus A300-600R, crashed during its approach to Nagoya Airport after an inadvertent activation of TO/GA mode and conflicting autopilot re-engagement logic caused the aircraft to enter a full stall (JTSB, 1996).
1.3.2.1 Erroneous Activation of the TO/GA Switch. The first officer unintentionally activated the Go-Around (GA) mode.
1.3.2.2 Persistence of GA Pitch-Up Logic after Autopilot Re-engagement. Although GA mode appeared to be disengaged, the pitch-up logic associated with go-around operation remained active internally.
1.3.2.3 Conflict Between Pilot Inputs and Autotrim. Pilot control inputs conflicted with automatic trim commands, progressively worsening the stabilizer mis-trim condition.
1.3.2.4 Thrust Increase Triggered by Alpha-Floor Function. Activation of the alpha-floor protection commanded maximum thrust, further increasing nose-up pitch and leading the aircraft into a stall.
Although GA mode appeared to be canceled, the internal pitch-up logic persisted, revealing a structural deficiency in mode-transition design. The absence of warnings for trimmable horizontal stabilizer (THS) mis-trim was also identified by the JTSB as a structural problem (JTSB, 1996).
The crew demonstrated insufficient mode-awareness capability and inadequate training for unexpected auto-trim behavior. They did not understand that auto-trim inputs could continue to accumulate even while manual override was applied. In addition, training on preventing unintended autopilot engagement was found to be insufficient (JTSB, 1996).
The 1994 crash of Aeroflot Flight 593, an Airbus A310, resulted from a combination of cockpit-discipline violations, failure to recognize partial autopilot disengagement, and inability to recover from an abnormal attitude (IAC, 1994).
1.4.2.1 Violation of Cockpit Discipline
The captain’s son, who was not a qualified crew member, manipulated the controls, causing the autopilot to disconnect in the aileron (roll) channel (IAC, 1994).
1.4.2.2 Failure to Recognize Partial Disconnect. The flight crew did not detect the partial disengagement of the autopilot (IAC, 1994).
1.4.2.3 Failure to Recover from an Abnormal Attitude. After entering a spiral dive, the crew reacted late and applied excessive inputs, ultimately failing to recover the aircraft. This accident clearly demonstrates the necessity of UPRT (IAC, 1994).
In December 1972, Eastern Air Lines Flight 401, a Lockheed L-1011 TriStar, crashed into the Florida Everglades after the entire flight crew became preoccupied with a landing-gear indicator problem, failed to notice a partial autopilot disengagement, and did not detect the aircraft’s gradual descent (NTSB, 1973).
1.5.2.1 Loss of Situational Awareness. The flight crew became excessively fixated on the malfunctioning indicator light and consequently failed to maintain situational awareness (NTSB, 1973).
1.5.2.2 Failure to Recognize Transition into Autopilot CWS (Control Wheel Steering) Mode. When ALT HOLD disengaged, moving the control yoke allowed the pilot to set a desired pitch or heading; releasing the controls caused the aircraft to maintain that attitude under CWS mode. The crew failed to recognize that the aircraft had transitioned into CWS mode and was gradually descending (NTSB, 1973).
Ⅱ. Comprehensive Comparison and Analysis
A comparative review of the four major automation-related accidents revealed five recurring patterns, as summarized in Table 1.
In all four accidents, pilots failed to recognize changes in automation modes (FMA), partial disconnection of the autopilot/autothrottle (AP/AT), or mode conflicts in a timely manner.
This was largely attributed to opaque mode-transition logic, insufficient warnings and displays, and structural limitations in mode-transition transparency. Consequently, pilots often continued flight without accurately understanding which mode the aircraft was operating in or which functions were active or disengaged. Such misperceptions led to failures in maintaining airspeed and altitude, as well as incorrect assessments of aircraft energy state.
As pilots became increasingly accustomed to automation-oriented flight operations, essential manual-flying skills deteriorated—particularly skills related to stall recovery, low-speed/high-AOA recognition, high-altitude upset handling, and energy management using thrust–pitch–trim coordination.
Insufficient or absent UPRT training directly contributed to delayed stall recognition and improper execution of recovery procedures.
A continuous breakdown in situational awareness was a common factor across all accidents.
The following patterns were repeatedly observed: Failure to detect changes in speed—either deceleration or acceleration—during what appeared to be normal approaches. Altitude-monitoring failures or misinterpretations. Misjudgments of actual aircraft energy, trim, and pitch states caused by automation’s “hidden states.” Monitoring lapses related to increased workload, including menu navigation and button manipulation.
These findings clearly demonstrate reduced vigilance not only in monitoring automation states but also in monitoring the aircraft’s actual flight condition—an essential pilot responsibility.
Common CRM breakdowns were observed in all four accidents: Inadequate sharing of abnormal cues between captain and first officer. Failure to cross-check critical indicators such as decreasing speed or abnormal trim. Delayed communication. Hesitancy to intervene due to authority gradients.
As a result, the crew experienced confusion about “who should do what” during automation malfunctions, and team-level defensive layers failed to function effectively.
Several persistent design limitations contributed to accident progression: FMA information was not displayed in an intuitive manner, delaying pilots’ recognition of mode changes. Mode-transition logic lacked clarity, making automation behavior difficult for pilots to predict. Insufficient warnings for partial autopilot disconnects allowed pilots to unknowingly continue flight without fully functioning automation. Absence of mis-trim warnings allowed recoverable conditions to deteriorate into unrecoverable ones.
Across all accidents, the same training deficiencies repeatedly emerged: Insufficient UPRT, resulting in failures to recognize or recover from stalls and upsets. Inadequate training on mode awareness and prediction of automation behavior. Reduced ability to intervene manually during unexpected automation errors or mode transitions. Lack of systematic training in energy management and integrated judgment of altitude, airspeed, and thrust.
Common human-factor issues observed in all four accidents included: Significant degradation of situational awareness. Overconfidence in automation, assuming that systems would maintain normal flight. Disrupted monitoring of key parameters due to distraction or attentional tunneling. Failure to perform basic cross-checks due to workload, inattentiveness, or excessive self-confidence. Inadequate vigilance in supervising automation despite being in nominal flight conditions.
Those issues collectively illustrate how human-factors vulnerabilities can magnify the consequences of automation-related errors.
Ⅲ. Conclusion
This study analyzed four major automation-related aircraft accidents (Air France Flight 447, Asiana Airlines Flight 214, China Airlines Flight 140, and Aeroflot Flight 593), confirming that the interaction among automation, pilots, and training systems critically shapes aviation safety.
The accident cases collectively demonstrate that although aviation automation significantly enhances operational efficiency and safety, it simultaneously introduces new risks—such as mode confusion, overreliance on automation, and degradation of situational awareness. These patterns indicate that accidents become far more likely when complex automation design, insufficient training, and human-factors vulnerabilities converge.
First, deficiencies in mode awareness were identified as primary triggering factors across all accidents. Pilots failed to detect AP/AT mode changes, partial disconnects, and mode conflicts, leading to airspeed/altitude monitoring failures and misjudgment of aircraft energy states. These issues are closely related to opaque automation logic, insufficient warning systems, and un-intuitive mode displays.
Second, degraded manual-flying and energy-management skills contributed significantly to accident severity. In automation-oriented operating environments, flying skills such as stall recovery, upset handling, and integrated thrust–pitch–trim energy control were inadequately trained. Insufficient UPRT directly led to delayed stall recognition and incorrect recovery actions.
Third, loss of situational awareness and insufficient cross-monitoring were common across all cases. Pilots failed to recognize automation’s hidden states and repeatedly missed critical cues such as altitude and airspeed deviations. Combined with communication delays, distraction, and authority-gradient issues, these CRM deficiencies undermined the final layers of defense.
Fourth, the findings underscore the need for a comprehensive reassessment of pilot-training systems. UPRT, mode-awareness training, automation-error response, and manual-flying instruction are not optional; they are essential. Alignment with ICAO, FAA, and EASA competency-based training (CBT) frameworks must be strengthened further.
Lastly, although automation forms the backbone of modern aviation safety, it cannot replace pilots. Rather, the demand for expert supervision, judgment, and timely intervention has increased. Pilots must act not merely as users of automation but as supervisory controllers who understand system limitations and respond decisively during abnormal situations. The future of aviation safety lies not in automation alone, but in the effective integration of human and automated systems.
While this study proposes improvements in training, design, and operational structure, it is limited by the absence of simulator-based performance data and variations across airline pilot training programs. Future research should incorporate broader automation-error data sets, quantitative analyses of pilot performance, and comparative studies across various aircraft types.
In conclusion, ensuring safety in an era of growing automation requires more than technical solutions—it demands an integrated safety ecosystem encompassing design, training, and organizational culture. Aviation safety in the automation era can be enhanced only when pilot expertise and automation technology function in a mutually complementary manner.