As anyone who has sat through a dud movie will attest, there is a big difference between looking at something, paying attention, and being truly engaged. This becomes a problem for pilots monitoring a flight because, most of the time, the script is familiar, uneventful, and has a happy ending. It is a particular problem in modern automated flight, where monitoring matters as much as, if not more than, actually manipulating the controls.
The role of pilots is under increasing scrutiny in an environment of ever more sophisticated and ultra-reliable aircraft system automation. Professor David Woods of Ohio State University, speaking at the 58th Air Safety Forum in 2012, described modern automation systems as being of ‘cumulative, creeping and inadvertent complexity’.
Modern aircraft systems are increasingly reliable: Boeing, for example, ‘want to have amazingly reliable aircraft’, designing procedures based on a 10⁻⁵ failure rate. Most of the time, Woods says, ‘everything looks super smooth when it works … until it doesn’t’. QF32’s multiple system failures occurred at a rate of 10⁻¹³, something so unprecedented we had to coin the term ‘black swan’ to describe such failures. Therein lies the dilemma: how best to equip pilots to manage such events when, most of the time, the system works exactly the way it is designed to?
The roles of captain and co-pilot on the flight deck transitioned first to pilot flying and pilot not flying, and then to pilot flying and pilot monitoring. These changes are not mere semantics, but an attempt to recognise the shifting role of pilots. Dr Wayne Martin, a B-777 pilot and Griffith University aviation academic, says the ‘jump from pilot not flying to pilot monitoring coincided with a massive increase in automation, so that monitoring becomes a specific role for one pilot and an inherent part of the role of pilot flying’. According to the US Federal Aviation Administration (FAA) in 2003, ‘it makes better sense to characterise pilots by what they are doing, rather than by what they are not doing’.
Recent scrutiny has come about especially because, as Martin explains, ‘poor monitoring has been identified as one of the precursors to loss of control accidents’. The National Transportation Safety Board’s (NTSB) Robert Sumwalt, opening the ‘Pilot monitoring in today’s modern flight deck’ session at the Air Line Pilots Association, International’s 59th Air Safety Forum in July 2013, said ‘monitoring is a problem that never went away … having been identified in a 1994 NTSB study where poor monitoring was an issue in 84 per cent of accidents’, and being described there as an ‘issue ripe for improving safety’. Helena Reidemar, ALPA’s human factors director and an airline pilot, reinforced this by quoting Line Operations Safety Audit (LOSA) data revealing ‘pilots with poor monitoring skills are at least twice as likely to make a mistake as are pilots who monitor effectively’.
The key words here are ‘monitor effectively’. What do they mean? Recent workshops and discussions have focused on the urgent need to define the role of pilot monitoring. Pilots are typically told what to monitor, but are given little guidance on how to monitor. In April 2013, the UK Civil Aviation Authority (CAA) released Monitoring Matters, guidance on defining and developing good monitoring skills; and in November 2014, the active monitoring working group, comprising pilot, airline, regulatory, association and manufacturer representation, released its final report: A Practical Guide for Improving Flight Path Monitoring. Understandably, there is considerable crossover between the two, particularly in their identification of the necessity for active monitoring. Captain Chris Reed from JetBlue Airways, a member of the working group, explains that ‘active monitoring is looking for something, not just looking at something’.
The UK CAA and Flight Path Monitoring reports both focus on what the PM should be doing at the critical phases of flight, which Flight Path Monitoring classifies as ‘areas of vulnerability’. However, the CAA’s report provides a more detailed breakdown of types of monitoring, acknowledging that human beings have vulnerabilities when it comes to maintaining focus on unchanging tasks: in low- or high-stress situations, and when they are fatigued, distracted, complacent or bored. Key Dismukes, a NASA human factors expert, said ‘The human brain just isn’t very well designed to monitor for an event that rarely happens … we’re not well designed to monitor for a little alphanumeric on the panel, even if that alphanumeric tells us something important’. And according to Helena Reidemar, ‘The human brain filters out information it considers unchanging’.
The CAA’s categorisation of monitoring recognises these human limitations, advocating certain types of monitoring activity for different phases of flight.
- Passive monitoring – keep an eye on, maintain regular surveillance, listen to
- Active monitoring – cross check, oversee, report on
- Periodic monitoring – check over a period of time
- Mutual monitoring – cross check, watch over, oversee, report on
- Predictive monitoring – advise, urge
In Annex D these monitoring activity types are mapped across various phases of flight, and prioritised according to how critical the procedure is. For example, adherence to minimum safe altitude (MSA) calls for predictive monitoring: monitoring the aircraft altitude relative to the MSAs shown on the flight plan, alerting the pilot flying (PF) to any perceived hazard, and advising the PF when high MSAs are active and for how long.
This has parallels with the Flight Path Monitoring report’s ‘areas of vulnerability’. As the report explains: ‘To perform effective flight path monitoring during periods of high workload and increased vulnerability to flight path deviations, it is imperative that pilots predict when and where these periods will occur and prepare for them.’
The Flight Path Monitoring report makes 20 recommendations to improve flight path monitoring, responsibility for which rests primarily with pilots and airline operators. However, as our feature on the human-machine interface discusses, pilots and airline operators are part of a complex system which includes manufacturers and aircraft design. As Wayne Martin says, the system needs to provide information ‘specific enough to be useful, but brief enough to be comprehensible’. The aviation system relies on pilot mitigation skills: automation design needs to incorporate human factors principles, placing pilots at the heart of that system so that they can do what they do best.