All over the place


Adrian Park analyses the 2011 crash of Manx2 Flight 7100.

Are we safe?

If this question had been asked of the crew of flight 7100, a Metroliner flying from Belfast to Cork, Ireland, on 10 February 2011, what would the answer have been? Variants of the following are likely (cue a slightly perplexed flicker across the faces of said crew): ‘well, we’ve been deemed competent by our company’s check and training captains’, or ‘we’ve had no recorded defects on this aircraft in the last three months’, or ‘well, the regulator is happy enough with us’, and even, the old classic, ‘we have never had an accident’. All true enough, but were they safe?

If only that question could have been asked (and answered) with the benefit of hindsight. Flight 7100 would attempt three approaches to Cork in conditions below minima, the third ending in an inverted crash landing. During this final approach, the power levers were retarded (either intentionally or inadvertently) into the beta range, causing asymmetric thrust that flipped the aircraft onto the runway and killed six of the 12 occupants.

If accident investigators could have gone back in time and challenged the crew and managers of flight 7100 before take-off, they could have said: ‘Actually, no, you are not safe, and this is why: your pilot in command (PIC) has spent almost all of his four years of flying as a co-pilot.

‘He is newly promoted as a PIC, has only 25 hours in command and is about to operate in conditions even a seasoned pilot would find challenging. Yes, he is diligent and hardworking, but your own internal safety audit has stated that the weather conditions in the area require experienced crews.

‘Your PIC has not received the requisite command training. The co-pilot is also hardworking and diligent, but has not received the requisite line checks; and both crew members are significantly fatigued. This is in breach of your own policy, which is written in Spanish and which your English-speaking crews are unable to read.

‘You are not safe because crews have developed a culture of “holding over” defects as a result of having no allocated line engineer. There is in fact a defect, an asymmetric beta-thrust differential between the engines, which will reduce the time the crew has to react when the beta range is accidentally activated during the accident landing.

‘You are not safe because although the “regulator is happy”, they are in fact ignorant. Your aircraft is operating on transport routes under a Spanish air operator’s certificate (AOC), supervised remotely by the operating company. Each of the regulatory authorities believes the other is maintaining oversight, and the audits that have been conducted have been assessed as “superficial”.

‘You are not safe because your organisation and your regulators have set the preconditions for you the crew to make an accident-causing decision.’

Are we safe? I often ask this apparently simple question when kicking off a human factors course. Imagine a simple slide: black background, bold font, the question gleaming brightly on the screen. Deliberate pause for two or three seconds. Scan the faces. Flickers of perplexity… Then the silence is broken, often with ‘well, we haven’t had a crash…’ Deliberate pause again.

‘But are we safe?’

Then, probably on realising we are supposed to be proactive about safety, the next response is: ‘well, I guess we can always do better’. I press again. ‘Yeah, but are we safe right now?’ Without waiting for an answer, I click to the next slide and ask, ‘well, is this safe?’ It’s a YouTube clip of a fire truck traversing floodwater so deep that the water level is only a few inches from the top of the windscreen (the wipers moving back and forth underwater are an especially dramatic touch). The answers are clear and strong. ‘No way, that is not safe, that’s dangerous!’ And yet as the clip continues to roll, the truck emerges from the flood crossing unharmed, water streaming off its chassis, and races off to wherever it was going in the first place.

‘Well, no one is hurt, and the truck is fine, so why is it unsafe?’ This of course leads into the inevitable discussion about policies surrounding flood crossings, but the essential question remains: what does it mean to be ‘safe’? Dictionary definitions are not very helpful, because most simply state something like ‘protected from harm or danger’. What seems ‘safe’ to one person is not safe to another. So how can we define it usefully?

In the case of flight 7100 the aircraft and crew were ‘safe’ enough to be ‘signed off’ by numerous regulators, managers and trainers, even though what they were really signing off was a very unsafe inverted collision with the ground. This is a sobering reality check for pilots, managers and regulators everywhere: how many of us say or think we are safe, never knowing how close we’ve come to the very unsafe reality of an accident?

Again, this is the essence of the problem. ‘Safe’ is either a fickle term easily panel-beaten into shape by commercial pressures, or it is an abstract term easily ignored into irrelevance by the unreality of an accident that seems so far away.

Since I was a lumberjack before I was a pilot (no lumberjack songs, please), I need a definition that ‘works’: a working concept of ‘safeness’. Safety guru Sidney Dekker tells us most accidents are the emergent result of complex variables that in and of themselves appear innocuous (i.e. ‘safe’), but together unexpectedly cause an accident. Being able to quantify ‘safeness’ (albeit imprecisely) in emergent rather than singular terms, and to do it in a way someone can get their head around, is really important.

So this is how I answer the ‘are we safe?’ question (which may or may not be helpful to others). We are ‘safe’ when the ‘safety-power available’ to an organisation is greater than the ‘safety-power required’. In the case of flight 7100, organisational preconditions were such that the safety-power required (weather complexity, safety culture etc.) far exceeded the safety-power available (crew experience, training, fatigue-free alertness etc.).

So what do I mean by ‘safety-power available’ and ‘safety-power required’? Pilots will immediately recognise where my guiding metaphor comes from: basic aerodynamics.

A guiding metaphor: safety-power available

We are taught that an aircraft will fly provided it is operated so that the ‘power available’ (from the engine or engines) is always greater than the ‘power required’ to overcome the various types of aerodynamic drag.

We know an aircraft at slower speeds requires higher power because of induced drag. This ‘power required’ reduces as airspeed increases, until parasite drag begins to dominate and the power required increases again. Operating safely means operating anywhere on the power curve where an adequate margin exists between power available and power required. If the laws of safety have anything to do with the laws of physics, then we can say an organisation should always operate in the area where there is an ample margin between safety-power available and safety-power required.
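To make the curve concrete, here is a minimal sketch in Python. Every figure in it is invented for illustration: the drag coefficients, the flat power-available line and the margin threshold are hypothetical and are not taken from the Metro III or any other aircraft.

```python
# Illustrative power curves: 'power required' has an induced-drag term that
# dominates at low speed (falling as ~1/v) and a parasite-drag term that
# dominates at high speed (rising as ~v^3). All numbers are hypothetical.

K_INDUCED = 15000.0      # hypothetical induced-drag coefficient
K_PARASITE = 2e-5        # hypothetical parasite-drag coefficient
POWER_AVAILABLE = 260.0  # hypothetical flat power-available line (kW)
REQUIRED_MARGIN = 40.0   # hypothetical minimum acceptable margin (kW)

def power_required(v: float) -> float:
    """Hypothetical power required (kW) at airspeed v (knots)."""
    return K_INDUCED / v + K_PARASITE * v ** 3

for v in range(60, 241, 20):
    margin = POWER_AVAILABLE - power_required(v)
    status = "adequate" if margin >= REQUIRED_MARGIN else "inadequate"
    print(f"{v:3d} kt: required {power_required(v):5.1f} kW, "
          f"margin {margin:6.1f} kW -> {status}")
```

Run it and the familiar U-shape appears: the margin is thin at the slow end (induced drag), comfortable in the middle, and gone at the fast end (parasite drag). The safety analogy works the same way: an organisation is ‘safe’ wherever its operating point leaves an adequate margin between the two curves.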

There are some important implications of conceptualising ‘safeness’ in this way. Firstly, an organisation’s rate (its operational tempo and/or growth) produces various types of ‘safety drag’.

At high organisational rates (and pressures), such as those facing the Manx2 Metro at Cork Airport, certain types of ‘safety drag’ dominate. Here these were crew fatigue, thoroughness trade-offs, managerial under-vigilance and other failings, which together decreased the safety-power available.

To be ‘safer’, safety-power demand should have been reduced by decreasing the complexity of the operation. This would have restored the margin to an adequate level.

If this was not possible, then the safety-power available should have been increased. This is the second implication: an organisation has available to it a finite stock of safety resources such as skill, experience and training.

For Manx2 to be ‘safer’, this available safety-power (these skillsets, this experience and training) should have been increased. In any case, situations of high safety-power demand with low safety-power available should have been avoided: the exact condition of flight 7100.
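Those two levers, reducing the demand or raising the supply, can be sketched as a simple margin check. The sketch below is a hedged illustration only: the 0–10 factor scores and the threshold are invented (they do not come from the accident report) and serve only to show the structure of the decision.

```python
# Hypothetical scores standing in for the factors discussed above;
# the numbers are invented, only the decision structure matters.

safety_power_required = {    # demand side (higher = more demanding)
    "weather_complexity": 9,     # fog-prone area, reduced minima
    "aircraft_complexity": 7,    # ageing type, no autopilot
    "operational_pressure": 8,   # schedule and commercial pressure
}

safety_power_available = {   # supply side (higher = more capacity)
    "crew_experience": 2,        # PIC with 25 hours in command
    "training": 3,               # command training incomplete
    "alertness": 3,              # both crew members fatigued
}

required = sum(safety_power_required.values())
available = sum(safety_power_available.values())
MARGIN = 5  # invented minimum acceptable margin

if available - required >= MARGIN:
    print("Adequate margin: operate.")
else:
    # The two remedies described above: reduce the demand (simplify
    # the operation) or raise the supply (experience, training, rest).
    print(f"Margin is {available - required}: reduce safety-power required "
          f"or increase safety-power available before operating.")
```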

Flight 7100’s last journey was an operation involving high ‘safety-power required’ and low ‘safety-power available’—a dangerous combination apparently undetected (or ignored) by managers and regulators.

Operating in an area prone to fickle fogs and severely reduced minima, in an ageing aircraft with no autopilot such as the Metro III, represented a high safety-power demand.

The investigators were convinced such an operation required crew members experienced enough to negotiate such demanding weather skilfully, and experienced enough to say ‘no’ when appropriate. Instead, the crews assigned to the Cork route were extremely inexperienced, fatigued and culturally normalised to flying below safe minima. During the accident flight, the safety-power required far exceeded the safety-power available, and a landing turned into a life-taking inverted crash.

I find conceptualising safety as power available versus power required helpful. It makes the abstract more concrete, and it directs attention where most pilots intuitively know it should be directed: towards adequate resources, training and experience. However safety is conceptualised, it is sobering to imagine flight 7100 landing successfully that final time, its white-faced, tight-knuckled pilots breathing a sigh of relief (rather than breathing their last), and those same pilots and their operator sitting in a human factors class some months later. Asked ‘are we safe?’, what would they have answered? Probably the standard: ‘well, we haven’t crashed yet’.
