Associate Professor Gemma Read, Director of the Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast.
By her own admission, Gemma Read is by no means a human encyclopedia on the precise details of aircraft and accidents. Her area of knowledge is broader, and deeper: the nature of safety itself. She has a background in psychology: ‘I was interested in why people do what they do.’ Her career began with Transport Safety Victoria where she encountered the field of human factors and how human factors issues manifest in the real world. A PhD under the auspices of Monash University’s Accident Research Centre followed and opened the door to an academic career.
She oversees interdisciplinary research into how human factors and systems science interact. This approach is based on the understanding that simple approaches to accident analysis and safety (in aviation, these often relied on the unhelpful term ‘pilot error’) do not fully consider the inherent complexity of transportation systems, nor the full range of factors that shape human behaviour.
A systems and human factors approach can reveal causal factors in accidents that could previously only be described as tragic mysteries. Knowledge of these factors enables change to make future accidents less likely.
Flight Safety Australia spoke to Gemma Read about how human factors and systems research has changed the way experts think about safety, how some ‘common sense’ ideas about safety fold under scrutiny and what lessons from surface transportation can bring to aviation safety.
First, what do we mean when we talk about human factors?
Human factors is a field of research and practice that focuses on understanding the capabilities and limitations of people; it uses this understanding to design systems, technology and organisations that optimise safety and performance. It’s a broad field!
Sometimes human factors work can focus quite a bit on individual performance, or the design of technologies used at the ‘sharp end’ of work, but in our research, we prefer to take a broader systems approach.
Can you define a systems approach?
I always start by saying: a system is just where we’ve got multiple components that are working together to achieve an outcome or goal. And lots of things are systems; there are technical systems and safety management systems, and we also have sociotechnical systems.
Taking a systems approach means considering the overall system as your unit of analysis. For sociotechnical systems, this means looking beyond the individual and considering interactions between humans, and between humans and technology within the system. It also involves considering the broader organisational, political or social system in which behaviour or outcomes take place.
Sociotechnical systems theory goes back to the 1950s and came from studies of work in coal mines in Britain. It talks about how you can create systems that have adaptive capacity, the idea being that you want to jointly design the social and the technical parts of the system so they work together, rather than focusing on one or the other. And if you do that design process well, then the system can be adaptive in terms of being able to respond to external disturbances like increased commercial pressures or unexpected events.
What’s the main implication of the systems approach to accidents?
The central idea is that we can’t truly understand safety or human performance by breaking the system into its component parts and examining these in isolation. It’s the interaction between the components that’s important. By looking at the whole system, rather than focusing on the decisions and actions of individuals, we reduce the focus on blame and instead can identify systemic improvements to avoid similar events from occurring.
In your work with rail and road transport, did you become aware of work from aviation in human factors?
Definitely. Aviation has always been a strong area of human factors application – the founding area of human factors. So in my training, for example, we would take a lot of the case studies from aviation and apply them in other domains.
But one of the things about human factors having a long tradition in aviation is that it can carry some old ideas with it, such as human error.
Error is a very limiting perspective, particularly post-incident, because some investigators stop at finding that there was human error – but they haven’t really learnt much because you could put another person into the same situation and they would probably, at some point, make the same decision. Thinking in terms of error stops us from broader learning about the system and what can be changed at higher system levels. It can also create a blame culture where workers are less likely to speak up about near misses or openly discuss what happened during an adverse event.
We reduce the focus on blame and instead can identify systemic improvements to avoid similar events from occurring.
What have you found about the role of cognitive bias in accidents?
One of the case studies that we often discuss is the Kerang level crossing accident that occurred in 2007. It was catastrophic – a truck struck a train and 11 passengers were killed.
The level crossing had flashing lights but no boom gates. The truck driver had driven through that level crossing once a week for 7 years, on the same route around the same time and he’d never gone over the crossing with the lights flashing. But his truck was delayed that day, and he arrived at the same time as the train.
Years of exposure to a non-event make it possible to develop a strong expectation of no train coming, no lights flashing. Because of the way our mind works, we don’t have the capacity to take in everything that is happening in our environment, so we use our expectations to help us search the world for what’s important. So when we’re looking and visually scanning, we’re looking for the things we expect to see, and most of the time this strategy serves us well.
In this occurrence, the driver didn’t expect to see the lights flashing, and thus, did not perceive them to be flashing – what can be called a ‘looked-but-failed-to-see’ incident. Research suggests such events are somewhat common in driving. Even things that are quite conspicuous can be missed in the face of strong expectations.
In aviation with ‘see-and-avoid’, the same issues would likely be happening in a more difficult visual environment.
The figure for alerted ‘see-and-avoid’ is that it is 8 times more effective than unalerted ‘see-and-avoid’, relying on looking alone.
Exactly. Part of that difference is in expectation, but it is also worth noting that the flashing lights were themselves an alert; so it can really come down to thinking about how we design the whole task and how we can best design alerts and warnings to be effective.
James Reason said the best people make the worst mistakes.
It is timely to talk about James Reason’s legacy in safety and human factors, given his recent passing. The truck driver in the Kerang incident was charged with culpable driving causing death. He went to trial and was acquitted by a jury. There was considerable evidence presented that he had a clean driving record and was a mature driver who took his driving responsibilities seriously. I think it is clear in this case that we have to look at the systemic factors surrounding the accident.

What does Kerang tell us about system defences as opposed to individual actions?
Any administrative control is never going to be as effective as something that’s built into the environment or into the interaction. Aviation has checklists but there have been accidents when crews have read a checklist without actioning it, often under time pressure. Obviously, checklists are better than working purely from memory, but they are not enough in themselves.
At the Kerang crossing, there was no physical barrier. There is research to suggest that if we have the boom barrier as well as the flashing lights, it makes the warning more conspicuous. Our attention is more likely to be drawn by something that is physically moving in the environment.
What’s the importance then of studying why things go right?
I’m sure you’re aware of Erik Hollnagel’s Safety-II concept1, which is about how we can learn from successes, or normal performance, not only from what goes wrong. We tend to focus on what has gone wrong because it’s the unusual event. We think we can delve into that, find out why things went wrong and then fix the system, assuming that it will then be safe.
But we’re missing all these opportunities to understand normal, everyday performance when things go right, or perhaps where there has been a potential accident or breakdown, but actions have been successful in recovering the situation and creating a safe outcome.
I guess my focus is on ‘how do we support people to make good decisions?’ If we focus on that, it’s also a better, more positive conversation to be having with workers, organisations and other stakeholders.
Thinking in terms of error stops us from broader learning about the system and what can be changed at higher system levels.
What does a Safety-II approach, looking at what goes right, tell us about the role of automation?
The traditional view has been that automation is useful because it reduces the opportunity for ‘human error’ by taking the human out of the loop.
But there are a couple of problems with this assumption. The first is that humans have positive adaptability or positive variability in their performance. So humans can respond to unexpected scenarios in a way automation cannot. A designer of an automated system can’t necessarily predict every single circumstance the aircraft might experience. Whereas, if you have a highly experienced pilot, they can create more generalised rules and think more about first principles to solve problems. When I was studying psychology, we had a lecture about the Sioux City crash landing back in the late ‘80s (United Airlines flight 232). This incident is an example of human adaptability and excellence that automation can’t match.
What are your thoughts about the rise of artificial intelligence (AI) and its effect on safety systems?
In safety we will have to think very hard about how we use AI. We’ve had automation for a long time and we still have issues with it. With AI, it’s more of a black box – we don’t necessarily know why the AI system might be behaving in the way that it is, why it’s made a decision or how it’s weighted different data points, for example. So trust in AI systems is currently a big topic in research and there is a push towards explainable AI, where systems can explain the rationale for decisions. Related to that are questions about how to optimise human-AI teams, so that we don’t just focus on developing new AI tools but are thinking about how humans and AI will work together in an optimal way. Really, we are facing a new era of designing sociotechnical systems where both humans and technologies are intelligent agents.
Check out these resources:
- Safety behaviours: human factors for pilots and engineers resource kits
- Safety management systems resource kit.
Available at shop.casa.gov.au
Sources: 1. Safety-II: As aviation safety science has matured, so too has the realisation that the absence of accidents and incidents does not guarantee safety.
I find the comments about the Kerang train accident rather strange. If the driver had never seen the rail crossing with the lights flashing previously, I am surprised that an alert driver’s attention would not have been captured by ‘something different’, i.e. the flashing lights operating. I would not have thought boom gates would have made any difference – the barrier is not that visible at a distance, and the driver may miss the additional lights on the boom just as they missed the main lights.
In level crossing accidents, there are 3 factors often involved:
1. Driver fatigue – drivers not taking notice of their bodies telling them they are too tired to drive. This also applies to aviation.
2. Knowing the train is there, but ‘experience’ telling a driver that they will beat the train to the crossing. A deadly race then ensues to make up lost time. This is akin to ‘get-there-itis’ in aviation, for example trying to race bad weather.
3. Distraction – such as a mobile phone. It’s just too tempting for some people driving along a quiet country highway to use their phone. A driver who regularly travels a particular route could be especially vulnerable to normalisation of deviance, that is, ‘I’ve done this hundreds of times without incident so it’s safe to do it again’. Pilots also need to guard against distraction, for example by maintaining a sterile cockpit during critical times, such as landing.
An interesting comparison though.