Interview: A culture of failure
Thinking about conducting a survey to measure your company’s safety culture? Think again, says complex systems safety researcher Sidney Dekker: it could produce sociological codswallop or, worse, miss the final tweets of the mine canary.
The term “safety culture” was coined by the International Nuclear Safety Advisory Group (INSAG) in its response to the Chernobyl disaster. It has since become a useful tool in investigations of major hazard facilities in Europe, the US and Australia, used to determine what knowledge individuals and the organisation had about risks and safety practices in the lead-up to a major disaster.
INSAG’s 1991 Safety Series report defines safety culture as “that assembly of characteristics and attitudes in organizations and individuals which establishes that, as an overriding priority, nuclear plant safety issues receive the attention warranted by their significance.”
But Dekker, director of research at the Lund University Center for Complexity and Systems Thinking and author of Just Culture and the forthcoming book Drift Into Failure, urges safety professionals to think twice before using surveys to size up a company’s safety culture.
“Every social scientist has been trying to get their heads around [culture] and they still haven’t succeeded. And if [Émile] Durkheim didn’t succeed in the mid-1800s, why do we believe we can succeed in finding out what safety culture means in 2010? It’s hubristic, it’s arrogant and it’s useless.”
“Culture cannot be reduced to the individual component,” he explains. And in that sense, it can’t be added up either - at least not in the way some practitioners have done, collecting the views of many and calling the sum a “culture”.
Attempts to measure culture in this way amount to “Durkheimian bullshit”, says Dekker, referring to the nineteenth-century French social theorist.
“Culture is an emergent property that cannot be brought back to the individual,” he argues.
So should ‘safety culture’ be scrubbed from the OHS practitioner’s vocabulary?
“No, no, no,” says Dekker when asked by OHS Professional. “But I want to make people acutely aware of the deeply problematic nature of that label. When you see how it is used, very often we equate culture with a simple accumulation of what individual components of people in the system feel about certain things.”
Still, culture - or at least the interactions and interdependencies that emerge to help produce it as individuals go about their business - underpins his analysis of why businesses drift into failure on safety.
His aversion to pseudo-scientific attempts to measure culture informs his meditation on failure, which puts a magnifying glass on the “normalisation” of risk that occurs in the pursuit of business success.
“[Complex organisations] fail precisely because they succeed. They drift into failure by doing what they need to do to get really good at what they are doing... Failure breeds opportunistically, non-randomly, in precisely those activities which we need to produce success,” he explains of some of the territory covered in the book.
The systems, of course, “fail” in the context of safety because they were never built to produce safety but rather output. Yet in the process of delivering output, people also bear witness to the production of safe outcomes - a value pool that can be traded away for higher production. In the case of disasters, which typically brew over time, these safe outcomes have the dangerous effect of numbing the senses to warning signs that often become crystal clear only after an accident has occurred. And it is the behaviour displayed as the senses go numb that should be viewed as the mine canary’s death, he argues.
“The problem really is twofold,” explains Dekker. “If you have got this gradual normalisation where each step is only a really small step away from the previous norm, and if you get away with it - that is, if you keep producing safe outcomes - then you get confirmed that it’s a good idea: keep borrowing from safety.”
The second part has implications for what an organisation’s safety culture becomes under the process of normalisation.
“As these things are increasingly normalised, people are no longer likely to report what they see as incidents because they no longer see these things as incidents,” he explains. “It’s normal work to get the job done.” Yet it is precisely these continual normalisations of risk, he argues, that should be the “weak signal” that managers need to hear.
The world, in particular the US, has provided numerous examples of major failures that Dekker believes fit his arguments, and perhaps could have been avoided if we had a different way of thinking (and talking) about complex systems and organisations.
Failures where early warnings were known but considered an “acceptable risk” prior to disaster include the ongoing BP oil spill in the Gulf of Mexico, the 2003 Columbia space shuttle disaster, and the sub-prime mortgage crisis and its global aftermath.
“After the fact we can come in and say, ‘Oh god, we took on way more risk than we should have’. But that’s a completely different perspective from sitting on the inside and having only a very limited historical view, and saying, ‘Yesterday we did this and got away with that, we may as well borrow two more cents from here,’ and then gradually we drift into failure.”
The major problem Dekker sees is that our theories of organisational and human behaviour have fallen behind the technologies used in production, and that we have no way of conceptualising risk because the language that could help us understand it is not widely used.
The lag is having an impact on risk mitigation strategies.
For example, he says, Newtonian theory would say that we can find broken components, which in an organisational context means “human elements”.
“We can see how the properties of the system get exhibited fully and perfectly by components. That is, if a system fails, it’s because a human screwed up, and we can find the broken component, we read them the riot act, or we put them in gaol. And those Newtonian responses I think are not only counter-productive, but are quite misaligned relative to the real complexity of the systems we try to run.”
Finding a new way of discussing safety is in part what his new book is about. If he gets his way, the safety profession will be talking about “tipping points” - a term popularised by author Malcolm Gladwell - rather than, for example, “zero harm”.
“We need to reinvent our language. We need to look at these systems in different terms,” he says.
“If we make people aware of what a tipping point is... and give people that new language, we are in a better position to harness this complexity, and perhaps manage drift or prevent drift.”