Over the last few weeks – just as people are still trying to understand what Francis means and while the NHS is dealing with its own structural changes – we’ve seen safety very publicly rear its head again in a most unpleasant way. Surgery at Leeds General Infirmary was suspended because data suggested there were safety problems. Was this the right thing to do? And what can we learn from it?
As always, let me look at the problem from another perspective. Earlier this year a major aeroplane manufacturer had to ground every single one of their new airliners because of a couple of potentially significant incidents. No one was killed or injured, yet to ground the plane was a major step.
My first question is: what data was used to make the grounding decision? Simply that two incidents occurred which, under slightly altered circumstances, could have been much more serious? In that case, it was enough ‘data’ to make the decision.
My second question is this: what were the implications of grounding the planes? Well, there was an enormous cost to the manufacturer and to the airline business of doing this (costs for one airline alone were running into the millions per month). And for passengers it was inconvenient, as schedules had to be changed or delayed. But to quote from aviation, ‘if you think that’s expensive then try having an accident’.
Healthcare has one big difference. If you suddenly cut capacity it’s not just ‘a little inconvenient’ but potentially dangerous – perhaps life-saving operations can’t take place and people could die by default. So what do you do?
Clearly there’s a risk balance to consider. But that risk balance isn’t just about the here and now of the Leeds case; it’s also about the message it sends to everyone practising in the NHS. If unsafe care is allowed to continue because it’s more convenient, then we might as well stay in the dangerous past.
The argument about Leeds has centred on ‘the data’. What does it show? Shouldn’t patients be allowed to judge for themselves, or, as some have suggested, do patients really understand the data?
My own opinion is that as a patient I’m not an expert on surgery or statistics. I expect professionals to make those judgements and tell me the outcome – yes or no, surgery within acceptable practice or not.
But – and I’ve warned about this in previous blogs – you can’t just use figures and ‘hard’ data to judge safety. Achieving safe outcomes once doesn’t mean you will every time. It’s the process that’s important.
For example, are surgical teams behaving in acceptable ways with each other and following best practice to ensure safety? Are they using the WHO Checklist correctly? This sort of judgement can only be made by observation or perhaps what we sometimes call ‘enhanced oversight’ in aviation.
And if the outcome of a review like that in Leeds is that the processes aren’t good, then the next question is ‘what is it that makes their current practice acceptable to them?’ Is it training, or the culture and leadership of the organisation? Or is the problem elsewhere in the organisation and not with the surgical team at all? Remember, safety problems are almost always the result of a range of causal factors, not just one root cause.
I have complete confidence that the aviation regulators, airlines and manufacturers will deliver safe aeroplanes to me and my passengers, and that when problems occur they’ll be dealt with in a thorough way to ensure that safety can be consistently delivered – through science and knowledge, not luck. Processes and practices at multiple levels allow me to put my trust in it.
Do you have the same trust in healthcare?
Martin is a pilot and the founder and current Chair of the Clinical Human Factors Group, www.twitter.com/MartinBromiley.