

Safety and responsibility: taking a broader view of risk management

12 December 2008


Fiona Dalziel
MA(Hons) CIHM FIHM

Independent Consultant in Practice Management

Fiona is an experienced primary care trainer and facilitator. She is the national RCGP QPA Adviser and has advised on both the original and the review of the Quality and Outcomes Framework of the 2004 GP contract.

Practice managers are all familiar with the term “risk management”; we have to demonstrate that we actively undertake it to comply with the Health and Safety at Work Act. But this limited take on risk management fails to recognise its important application within the practice to the management of patient safety.

Current quality initiatives such as the Quality and Outcomes Framework (QOF) and the Royal College of General Practitioners’ Quality Practice Award (QPA) are based not only upon improving clinical outcomes for patients, but also upon ensuring that their safety is adequately protected, minimising the chance of exposing them to predictable dangers.

The following are examples of this from the QOF and the QPA:

  • Newly registered patients have their notes summarised within eight weeks of receipt by the practice.
  • The practice has undertaken a minimum of 12 significant event reviews in the past three years.
  • The practice adheres to the requirements of the Misuse of Drugs Regulations 1985 and 2001 and the Medicines Act for the storage, prescribing, recording and disposal of controlled drugs.

One possible unintended effect of the annual round of gathering clinical data, managing call and recall systems and recording organisational evidence is that many practices may have lost sight of opportunities to look more widely at patient safety issues.

Risk management in a wider context
How many practice managers can say they spend the amount of time they would like to on quality activities not specifically outlined in the QOF? Yet this is just the type of activity for which we would all want to be known – the ability to run a practice efficiently and to provide good-quality care that does not place patients in harm’s way.

How can we develop a new perspective on this topic and gain more understanding of how to do it better? Getting patient safety right depends on a good understanding of the principles of risk management. However, this may sound as if it has considerable potential to be, at best, slightly dull and, at worst, the sort of topic that would leave you preferring to put your copies of Management in Practice in date order.

Let’s take a broader look at risk management. From Health and Safety legislation, we are aware of risk registers, routine risk assessment and risk management. We have disaster plans in place to use in the event of catastrophe. However, we less often apply the principles of risk management to other systems or procedures in the practice.

Some systems in the practice carry considerable risk of harm to patients. The best example of such a system is the results management procedure. Defence organisation statistics demonstrate that failures in the results system are frequently the source of actual injury to patients. There are, of course, other high-risk systems, eg, repeat prescribing and message handling.

A high-risk system is one where, should the system fail at some point, the final impact of that failure will be harm to the patient – an accident happens to the patient because of the system failure.

But what causes systems to fail in the first place? Professor James Reason distinguishes between individual and organisational accidents.(1,2) Individual accidents are those where the individual is both the cause and the victim of the accident: for example, a driver falls asleep at the wheel of his car and drives into a tree, causing himself injury.

We are looking, however, at organisational accidents. In an organisational accident, a hazard that is a threat to a system manifests itself, the system’s defences do not cope with the hazard and “losses” (eg, accidental injury/loss of money, etc) occur. In general practice, therefore, we are looking at minimising organisational accidents in order to protect patients.

Losses or accidents occur when a system’s defences are breached by hazards. Professor Reason illustrates this for us with his “Swiss cheese” diagram as shown in Figure 1.(1)
More on the cheese later.

[Figure 1: Professor Reason’s “Swiss cheese” diagram of system defences]

Luck and judgment
It is easy in the practice to be lulled into a false sense of security about systems’ safety. No accidents or near misses may have happened in a particular system for a very long time (at least, that you are aware of). No significant events have occurred.

However, this may just be down to luck. Over time, a lack of accidents reassures system users that the system must be working correctly. This is just the point at which an accident is most likely to happen. Safeguards and checks in the system will have become weakened over time. Warnings may be habitually ignored and steps or checks habitually omitted. Training new users may have become lax and diluted.

The slices of cheese in the illustration are defensive steps in a procedure. The defences protect the system against failure. Checks and safeguards are in place, which should allow corrective action to be taken if something has gone wrong. In a system in general practice, these defences (or slices of cheese) might include:

  • Pop-up warnings on a computer screen at the point where someone has made a decision (eg, to prescribe).
  • A final checklist to use at the end of a series of actions.
  • Checking that the number of items processed matches the number of items originally submitted for processing (eg, scanning).
  • An onscreen warning that a step in a procedure has been missed (eg, information has been missed from a pro forma).

You will have noticed that the cheese is Swiss. The holes in the cheese represent failures in the system’s defences that allow a hazard to pass through.

In normal circumstances, a system’s defences all do have holes in them. Although a hazard might pass through one defence, the next defence in the system should stop it in its tracks.

However, a system where the defences have become weakened may have lots of holes, and so the chances of them all lining up and allowing a hazard to cause an accident are increased.

What causes the holes in the cheese? Some are caused by people themselves using the system, probably under pressure, in a way that challenges the integrity of the system’s defences.

A team member may be in a hurry and not perform a check, or may override a warning for the sake of expediency, possibly reinforced by the fact that there were no negative consequences the last time they took that action.

Alternatively, the system may have been designed with a fault that nobody has really noticed, which sits as a potential breach in the defences. A member of staff may have been inadequately trained or supervised. The system itself may be clumsy and unworkable, or the computer software and/or hardware on which it depends may be unreliable or inadequate.

As we are aware from significant event analysis, understanding an accident means looking further than the individual who, in the course of doing their job, committed the action that was the accident’s end cause.

We may well find, upon detailed analysis, that the chain of events leading to the final outcome had been lying in wait from the conception of the whole system: the flaw was designed in from the start, and eventually something was going to trigger the necessary sequence.

Taking care to protect patients
What can we do in practice to use this information constructively in order to protect patients? We can see the importance of ensuring that systems have safeguards or defences built in at appropriate points to prevent hazards passing through. When designing a system, we must take care to ensure that we are actively including these defences.

Additionally, we must take care that our own system design does not include any intrinsic flaws that are accidents-in-waiting. Equally, we must ensure that we do not design systems that are so defence-heavy that they are cumbersome and unworkable. A system of this kind will take so long to execute that it takes up too many resources – particularly staff time.

Building super-safe systems that are also clumsy means that the people using them start to find ways around the system in the cause of just getting the job done in time. The trick is to design a system sufficiently robust to protect from as many hazards as we can reasonably predict, and which is also efficient and user-friendly enough not to be a waste of resources.

The level of protection built into a system will depend on the level of risk to patients an accident in the system would pose. To give an example, if we are designing a new system in the practice for message handling, then at each stage of the procedure we would want to ask the following questions:

  • What could go wrong at this stage (hazards)?
  • How likely is it that this would happen?
  • If it did happen, what might be the consequence(s)?
  • If it did, who would be affected?
  • How serious could the impact of that event be?

The level of defence the practice needs to build in at this stage of the system will depend on this risk assessment. The practice would need to make a judgment, based on risk to patient safety, about how much resource to commit to protecting patients from potential accidents arising from the hazards it has identified.
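By way of illustration (a common risk-scoring approach, rather than anything specified in the QOF or QPA), a practice might rate each hazard for likelihood and for severity on a scale of one to five, multiplying the two figures to give a rough priority score. A telephone message about an abnormal result taken on a loose slip of paper might score three for likelihood of going astray and four for severity if it did, giving 12 out of a possible 25 and making a defence such as a single message book, or an electronic task sent directly to the responsible clinician, well worth the resource. A hazard scoring two or three overall might reasonably be tolerated.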

We can start seeing the potential impact on patient safety of a good understanding of the principles of risk management in its broadest sense. The more attention we are able to pay to hazards and defences when designing systems, the better-protected patients will be.

As mentioned, there will always be the potential for a chain of events that leads the team members using a system to make decisions that cause an accident. On the other hand, there will also be the potential for a sequence of undefended hazards to pass all the way through a system, with disaster averted only at the last minute by the action of a team member.

Our responsibility as practice managers is to understand how to manage risk and patient safety. If we can develop systems based on a better understanding of hazards, accidents and defences, we will be making a difference.

References
1. Reason J. Human error: models and management. BMJ 2000; 320:768-770.
2. Reason J. Managing the risks of organizational accidents. Hampshire: Ashgate; 1997.