

QOF comparisons

by Dr Gavin Jamie
8 February 2016


One way to see how you can improve performance in your surgery is by looking at how other practices have fared on their QOF points

In Greek legend, Sisyphus was punished by Zeus to push a boulder up a hill for eternity, only for it to fall back to the bottom just before he reached the top. The pursuit of quality and outcomes framework (QOF) points can feel much the same. The points are gradually pushed upwards until the end of March, when you have to start all over again.
Later in the year, after processing by the relevant departments, the details are published. These can be a useful resource for practices to compare their performance both locally and nationally.
If your practice is significantly behind other similar practices in a particular indicator then that could represent something that can be improved relatively easily. We can also see trends of improvement or potential problems by comparing the data to previous years.
The most recent data available is for the QOF year that ended in March 2015, and there has been little change to the indicators between that year and this one (April 2015 – March 2016). There were fewer points available, down to 559, with much of the cash transferred to the global sum or the new unplanned admissions directed enhanced service (DES).
Several unpopular indicators around erectile dysfunction, biopsychosocial assessment for depression and enquiring about participation in exercise have been removed, along with the large ‘quality and productivity’ domain.
With all of that in mind, achievement was a little higher than the previous year, with a typical practice missing fewer than 17 points. Indicators with a clear patient benefit have always tended to see higher levels of achievement than those that seemed purely administrative.
The time in which blood pressure measurements in patients with hypertension count towards QOF targets was extended from nine to 12 months, and this also seems to have helped practices to improve their results. Although the number of points available has been cut, these targets are still some of the most valuable in the QOF.

Points’ performance
The published results show that missed points are concentrated in a relatively small number of indicators.
Low numbers of patients can be a problem for practices, particularly where indicators have been introduced that only apply to very small populations. If a practice has no patients who qualify for a particular indicator then they cannot gain any points for that indicator. Practices with lower than average list sizes are more likely to have problems but the random nature of disease means that all practices could potentially be affected.
There are two areas where this has a significant effect on practice income. The first is osteoporosis, which carries nine points spread across three indicators. That can represent quite a large amount of money to practices. In the year leading up to March 2015 more than a quarter of practices in England lost three or more points due to low numbers of patients and more than one-in-10 practices missed out on all nine of the points.
Of course, if no patients have had a suitable fracture in the specified time (after 1 April 2012), then there is nothing that can be done. For many practices, though, there is a clue in the data that this situation can be remedied this year. In 2014/15 a change in the rules for patients over 75 years of age caused a huge drop in the number who qualified for the indicator – and a corresponding increase in practices missing out on points. The rules are the same this year, and the lessons learnt from last year need to be applied now.
Until March 2014 the rule was that all patients over the age of 75 who had a history of fragility fracture since April 2012 should receive a bone protective agent. After April 2014 a further code was required with a diagnosis of osteoporosis. There is no requirement for a DEXA (dual energy X-ray absorptiometry) scan if this is not thought to be clinically appropriate and the GP could make a clinical diagnosis based on the history of the fracture. Almost any code that mentions a diagnosis of osteoporosis will count, but, as with the other indicators in this area, an abnormal DEXA result is not enough on its own.
The second area with small numbers of patients is the pair of indicators concerning patients taking lithium therapy for bipolar affective disorder. These have three points between them. The number of patients prescribed lithium therapy has declined gradually but steadily over the 11 years of the QOF. Around one-in-12 practices have no patients receiving lithium therapy. If patients are receiving lithium prescriptions from another source this will not be picked up by the QOF searches. It is not possible to use a Read code instead of a prescription.
Obviously it would be inappropriate to start therapy purely for the QOF points and so practices may simply be unable to earn these points.
The other area where practices commonly lost points was reaching blood glucose targets in patients with diabetes. There were three different indicators with a total of 35 points between them with a typical (median) practice missing out on four of them. The actual level of achievement for these indicators has not changed much over the last five years although the point thresholds have changed.
The three indicators are phrased in the same way, with HbA1c targets set at 59, 64 and 75 mmol/mol respectively. Nearly half of the points are for the tightest target, but practices tended to lose points fairly evenly across all three.
Practices that lost points may be reassured that they are not alone, but it can be difficult to decide what, if anything, to do about it. Practices will already be making every effort to optimise the treatment received by patients with diabetes. There is one more clue in the data – exception reporting rates.

Exception reporting
Exception reporting is a difficult and complicated topic. Many patients are excepted automatically, for reasons such as registration or diagnosis dates, and the interpretation of the data is correspondingly difficult. NHS England does not publish a breakdown of the reasons for exception reporting.
So, treading carefully: the data does show that the exception reporting rate for practices that achieved full points in each of the diabetes indicators is about 50% higher than average.
Practices with more efficient systems for applying appropriate exception reports seem to get a higher number of points. This works best if exception reporting is used throughout the year rather than as an emergency measure in March.
Calling patients for reviews three times – including at least once by telephone – and recording those who decline or fail to attend requires an organised system but may well pay off.
Once patients have come for a review then consider whether they are really having as much treatment as is suitable and tolerable for them. Having this consideration in mind throughout the year could make things much easier when it is close to data submission time.
The publication includes prevalence figures as well as achievement, and here it is much more difficult to say what is good or bad for practices.
It always pays for practices to increase their disease prevalence as the value of each point is directly proportional to the number of patients on the practice disease register in that area (the palliative care register is the sole exception here).
That seems simple but, as practices are paid relative to the average prevalence, the cash per patient falls when overall prevalence goes up. There is effectively a pool of money allocated by the government, which is then split according to the number of patients with the condition. This is particularly noticeable in conditions such as diabetes, where prevalence has increased by half over the 11 years of the QOF. This has the effect of diluting the payment per point, although the total number of points has varied significantly over that time.
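The dilution effect can be sketched with some simple arithmetic. This is a toy illustration, not the real QOF payment formula (which includes further adjustments for list size and relative prevalence), and all the figures are invented for the example:

```python
# Toy illustration of prevalence "dilution": the pool of money for a
# disease area is broadly fixed, so more patients on the registers
# nationally means less cash per patient. Figures are hypothetical.

def cash_per_patient(pool: float, patients_on_registers: int) -> float:
    """Split a fixed national pool across all patients on the registers."""
    return pool / patients_on_registers

POOL = 1_000_000.0  # hypothetical pool for one disease area

# Baseline: 200,000 patients nationally on the register.
print(cash_per_patient(POOL, 200_000))  # 5.0 per patient

# Prevalence rises by half, as with diabetes over the life of the QOF:
# the same pool is now split across 300,000 patients.
print(cash_per_patient(POOL, 300_000))  # ~3.33 per patient
```

A practice whose own register grows in line with the national trend holds its share; one whose register does not grow ends up below average prevalence and is paid proportionately less for the same points.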
The general trend for most disease areas is an increase in prevalence. Dementia has risen in each of the nine years that it has been included in QOF but had an unusually sharp rise last year – most likely due to the enhanced service for new diagnosis.
The effect of this rise on the payment per point will be more than compensated for by the greatly increased number of points this year.
It’s not all bad news. There has been virtually no increase in the prevalence of stroke over the past few years and the trend in coronary heart disease has been downwards since the introduction of QOF in 2004/05.
Disease prevalence can vary quite widely between practices depending on age, social and economic situation of their patients.
Practices will know their patients better than anyone but it can be worth checking your prevalence against other practices in your area.
If you have a lower prevalence than practices with a similar population then it could be beneficial to look at how patients are diagnosed and coded at your practice. The QOF Database website allows you to easily compare your prevalence figures both nationally and locally.
This article has concentrated on the areas where scores were lower, but this is in the context of very high levels of achievement across the QOF by almost every practice.

Dr Gavin Jamie is a full-time GP in Swindon with an interest in health informatics. He runs the QOF Database website.