Over recent years there has been a growing trend to use the UK General Practitioner (GP) Quality and Outcomes Framework* (QoF) exception reporting rates as a quality measure or standard. For example, in Leicester, one entry requirement for the Primary Care Diabetes (Enhanced) Service (a scheme to reward more specialised community diabetes care) is that the specification requires the provider(s) to show:

- Evidence of low QoF exception reporting (less than 10%)

Similarly, in the annual quality review of our practice the overall exception reporting rate is routinely reported as a quality measure, compared with the Clinical Commissioning Group (CCG) average, but with no explanation of what this is actually supposed to indicate.
This marks an unwelcome development as I believe it betrays a misunderstanding of the topic. This blog is about why.
For the uninitiated, some explanation of the system is necessary. QoF payment is based on points scored in a range of disease areas for achieving certain quality standards. The number of points available for each area, such as for how well blood pressure (BP) is controlled, is fixed. The actual point score for a practice depends on the proportion of eligible patients meeting the audit criterion at the year end, measured against a defined percentage range within which points are counted. So if 10 points are available for a disease area and the point-scoring range is 50-90%, then if 70% of eligible patients meet the audit criterion, say of good BP control, 5 points are scored (10 x (70-50)/(90-50)).
So for the above QoF area a practice scores nothing if it does less than half the work and nothing more if it meets the QoF criterion in over 90% of those potentially eligible.
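The scoring rule above can be sketched in a few lines of Python (a minimal sketch; the function name and signature are my own, not official QoF terminology):

```python
# Illustrative sketch of the QoF points formula described in the text.
# Names here are my own, not official QoF terms.
def qof_points(points_available, lower, upper, achievement):
    """Points scored for one indicator, where achievement is the
    percentage of eligible patients meeting the audit criterion."""
    if achievement <= lower:
        return 0.0                      # less than half the work: nothing
    if achievement >= upper:
        return float(points_available)  # capped at the top of the range
    return points_available * (achievement - lower) / (upper - lower)

print(qof_points(10, 50, 90, 70))  # 5.0  - the worked example in the text
print(qof_points(10, 50, 90, 45))  # 0.0  - below the bottom of the range
print(qof_points(10, 50, 90, 95))  # 10.0 - no extra credit above 90%
```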
So where does exception reporting come in? The key is in the phrase 'eligible patient'. Individual patients who might be suitable for assessment of care in each QoF disease area can be deemed ineligible and exception reported. There are various valid reasons for this. For example, a person whose poorly controlled BP misses the target value may also be suffering from terminal cancer, in which case worrying about their degree of BP control is hardly relevant. Such a person could be exception reported as 'unsuitable'. Someone else may simply refuse to be followed up, making it impossible to meet the care standard; they can be exception reported as 'informed dissent'. These patients are then not counted when it comes to assessing points.
The rationale for this is to level the playing field between practices, as the number of patients in these groups will vary year on year and with the population served, and to avoid inappropriate treatment.
So why the fuss about exception reports above?
Suppose you have two practices with 100 patients each who are potentially eligible for a particular QoF indicator. Just before the year end one has met the criterion in 83 patients and the other in 89. Both are a little short of the maximum 90% target. In the first practice there are 8 patients who could be legitimately excepted, an 8% exception reporting rate. The practice excepts them, achieving 83/92 or 90.2%, and thus gets the maximum number of points available for that area.
In the second they except two, a 2% exception reporting rate, achieving 89/98 or 90.8%. The rub is this: both get the same maximum QoF points, but the second practice has treated 6 more patients to target. So you can see why having a low exception reporting rate might be seen as a 'good thing', but does a low rate actually indicate a better quality of care?
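The two-practice arithmetic can be checked with a short sketch (figures taken from the example above; the helper name is mine):

```python
# Sketch of the two-practice example: achievement is recalculated
# after exception-reported patients are removed from the denominator.
def achievement_pct(treated, eligible, excepted):
    """Percentage achievement once excepted patients are discounted."""
    return 100 * treated / (eligible - excepted)

print(round(achievement_pct(83, 100, 8), 1))  # 90.2 - practice 1 clears the 90% cap
print(round(achievement_pct(89, 100, 2), 1))  # 90.8 - practice 2 clears it too
```

Both land just above 90%, so both collect maximum points despite treating different numbers of patients to target.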
Suppose in the two practices the maximum number of patients that could reasonably be excepted was 8 and 2 respectively. In this case the first practice failed to treat to target 9 (= 92 - 83) of the patients it could have treated, as did the second (= 98 - 89). So despite a fourfold variation in exception reporting rates they under-treat the same number of people. (The second has treated 6 more people to target but only because it was easier to do so; exception-reported patients are frequently more complex or less compliant, so it is perfectly reasonable that both get maximum QoF points in this area, despite the variation in exception reporting rate.)
But supposing both practices could have excluded 8 patients, then the second practice is performing better, it just didn't need to report as many to get maximum points, because once you have achieved maximum points there is no point in taking exception reporting further. The problem is you cannot know this just by looking at exception reporting rates; you need to know the number who could have been exception reported, and this latter figure is never assessed in QoF.
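A minimal sketch of this point, using the example's figures: under-treatment depends on the maximum number of exceptable patients, a figure QoF never records (the function name is my own).

```python
# Under-treatment = patients who could have been treated to target but weren't,
# which requires knowing how many patients could legitimately be excepted.
def under_treated(eligible, treated, max_exceptable):
    """Patients who could have been treated to target but were not."""
    return (eligible - max_exceptable) - treated

# Scenario 1: maximum exceptable is 8 and 2 respectively - same under-treatment
print(under_treated(100, 83, 8))  # 9
print(under_treated(100, 89, 2))  # 9
# Scenario 2: both could have excepted 8 - the second practice is doing better
print(under_treated(100, 89, 8))  # 3
```

The exception reporting rates (8% vs 2%) are identical in both scenarios, yet the quality comparison flips depending on `max_exceptable`, which is exactly the figure never assessed in QoF.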
The point is that both practices have at least met their contractual obligations to the same degree, but one might be over-performing if its exception reporting rate could have been higher. So on their own, exception reporting rates tell you nothing about quality of care.
One might argue that despite this, downward pressure on exception reporting will help keep coverage rates up. However, it cannot be assumed that this is necessarily a good thing as it may result in over-treatment which itself carries hazards we are increasingly aware of.
A better strategy to keep treatment rates high would be to raise the top of the QoF target range, and this is what has happened as QoF has gone on. But even this may not be wise, given the dis-benefits are bound to increase and, as the above figures show, the potential improvements are marginal.
In our example suppose you had to hit 95% to get maximum points. Our first practice would indeed have to boost performance from 83 to 88 patients, as it could except no more; our second, if it could except more, could except 7 to achieve the 95% target. Its exception reporting rate will have jumped over threefold to 7% in response to the change in top target from 90% to 95%, but its quality of care is no different; it is just making real its over-performance under the old 90% target. On its own, exception reporting cannot be used to meaningfully compare practices, or to assess a single practice over time when the target range or audit criteria have changed.
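The effect of raising the top target can be sketched as a small search for the minimum number of exceptions needed to clear it (illustrative only; the helper name is mine, and the figures follow the text's example):

```python
# How many exceptions does a practice need before its achievement
# percentage clears a given top target?
def exceptions_needed(eligible, treated, target_pct):
    """Smallest number of exceptions lifting achievement to target_pct,
    or None if even excepting every untreated patient is not enough."""
    for excepted in range(eligible - treated + 1):
        if 100 * treated / (eligible - excepted) >= target_pct:
            return excepted
    return None

print(exceptions_needed(100, 89, 95))  # 7 - second practice under the new 95% cap
print(exceptions_needed(100, 89, 90))  # 2 - it only needed 2 under the old 90% cap
```

The same practice, with the same care delivered, reports 2% or 7% exceptions depending purely on where the top of the target range sits.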
So the message is that exception reporting is not a meaningful quality measure unless you also know the maximum potential exception reporting rate for each indicator in each practice. So please, Area Teams and CCGs, stop using it as such!
Maybe the worry is that some practices over-exception report merely to hit QoF targets and that patients for whom a care standard is appropriate are being denied it by being wrongly exception reported. If this is happening then this is a probity issue, not a quality of care one. CCGs, by all means query outliers in exception reporting rates and do some post payment verification. All practices should be recording reasons on the exception codes to justify them. But please drop the uninterpretable exception reporting rates from your quality dashboards and service specifications.
* a quality incentive scheme, still responsible for a significant but shrinking proportion of GP remuneration, in which payments are made according to the points scored