Almost everyone was surprised at the large amount of money attached to QOF when it came in 2004. The reason was fairly simple: GP recruitment in the late 1990s was poor and the Department of Health (DH) had accepted that GPs needed a substantial pay rise.
It was also the time when Tony Blair had announced a big increase in NHS funding, so there was money about. However, the Treasury weren’t going to give GPs something for nothing, and the ‘deal’ was that they’d get a big pay rise in return for improving quality.
The DH, of course, underestimated how well GPs would do – unlike the GPC, who had always told the government that GPs would score highly from the start. However, the large amount of money that came with QOF had a number of perverse effects, such as distracting GPs from important aspects of care that weren’t measured.
Although QOF was controversial at the beginning, there was remarkably little complaint about the actual indicators themselves. They were, by and large, things GPs thought they should be doing anyway. However, later on new indicators began to be introduced which had little professional support. Some of these were spectacular failures, of which two examples bear description.
The first was the requirement to assess the severity of depression using a standardised scale (PHQ-9), the rationale being that GPs underestimate the severity of depression. This indicator was very unpopular from the start, partly because producing a questionnaire halfway through a consultation for depression often seemed inappropriate and intrusive. After a few years, the indicator was dropped, to be replaced by a less intrusive but meaningless one about bio-psychosocial assessments. This is now also to be dropped. The message here is that not everything that is important can be measured, and you shouldn’t force indicators onto aspects of practice that aren’t easily measured.
A second indicator which went seriously wrong was the attempt to pay GPs based on patient survey scores about getting appointments. Speedy access has never been high on GPs’ agenda, with doctors more concerned about issues such as continuity of care.
However, the misery was compounded by a misjudgement in the formula linking payments to survey scores, which meant that there was large, random, year-to-year variation in payments. A practice that had made considerable efforts to improve appointments could find that its payments the next year were reduced.
There was general agreement that QOF needed a major revamp, and it has had a modest one, covering both the indicators and the amount of money attached to it.
The most important lesson learned is to stick to indicators which have clear professional support, and be careful how you link payment to performance, especially when it comes to surveys.
The overall value of QOF has now been reduced, which is a good thing. It was certainly too dominant in relation to the many ‘non-QOF’ things that GPs have to do. In my view, its value should come down further, until it’s only worth 10-15% of gross practice income. This will remove more of the perverse effects of QOF while still allowing some aspects of good care to be rewarded.
As QOF becomes smaller, it should become more flexible, with indicators and conditions being rotated. CCGs and LATs are also likely to introduce performance-related incentives to meet local needs. Pay for performance isn’t on the way out, but what we have seen this month is a welcome reassessment of its place in general practice.
Professor Martin Roland is professor of health services research at Cambridge University and a part-time GP in Cambridge. He advised the government and BMA negotiating teams on the development of the original QOF from 2001 to 2003.