Map of Medicine medical director, Dr Pritpal Tamber, on how their quest to produce maps showing best value care uncovered a worrying lack of evidence in this field
Until about a year ago our focus was to create maps that showed clinicians how to deliver best care.
But with the arrival of QIPP in the wake of the financial crisis, our clients quickly began to ask for evidence on the most cost-effective ways to deliver services.
Responding to this demand has been quite a challenge, as there is actually a worrying lack of high-quality evidence on cost-effectiveness.
There is a lot of information out there. There’s guidance from the Department of Health, national audits, best practice statements from august bodies, local case studies, clinical practice guidelines, and, of course, raw data from the NHS Information Centre. We decided to pull out the value messages from all of these sources, focussing on COPD and diabetes as test-runs.
The resulting diabetes document was over 50 sides long; the COPD document was a little shorter, but not by much. We felt they were too long to be useful, so we decided to identify only the highest-quality, most reliable information – the kind of messages that can be applied in any health community.
Our first step was to ascertain whether the sources were reliable – whether we could trust how they had been put together. One of the first things that struck us was that many of the sources did not name their authors or declare conflicts of interest. These we excluded immediately.
A large number of the remaining sources were based on the opinions of groups of specialists. There are structured methods for harnessing people’s opinions in a way that reduces the likelihood of one person influencing the rest, but much of the information we reviewed gave no description of how the specialists’ views had been gathered. Without such background it is hard to know whether the information truly represents the views of the group, so these sources were also excluded.
We were left with a much smaller pool of information, but we still wanted to ensure that we had a robust and repeatable methodology for selecting the messages that we would add to our care maps.
We then realised that many of the messages we were left with were hopeful in nature – they were penned in the belief that they would reduce the cost of care, without a robust study to prove it. The classic example is prevention. There are many voices urging us to invest in prevention on the grounds that the more ill-health we prevent, the lower the costs of care. It sounds instinctively right, but we found very little evidence to prove it. Another example is the belief that moving more specialist services into the community will reduce overall costs – again, there is very little evidence to back this up.
After removing the ‘hopeful’ messages, our final check was whether the remaining interventions not only saved money but also maintained (or improved) the quality of care. Anything that either reduced the quality of care, or did not explicitly measure the impact of the cost-saving measure on quality, was excluded.
The final pools of information were intriguingly small. For diabetes, having started with 50 sides of information, the final document contains 10 messages that barely cover two sides. COPD has only seven messages, and heart failure has five.
The sheer lack of high-quality evidence on cost-effectiveness is deeply worrying, and our experiences should inform a research agenda. But by using the resulting documents – we call them Productivity Considerations for Service Design – a local health community can at least start from a small list of robustly proven priorities rather than relying on ‘hopeful’ messages or on messages put together in a methodologically questionable manner.