Why patient surveys to assess GPs are flawed
The quality of the patient's experience has an impact on the effectiveness of medical treatment, so it's good news that the new GMS contract will include incentives to seek feedback from patients by means of surveys.
This is not a unique initiative. All NHS trusts are now required to survey their patients on an annual basis and the Commission for Health Improvement (CHI) has commissioned the development of questionnaires to obtain patient feedback for the full range of PCT services as well as acute and mental health trusts.
In addition the GMC is encouraging doctors to seek systematic feedback from patients as part of the revalidation process.
Unfortunately, these initiatives are proceeding in an unco-ordinated fashion and it looks as if a major opportunity to obtain feedback systematically across different levels of the service has been missed.
The GMS negotiators have announced that practices wishing to maximise their income by showing they can comply with the quality standards will have to use either the General Practice Assessment Survey (GPAS), the University of Manchester's adaptation of an American questionnaire, or the Improving Practice Questionnaire (IPQ), which was originally developed for the Royal Australian College of General Practitioners. The big question is whether either of these will produce useful results.
The two questionnaires are quite limited in their scope, focusing mainly on practice organisation and doctors' interpersonal skills. Apart from one very general and not particularly useful item in the IPQ, neither instrument covers screening or preventive advice, and there are no questions about disabled access, links to social care or support for carers. The IPQ contains no questions about nurses or care from other staff and the GPAS contains no questions about patient information.
Both GMS surveys measure patient satisfaction using Likert scales to get patients to rate aspects of their care. So for example, the IPQ asks patients to rate their level of satisfaction with the after-hours service (poor, fair, good, very good, excellent) and the GPAS asks them to rate the amount of time the doctor spends with them (very poor, poor, fair, good, very good, excellent).
Patient satisfaction is a slippery concept. Satisfaction ratings reflect three variables: the patient's personal preferences, the patient's expectations, and the reality of the care received. When people are asked to rate their care on a scale, it is impossible to disentangle these three effects: a high rating may reflect genuinely good care, or simply low expectations.
This may not matter if you only want to monitor trends in your own practice, but if you want to benchmark your data against other practices it becomes more problematic. Socio-economic or cultural differences could affect the results, requiring complex statistical adjustments for valid comparisons.
Questions to elicit factual reports of patients' experience of care give more useful pointers to what needs to be changed than satisfaction ratings, so CHI's PCT surveys will measure patients' experience.
Despite these shortcomings, the GMS surveys could provide helpful feedback for practices, but only if they're administered much more carefully and systematically than is usually the case in general practice.
In an ideal world this type of evaluation would be carried out independently of the practice, using questionnaires mailed to random samples of patients, with up to two reminders to ensure a response rate of at least 60 per cent.
Whatever happens, it will be essential to try to minimise bias. Handing out questionnaires to your favourite patients with a prompt to return favourable reports simply will not do, although if money or revalidation hangs on it there will be a temptation to do just this.
More fundamentally, carrying out the survey is not nearly as important as using the results to stimulate change. Practices should be rewarded for the actions they have taken to improve patients' experience, not just for dishing out questionnaires.