CQC reviewing practices’ ‘risk ratings’ data as inaccuracies emerge

Exclusive The CQC is in talks with NHS England and the GPC about reviewing the data used by the regulator for its ‘risk ratings’ of GP practices, as GPs claim the data is inaccurate, Pulse can reveal.

Pulse has uncovered claims that practices were being marked down as ‘risky’ because of problems with the interpretation of data, including a failure to acknowledge that practices had stopped doing QOF work on NHS England’s advice, calculations being based on practice lists of zero patients, and problems with the GP Patient Survey.

The CQC has said it is reviewing the indicators alongside NHS England to see how they can be ‘refined’, while the GPC has said the ratings are ‘not fit for purpose’ and that it is working with the regulator to see whether they should continue in their current form.

The review by the three bodies comes after the inspection body came under fire for its decision to publish GP risk ratings, which are used to prioritise practices for inspection, despite warnings from GP leaders that the data should be kept as an internal tool.

The regulator uses data from practices’ QOF performance, the GP Patient Survey and other sources to highlight potential areas where there is a risk of poor care. Based on their scores across these 38 indicators, practices are given an overall rating from 1 – meaning very risky – to 6, the lowest risk rating.
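The article does not describe how the CQC converts indicator results into a band, so the following is only an illustrative sketch: it assumes a simple weighted count of ‘risk’ and ‘elevated risk’ flags across the 38 indicators, with purely hypothetical weights and cut-offs, mapped onto the published 1-6 scale (1 = highest risk, 6 = lowest).

```python
# Illustrative sketch only: the CQC's real banding rules are not given in the
# article, so the weights and cut-offs below are hypothetical assumptions.

def band_from_flags(flags):
    """Map per-indicator flags to a 1-6 band (1 = highest risk, 6 = lowest)."""
    elevated = sum(f == "elevated risk" for f in flags)
    at_risk = sum(f == "risk" for f in flags)
    score = 2 * elevated + at_risk  # assumption: 'elevated risk' weighted double
    # Hypothetical cut-offs mapping the score onto the six bands
    for band, limit in zip((6, 5, 4, 3, 2), (0, 1, 3, 5, 8)):
        if score <= limit:
            return band
    return 1

# Example: 2 'elevated risk' and 1 'risk' flag across 38 indicators -> band 3
print(band_from_flags(["elevated risk"] * 2 + ["risk"] + ["no risk"] * 35))
```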

But a spokesperson for the CQC told Pulse that it had ‘concerns’ that the bandings were being seen as ‘judgements’, and that it was looking into how the indicators could be refined.

It comes as Pulse has heard multiple complaints about the validity of the CQC data, including:

  • One LMC writing to the chief inspector of general practice, Professor Steve Field, stating that the ratings are punishing its practices for ditching QOF indicators on NHS England advice;
  • The practice of a former LMC leader being labelled as a high priority for inspection because of their red ‘elevated risk’ rating for unplanned admissions, which was based on ‘statistically impossible’ data;
  • A practice having its patient survey scores reduced by 34% because a high proportion of respondents said the question ‘doesn’t apply’ to them.

It is the latest blow to the ratings system, launched by the CQC last week, which was immediately hit with a backlash from GPs astonished that they could be given a risk rating without even being visited by the CQC. The situation became even more incendiary when the Government said it would use the risk scores on its new TripAdvisor-style NHS comparison website aimed at patients.

A CQC spokesperson said: ‘We have had many responses to the first publication of our intelligent monitoring data for general practice. A concern raised with us is that the bandings are being reported as judgements and ratings. We share this concern – the bandings help prioritise practices for inspection and are not a judgement about care quality. Judgement and ratings only come after inspection, drawing upon what we see and hear as well as the available data.

‘Intelligent monitoring clearly shows that the vast majority of practices are of low concern based on the available data. We are also pleased to announce we will be publishing a further batch of inspector reports early next week that show GP services delivering excellent care.

‘We are currently reviewing all feedback we have received and together with NHS England looking at how we can further refine indicators. Any changes we make will be reflected on our website and any practice whose banding changes such that they become a priority for inspection will be notified in advance.’

GPC deputy chair Dr Richard Vautrey told Pulse: ‘We [the GPC and CQC] have also agreed to set up a group together with the RCGP to review both the data being used and how it is used.

‘The intention is to have a fundamental look at what could be used by CQC rather than the current indicators and ranking which is not fit for purpose.’

The concerns were raised by practices after they realised they were being marked down on the basis of inaccurate data.

Dr Mark Sanford-Wood, medical secretary of Devon LMC – who wrote to Professor Field – said the CQC was ‘unfairly flagging many of those practices as being high risk, when they are nothing of the sort’.

He claims practices in Devon and Cornwall have found themselves marked down for their low QOF scores towards the end of the 2013-14 financial year, despite NHS England signing off a CCG-led initiative for them to stop reporting on QOF indicators that were scrapped under this year’s GP contract.

This is in contrast to neighbouring Somerset, where practices were not given risk ratings, having signed up to a local incentive scheme from January.

Dr Sanford-Wood wrote: ‘Sadly, your methodology is now unfairly flagging many of those practices as being high risk, when they are nothing of the sort. You are clearly not aware of the agreement reached in Devon otherwise I am sure you would have compensated for this, but it remains of great concern that a national programme such as this could be launched when so badly flawed by something so obvious.’

Former Sefton LMC chair Dr Andrew Mimnagh said that data for his practice showed a ‘statistically impossible’ level of accident and emergency admissions.

He said: ‘The partners are looking at how our 6,800 patients managed to rack up 150,000 AED attendances, given we have only 11 frequent flyers with more than 10 AED attendances per annum.

‘We’d have to have someone who’s basically living there, being readmitted every day… we’ve been monitoring the frequent fliers and attenders and it’s just statistically impossible to be racking up that level of attendances.’
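As a rough sanity check on those figures (using only the totals quoted above; the reporting period is not specified in the article), the reported attendance count works out at roughly 22 A&E attendances per registered patient:

```python
# Rough sanity check using only the figures Dr Mimnagh quotes above.
list_size = 6_800                # registered patients
reported_attendances = 150_000   # A&E attendances attributed to the practice

per_patient = reported_attendances / list_size
print(f"{per_patient:.1f} A&E attendances per registered patient")  # ~22.1
```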

Matthew Epton, a GP practice manager in Oxford, claims his practice was marked down because of supposedly poor scores from the GP Patient Survey, which marked ‘not applicable’ responses as negative responses.

For example, he said, the CQC data showed that only 60% of 234 patients said that the last nurse they saw or spoke to was good at giving them enough time.

However, only one of these patients had marked the practice down as poor, eight as ‘neither good nor poor’, while 84 had said the question was not applicable.

Mr Epton explained: ‘The 84 patients that have clicked doesn’t apply probably did so because they didn’t see a nurse on that visit or have never seen a nurse. The correct result should have been 94%. This is a massive difference and makes our practice look relatively poor and damages our reputation.’
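The two percentages can be reconciled from the response counts quoted above. Assuming the remaining 141 respondents all answered favourably (the article does not break down ‘good’ versus ‘very good’), counting the 84 ‘doesn’t apply’ answers against the practice gives roughly 60%, while excluding them gives 94%:

```python
# Reconciling the 60% and 94% figures from the response counts quoted above.
total_respondents = 234
poor = 1
neither = 8
not_applicable = 84
# Assumption: everyone else answered favourably (the good/very good split is not given)
favourable = total_respondents - poor - neither - not_applicable   # 141

including_na = favourable / total_respondents                     # 'doesn't apply' counted as negative
excluding_na = favourable / (total_respondents - not_applicable)  # 'doesn't apply' excluded

print(f"counting 'doesn't apply' as negative: {including_na:.0%}")  # ~60%
print(f"excluding 'doesn't apply':            {excluding_na:.0%}")  # 94%
```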

The GPC, which met with the CQC after seeking legal advice on the damage caused to practices’ reputations, has also insisted the regulator must work with the media, and particularly the local press, to counter erroneous reports that practices are putting patients at risk.