QOF performance 'linked with choice of IT system'

The choice of clinical computing system has been linked with a larger effect on QOF performance than any other practice characteristic, conclude UK researchers.

The study of more than 8,000 practices in England, covering the seven most popular GP clinical computing systems in use between 2008 and 2011, found differences in QOF performance - worth up to around £600 a year to the average practice - linked with the choice of computer system.

For the study, published in BMJ Open this month, researchers analysed QOF achievement, exception and prevalence data for the period 2008 to 2011 and examined how these were associated with practice characteristics and the GP clinical computing system used.

The researchers acknowledged absolute differences in QOF performance were ‘very small’, but said that the choice of clinical computing system had a bigger effect on QOF performance than any other practice characteristic.

‘The association between clinical computing system and QOF performance was stronger than for any other patient or practice characteristic, including list size, proportion of patients over the age of 65 and local area deprivation,’ the team wrote.

For example, performance differed by as much as 2.7% between computing systems, but only 0.14% at most per 1,000-patient increase in list size, and by up to 1.6% between the least and most deprived areas.

After adjusting for practice characteristics, such as list size, local area deprivation and rural location, the iSoft systems Synergy and Premiere were associated with the best QOF performance, with practices using Synergy predicted to achieve an average of £602 more a year, and those using Premiere £563 a year, compared with practices using EMIS’s PCS.
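
As a rough illustration of what 'adjusting for practice characteristics' means here, the sketch below is not the authors' code; the file name, column names and the simple ordinary least squares model are assumptions for illustration only. It fits achievement against the clinical system with practice characteristics as covariates, so the system coefficients give adjusted differences relative to EMIS PCS.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical layout: one row per practice with columns
    #   qof_score          overall QOF achievement (%)
    #   system             clinical system name, e.g. 'Synergy', 'Premiere', 'EMIS PCS'
    #   list_size          registered patients, in thousands
    #   deprivation_score  local area deprivation index
    #   rural              1 if the practice is in a rural location, else 0
    practices = pd.read_csv("practices.csv")

    # EMIS PCS is treated as the reference category, matching the comparison above;
    # the fitted coefficients on the system dummies are the adjusted differences.
    model = smf.ols(
        "qof_score ~ C(system, Treatment(reference='EMIS PCS'))"
        " + list_size + deprivation_score + rural",
        data=practices,
    ).fit()
    print(model.summary())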

However, the researchers did point out that the PCS system had now been superseded by the EMIS Web system.

The team from the University of Manchester and the University of York concluded: ‘In the UK, performance on the QOF, the world’s largest health-related pay-for-performance scheme, is partly dependent on the clinical computing system used by practices. This raises the question of whether particular characteristics of computing systems facilitate higher quality of care, better data recording or both.

‘This question is of interest to clinicians and to policy makers, for whom this work highlights an inconsistency across clinical computer systems which needs to be understood and addressed.’

Kathie Applebee, chair of the National Vision User Group, commented: ‘I think that the ability to tailor Vision and the late lamented Premiere and Synergy, and the fact that so many of their users were early adopters of GP IT (from VAMP and Abies respectively) may be influential. Both companies developed systems to ensure that high-quality data could be collected and this was good preparation for QOF.’

‘In my opinion, practices that exploit systems by being able to tailor them to meet their own needs probably benefit,’ she added.

Dr Grant Ingrams, GPC IT subcommittee member and a GP in Coventry, said the differences were small but the study was interesting and should get software suppliers looking at how they can improve things.

He said: ‘Often it’s about how well you’re able to capture the data and recognise missing data – the easier the system makes it with pop-up reminders, the easier it will be. It would be interesting to do this sort of analysis to see which system allows people to pick up new indicators quicker.’

But Dr Shaun O’Hanlon, clinical and development director for EMIS, said that the analysis was ‘flawed’.

He said: ‘We therefore cannot be confident of its conclusions. It is disappointing that the figures do not include EMIS Web, which is now the most widely used GP clinical system and has a number of innovative tools to optimise QOF performance.’

Dr Paul Cundy, chair of the GPC’s IT subcommittee, said the differences uncovered in the study would have had little impact on practices’ relative incomes.

Dr Cundy said: ‘The spread of achievements said to be due to the systems the practices were using was 1% to 2% against an average achievement of 90%; if this variation is due to the computer systems it is very small and translates to very small sums of money.’

BMJ Open 2013; online 2 August

Readers' comments (4)

  • This research makes no mention of which software was actually used to identify and gather QOF information. As many practices choose to use the Informatica systems (Contract+ especially), there is every likelihood that a comparison of QOF points against the basic practice system will be meaningless.

  • I agree with the above comment, the comment factor in all iSOFT systems is that they all have access to Contract+ from Informatica which helps collect the QOF data

  • *common

  • We also have many concerns regarding this paper in addition to those already raised by Mary Hawking.

    1. The authors have undertaken a cross-sectional study - this is the weakest study design and the results of such studies can only ever signal associations rather than attribute causality. Limited information is provided on the ‘adjustments’ made in the analyses, meaning the reader cannot be confident that any of the tiny differences relate to the computer system rather than population characteristics (e.g. socio-demographic, lifestyle or levels of morbidity) or unmeasured confounding factors.
    2. The authors have used multiple statistical testing to compare results by computer system. Whilst they have selected a p-value threshold of 0.05 (which is arguably too lenient given the number of tests undertaken), they fail to mention what constitutes a clinically/practically important difference. The paper is then written as if they found important differences in outcomes by GP computer system when in reality the differences are negligible (something which was also pointed out by the peer reviewer). There is also an assumption that the QOF measures quality of care, which is debatable.
    3. The authors have undertaken a study about GP computer systems and have failed to spot that there is no system called ProdSysOneX. This demonstrates a surprising lack of knowledge about general practice by the researchers and a potential lack of documentation/validation by the HSCIC. It also suggests the authors did not verify the data obtained from the HSCIC on receipt. Simple measures could have prevented this before publication, including a Google search, a visit to the TPP website, an email to system suppliers or a discussion with any practising GP. Perhaps it should also have been picked up during the peer review process?
    4. The errors in table 3 are particularly noticeable as it states that the predominant system in use in London is ProdSysOneX. This is incorrect, since EMIS was and is the predominant supplier in London. The authors have tried to correct their mistakes by relabelling the columns and publishing a revised paper on the University of Manchester website. However, this still has very basic errors, with inconsistencies between the text and the tables: the text of the results states that LV is used by 24.8% of practices in London and 50.5% of practices in the South East, whereas the revised table states LV is used in 50.5% of London and 36.8% of the South East. How can the reader be confident that these errors do not affect the multilevel analysis, which includes SHA as a level?
    5. The supplementary tables had whole sections where the data appear to be missing – is this intentional?
    6. The authors have arrived at a key message: “Researchers that utilise primary care databases, which collect data from a single clinical system need to be cautious when generalizing their findings to all English practices”. Whilst issues of generalisability of research findings are important, this was not the aim of the paper, the study was not designed to demonstrate it, and no results have been presented to support the authors’ key message.
    7. Many of the higher-ranked systems rely on third-party add-ons to optimise their QOF figures. More modern systems (for example EMIS Web) have new features designed to improve clinical care and outcomes for patients as well as to enable practices to maximise their ‘QOF points’.


    Authors:
    Julia Hippisley-Cox, Professor of Clinical Epidemiology & General Practice, University of Nottingham; Director of QResearch
    Shaun O’Hanlon, Clinical & Development Director, EMIS; Director of QResearch

    Competing interests:
    SOH is a director of EMIS – a leading commercial supplier of GP computer systems in the UK. Both authors are directors of QResearch (www.qresearch.org) – a research database and joint not-for-profit partnership between EMIS and the University of Nottingham.
