

Artificial intelligence ‘has potential’ to relieve pressures on NHS workforce, study finds

Artificial intelligence (AI) has the potential to relieve pressures on the NHS and its workforce, but ‘frontline healthcare staff will need bespoke and specialised support before they will confidently use it’, a study has found. 

A new report, Understanding healthcare workers’ confidence in AI, published by Health Education England (HEE) and NHS AI Lab this week, recommends educational pathways and materials for all healthcare professionals ‘to equip the workforce to confidently evaluate, adopt and use AI’. 

HEE said the NHS ‘aims to be a world leader in the use of emerging technologies like AI’ to help ‘address the backlog in elective procedures’.

However, HEE found that the vast majority of clinicians are unfamiliar with AI technologies and that ‘there is a risk that, without appropriate training and support, patients will not equally share in the benefits offered by AI’.

Brhmie Balaram, Head of AI Research and Ethics at the NHS AI Lab, warned that ‘we must also be mindful that AI could exacerbate cognitive biases when clinicians are making decisions about diagnosis or treatment’. 

A potential hazard is also that ‘a clinician may accept an AI recommendation uncritically, potentially due to time pressure or under-confidence in the clinical task,’ HEE warned. 

Ms Balaram said: ‘AI has the potential to relieve pressures on the NHS and its workforce; yet, we must also be mindful that AI could exacerbate cognitive biases when clinicians are making decisions about diagnosis or treatment. It is imperative that the health and care workforce are adequately supported to safely and effectively use these technologies through training and education. 

‘However, the onus isn’t only on clinicians to upskill; it’s important the NHS can reassure the workforce that these systems can be trusted by ensuring we have a culture that supports staff to adopt innovative technologies, as well as appropriate regulation in place.’ 

The report argues that how AI is governed and rolled out in healthcare settings can affect the trustworthiness of these technologies and confidence in their use, including factors such as ‘clear nationally driven regulation and standards’.

A second report from this research, to be published later this year, will outline suggested pathways for related education and training. 

The report and partnership with HEE are part of the NHS AI Lab’s AI Ethics Initiative, which was introduced to support research and practical interventions that can strengthen the ethical adoption of AI in health and care. 

In 2020, scientists used artificial intelligence to discover a new antibiotic that kills treatment-resistant bacteria. 

READERS' COMMENTS [2]

Dylan Summers 2 June, 2022 9:47 am

I’m no luddite but I find it hard to believe that AI is going to have much role in general practice.

I understand that AI has had good results in interpreting radiology images. This is a process where 1) no decision needs to be made about what data to include (the whole image is processed) and 2) complex narrative and psychosocial factors are excluded.

Compare that to a consultation about Mrs Smith’s 20-year history of abdominal pain, extensively investigated but worse since her recent move to a care home. What data from that narrative could you usefully feed into an AI algorithm?

Dylan Summers 2 June, 2022 10:04 am

(In fact we’re using AI in our “klinik” online triage platform – all requests get scrutinised by a human, but the AI prioritises some as needing human scrutiny more urgently than others. Moderately useful but hardly a revolution in general practice.)