

Are GPs’ jobs safe from artificial intelligence?

This week my attention was caught by this interesting Pulse article – ‘Artificial intelligence to replace call handlers in NHS 111 app’.

Around 1.2 million patients in North London are to be offered assessment and advice via an artificially intelligent chatbot using an app provided by Babylon, a private firm that already offers private video chat GP appointments for £25.

This news was met with sensible calls from medical and patient groups to ensure that the technology does not put patients at risk or overwhelm A&E and GP surgeries through inappropriate advice.

However, the news did get me wondering whether we are on the cusp of a technological revolution in patient care and, if so, what this means for doctors.

Imagine what may (nearly) be possible…

  • The patient is able to open an app on their phone 24/7 and immediately talk to an AI chatbot doctor.
  • They receive effective advice about how to manage their problem.
  • Those who are not unwell are empowered to manage their own treatment.
  • Those in genuine need of medical assessment are directed to GPs and A&E departments where they can receive prompt treatment by staff no longer overwhelmed by other patients who don’t need to be there.

All this without expensive and troublesome staff in the loop. Capacity is limitless. It is cheap, freeing up resources to be spent elsewhere in the health service.

This sounds great! But should doctors start worrying about computers pushing them out of the workforce?

AI in general practice

Professions like medicine usually feel secure in the face of technological change, but they could have cause to worry.

Artificial intelligence (AI) systems from Babylon and its competitors will be able to clock up the equivalent of many doctors’ career lifetimes’ worth of consultations in a short period of time and use these data, perhaps cross-referenced with medical records and outcomes, to refine and improve their performance.

There is some comfort, however. The current indication is that for complex activities, such as playing chess or Go, AI working alongside human experts outperforms either humans or AI working alone. Humans and machines are thought to have complementary skill sets.

But maybe AI needn’t work with GPs. Would a nurse or physician associate working with AI decision support do the job just as well as a GP?

Perhaps, but is the technology really ready?

I think that Babylon itself has some way to go with its AI chatbot. I’m sure they will deliver a good product in the end, but some (very rudimentary) testing I performed, presenting it with symptoms suggestive of typical GP problems, produced the following outcomes:

  • IBS – See GP in two weeks
  • Conjunctivitis – Error message
  • Mild gastroenteritis – Go to A&E
  • Simple soft tissue knee injury – Go to A&E

Any student who has sat through a morning clinic will know that there is an enormous, complicated and nuanced level of human interaction within a medical consultation. Familiarity, relatability and trust are important and also difficult to manufacture. It is hard to see how an AI will be able to pick up on subtle clues to domestic violence, for example.

There are other questions to consider too: for example, who is to blame when things go wrong?

There is a huge legal question mark over who is responsible for decisions taken by AI. If a self-driving car causes an accident, who is responsible? The car owner, the manufacturer, the software provider? Until medical AIs can take legal responsibility for their decisions and mistakes, they will struggle to displace doctors. In medicine, someone needs to manage and be responsible for risk.

Currently, most medical systems claiming to use AI manage risk by avoiding decisions or provision of advice where the risk is higher. Instead, they either keep a human clinician in the loop to take responsibility or have a low threshold for directing patients to seek further assessment. To make a real impact on managing demand, AIs will need to learn to make tough calls.

Dr Andrew Foster is a GP in Nottingham. This piece was originally posted on his blog www.avoidingpuddles.com. You can follow him on Twitter @drawfoster

Dr Mobasher Butt, chief medical director of Babylon Health, responded:

I thought it would be most useful to internally test the cases that you gave above. Interestingly, this has given some different outcomes from the ones that you received. In summary, we believe all to be safe and accurate advice as follows:

  • IBS – routine GP (same)
  • Conjunctivitis – GP today
  • Mild Gastroenteritis – manage at home
  • Simple soft tissue knee injury – manage at home

The symptom checker was developed and validated by over 200 clinicians and we believe it to be one of the safest and most accurate symptom checkers of its kind globally. Our pre-launch research found it to be 17% more accurate than the senior, experienced emergency nurses and 13% more accurate than the junior doctors we tested the symptom checker against. We fully acknowledge the role for human interaction in care delivery and feel the use of AI helps to significantly augment the human touch by freeing up precious time and resources to do so.