
Are GPs’ jobs safe from artificial intelligence?

Letter from Dr Andrew Foster

This week my attention was caught by this interesting Pulse article – ‘Artificial intelligence to replace call handlers in NHS 111 app’.

Around 1.2 million patients in North London are to be offered assessment and advice via an artificially intelligent chatbot using an app provided by Babylon, a private firm that already offers private video chat GP appointments for £25.

This news was met with sensible calls from medical and patient groups to ensure that the technology does not put patients at risk or overwhelm A&E and GP surgeries through inappropriate advice.

However, the news did get me wondering: are we on the cusp of a technological revolution in patient care, and if so, what does this mean for doctors?

Imagine what may (nearly) be possible…

  • The patient is able to open an app on their phone 24/7 and immediately talk to an AI chatbot doctor.
  • They receive effective advice about how to manage their problem.
  • Those not unwell are empowered to manage their own treatment.
  • Those in genuine need of medical assessment are directed to GPs and A&E departments where they can receive prompt treatment by staff no longer overwhelmed by other patients who don’t need to be there.

All this without expensive and troublesome staff in the loop. Capacity is limitless. It is cheap, freeing up resources to be spent elsewhere in the health service.

This sounds great! But should doctors start worrying about computers pushing them out of the workforce?

AI in general practice

Professions like medicine usually feel secure in the face of technological change, but they could have cause to worry.

Artificial intelligence (AI) systems from Babylon and its competitors will be able to clock up the equivalent of many doctors’ careers’ worth of consultations in a short period of time and use these data, perhaps cross-referenced with medical records and outcomes, to refine and improve their performance.

There is some comfort, however. Currently the indication is that for complex activities, such as playing chess or Go, AI working alongside human experts outperforms either humans or AI working alone. Humans and machines are thought to have complementary skill sets.

But maybe AI needn’t work with GPs. Would a nurse or physician associate working with AI decision support do the job just as well as a GP?

Perhaps, but is the technology really ready?

I think that Babylon itself has some way to go with its AI chatbot. I’m sure they will deliver a good product in the end, but some (very rudimentary) testing I performed by presenting it with symptoms suggestive of typical GP problems produced the following outcomes:

  • IBS – See GP in two weeks
  • Conjunctivitis – Error message
  • Mild gastroenteritis – Go to A&E
  • Simple soft tissue knee injury – Go to A&E

Any student who has sat through a morning clinic will know that there is an enormous, complicated and nuanced level of human interaction within a medical consultation. Familiarity, relatability and trust are important and also difficult to manufacture. It is hard to see how an AI will be able to pick up on subtle clues to domestic violence, for example.

There are other questions to consider too: for example, who is to blame when things go wrong?

There is a huge legal question mark over who is responsible for decisions taken by AI. If a self-driving car causes an accident, who is responsible: the car owner, the manufacturer, the software provider? Until medical AIs can take legal responsibility for their decisions and mistakes, they will struggle to displace doctors. In medicine, someone needs to manage and be responsible for risk.

Currently, most medical systems claiming to use AI manage risk by avoiding decisions or the provision of advice where risk is higher. Instead, they either keep a human clinician in the loop to take responsibility or have a low threshold for directing patients to seek further assessment. To make a real impact on managing demand, AIs will need to learn to make tough calls.

Dr Andrew Foster is a GP in Nottingham. This piece was originally posted on his blog www.avoidingpuddles.com. You can follow him on Twitter @drawfoster

Dr Mobasher Butt, chief medical director of Babylon Health, responded:

I thought it would be most useful to internally test the cases that you gave above. Interestingly, this gave some different outcomes from the ones that you received. In summary, we believe all to be safe and accurate advice, as follows:

  • IBS – routine GP (same)
  • Conjunctivitis – GP today
  • Mild Gastroenteritis – manage at home
  • Simple soft tissue knee injury – manage at home

The symptom checker was developed and validated by over 200 clinicians and we believe it to be one of the safest and most accurate symptom checkers of its kind globally. Our pre-launch research found it to be 17% more accurate than the senior, experienced emergency nurses and 13% more accurate than the junior doctors we tested the symptom checker against. We fully acknowledge the role for human interaction in care delivery and feel the use of AI significantly augments the human touch by freeing up precious time and resources to do so.

 


Readers' comments (6)

  • I think augmented humans are the answer, and indeed we already are augmented.

  • The problem is that patients often have:

    Limited cognitive function
    Misunderstanding of symptoms, e.g. TATT, vague pains, confusing SOB with chest pain, etc.
    Diagnosis, even by an experienced GP, over the phone or face to face can still be difficult
    Lack of personal knowledge of the patient
    Patients prefer to be reassured by a 'clinician' that they trust, rather than an automated 'bot'
    Bots resorting to low thresholds for referral for safety reasons

    'Instant Medical History' is the best software for obtaining a very detailed medical history and could be valuable for doing 'just this'

  • The main problem with robots is they don't buy stuff or go shopping.

  • *

    If you think GP is simply a matter of identifying a diagnosis and implementing the appropriate protocol then AI is obviously going to be big news sooner or later. What better way to make use of all the hard work NICE has been doIng in writing down exactly what is required to fix everybody in the correct way.

    Great if every patient is an expert in data input and can type nice and quick and/or speak in machine codable English.

    Problem is many people don't know what's up with them, plenty of folk lie and game you to get what they want, or lie to themselves to avoid the truth. People joke and cry and shit themselves too; some folk barely say a word. I bet that isn't easy to code.

    It'll be cheap though, so if the NHS is still around it will naturally make massive use of it. Will it work? I think not, not for a long time. But it'll be introduced way before it works well. Heck, aren't they using it at 111 now? Since when did something have to actually work before some jerk politician insisted it was introduced countrywide immediately? If it's cheap and even works a third of the time, they'll use it.

  • AI will help, no question about that. Like in all things, progress is science based and AI will come along and sit beside physicians and diagnose and suggest and make things easier.
    This is certain, like cockpits and automatic pilots. But we still need the human for some time yet.

  • AI robot is all well and good, but what about asking about gambling habits? And screening for radicalisation? And help with boilers?

    Sorry, I just don't see that we will ever have a world where the phrases 'AI robots are ideally placed to...' and 'You need to go and see your AI robot' take off.
