Should GPs be afraid of artificial intelligence?

A deep contemplation of the NHS long-term workforce plan, in particular the future role of artificial intelligence, has led columnist Dr David Salkin to be at the cutting edge of medicine

I am not artificial. I know this, as I bled after cutting myself on a jagged tin of tuna last week. And I am certainly not intelligent, as I wouldn’t have gripped the lid with my fallible index finger and thumb in the first place.

But I am human, because to err is most definitely human, and it was the second time this month that I made such a dumb mistake. Perhaps that makes me doubly human?

If I am dumb (or at least occasionally so), and if I am capable of repeating the same mistake at least once, does that make me a second-class citizen compared with machine learning? And if so, what will the GMC of the future – the General Machine Council – make of me?

Are we in fact flawed as medical ‘practitioners’, condemned by our very job title, destined always to ‘practise’ and never to make perfect? Unlike our flawless artificial intelligence (AI) colleagues of the near future, that is, who will soon inherit our medical earth?

We are often told that our lives will be transformed by AI. Last month, the NHS long-term workforce plan announced that it had a particular role for it – let’s just say that those of us who thought that ChatGPT meant a telephone chat with the GP may need to read the report more carefully. With talk of deploying robot receptionists, I can’t help but wonder how AI will impact primary care.

I imagine the future Euston Square tour, site of the RCGP Mausoleum. ‘Look children,’ says the robo-teacher to the startled class of 2050. ‘They used to expect humans to deliver healthcare in the early 21st century! No wonder they had such poor health outcomes! Can you imagine, they even used to allow people to do surgery – why, it’s practically inhumane!’

Of course, we won’t be alone. We will be joined in the dole queue by many others, such as the accountants who are unable to count as fast as robots. Lawyers will also swell our ranks, as there will be such a dearth of malpractice cases. Poor devils.

Then there’s the Government – don’t forget them – who will at first be over the moon. No pay disputes, clinicians working 24/7 without a whiff of complaint or burnout, and employment costs slashed to the bare bone. Until realisation sets in, albeit too late, that they themselves are now redundant. It will, at last, bring them down to earth.

And as for the patients themselves? Nearly forgot them, the selfish being that I am. My respiratory rate increases and pupils dilate at the very thought of the future: rapid diagnoses, impeccable management plans, laser precision surgery, a workforce that never tires and never puts a metallic foot wrong.

I start to feel guilty. Am I a 21st-century Luddite, wanting to stand in the way of inevitable progress despite the benefits that I know AI will bring our patients?

Time to microprocess my thoughts.

Human progress is inexorable – sometimes it is gradual, and sometimes it is giant leaps for mankind. Who would really want to stand in its way?

There will, of course, be threats: our lives will be in their artificially intelligent hands, and our livelihoods wrested from our own fallible hands and minds.

But do we really need to worry?

Ask yourself: one day, when your time comes to be a patient, who – or what – do you want to break bad news to you? Who – or what – do you want to share your intimate secrets with? How will your voice in the consultation be recognised for the fear or insecurity that it is really hiding?

Alternatively, ask yourself the John West question: do you want your physician to have at least once, preferably twice, cut their finger on a bloody tin of tuna?

Dr David Salkin is a GP in Leicester

Dylan Summers 11 July, 2023 3:09 pm

The main reason AI isn’t going to take over our jobs is liability. I don’t see software companies willingly accepting liability for medical errors. The role of AI will therefore be advisory – essentially yet another pop-up in SystmOne.