
Ethical code needed before AI takes over more of doctor role, report warns


Clear ethical standards and guidance are needed for the use of artificial intelligence (AI) in healthcare settings, or there is a risk of damaging trust between doctors and their patients, a report from the Council of Europe has warned.

There are several potential ways in which greater use of AI in health could affect patients’ human rights and the doctor-patient relationship, the report concludes, including inequalities in access to healthcare.

Other problems with AI that need to be considered are transparency to both health professionals and patients, the risk of social bias in AI systems, dilution of the patient’s account of their own health, and the risks of automation bias, de-skilling and displaced liability.

Report author Dr Brent Mittelstadt, director of research at the Oxford Internet Institute, said he hoped the report would make people think about how AI may disrupt the core processes involved in healthcare.

He is concerned that AI could be used as a way to reduce budgets or save costs rather than to improve patient care.

‘If you’re going to introduce new technology into the clinical space, you need to think about how that will be done.

‘Too often it is seen solely as a cost-saving or efficiency exercise, and not one which can radically transform healthcare itself,’ he said.

The report comes as a study found that AI has the potential to relieve pressures on the NHS and its workforce, but ‘frontline healthcare staff will need bespoke and specialised support before they will confidently use it’.

In the study, Health Education England and NHS AI Lab had also said there is a risk that AI could exacerbate cognitive biases and that clinicians may accept an AI recommendation uncritically because of time or other pressures.

The Council of Europe report advises that the benefits of AI remain ‘unproven’ and that its use could undermine the ‘healing relationship’.

‘The doctor-patient relationship is the foundation of “good” medical practice, and yet it is seemingly being transformed into a doctor-patient-AI relationship.

‘The challenge facing AI providers, regulators and policymakers is to set robust standards for this new clinical relationship to ensure that patients’ interests and the moral integrity of medicine as a profession are not fundamentally damaged by the introduction of AI,’ the report concluded.

Dr Mittelstadt also noted that it is not the patient’s vulnerability that is changed by the introduction of AI but the means of care delivery: how care can be provided, and by whom. That, he said, ‘can be disruptive in many ways’.

In addition to the already widely recognised bias in AI systems arising from the data they are trained on, there are also issues around professional standards when AI is used, the report said.

It adds: ‘If AI is used to heavily augment or replace human clinical expertise, its impact on the caring relationship is more difficult to predict.’


Douglas Callow 20 June, 2022 3:20 pm

AI Evolution Needs Humans
“As more and more artificial intelligence is entering into the world, more and more emotional intelligence must enter into leadership.”

Amit Ray, AI scientist and author of Compassionate Artificial Intelligence
Predicting Singularity
“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, and we will have multiplied the intelligence – the human biological machine intelligence of our civilization – a billion-fold.”

Ray Kurzweil, American inventor and futurist.
On AI’s Evolution
“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Stephen Hawking, BBC
“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control.”

Alan Turing
On AI’s Lack Of Emotion
“AI doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens to come in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.”

Elon Musk, technology entrepreneur and investor
Calling For AI Regulation
“The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?”

Gray Scott
On Building A Better World With AI
“We have seen AI providing conversation and comfort to the lonely; we have also seen AI engaging in racial discrimination. Yet the biggest harm that AI is likely to do to individuals in the short term is job displacement, as the amount of work we can automate with AI is vastly larger than before. As leaders, it is incumbent on all of us to make sure we are building a world in which every individual has an opportunity to thrive.”

Andrew Ng, Co-founder and lead of Google Brain
“Robots are not going to replace humans, they are going to make their jobs much more humane. Difficult, demeaning, demanding, dangerous, dull – these are the jobs robots will be taking.”