Clear ethical standards and guidance are needed for the use of artificial intelligence (AI) in healthcare settings, or there is a risk of damaging trust between doctors and their patients, a report from the Council of Europe has warned.
There are several potential ways in which greater use of AI in health could impact patients’ human rights and the doctor-patient relationship, the report concludes, including inequalities in access to healthcare.
Other problems with AI that need to be considered are transparency to both health professionals and patients, the risk of social bias in AI systems, dilution of the patient’s account of their own health, and the risks of automation bias, de-skilling, and displaced liability.
Report author Dr Brent Mittelstadt, director of research at the Oxford Internet Institute, said he hoped it would make people think about how AI may disrupt the core processes involved in healthcare.
He has concerns that it could be used as a way to reduce budgets or save costs rather than improve patient care.
‘If you’re going to introduce new technology into the clinical space, you need to think about how that will be done.
‘Too often it is seen solely as a cost-saving or efficiency exercise, and not one which can radically transform healthcare itself,’ he said.
The report comes as a study found that AI has the potential to relieve pressures on the NHS and its workforce, but ‘frontline healthcare staff will need bespoke and specialised support before they will confidently use it’.
In the study, Health Education England and the NHS AI Lab also said there is a risk that AI could exacerbate cognitive biases, and that clinicians may accept an AI recommendation uncritically because of time or other pressures.
The Council of Europe report advises that the use of AI remains ‘unproven’ and could undermine the ‘healing relationship’.
‘The doctor-patient relationship is the foundation of “good” medical practice, and yet it is seemingly being transformed into a doctor-patient-AI relationship.
‘The challenge facing AI providers, regulators and policymakers is to set robust standards for this new clinical relationship, to ensure patients’ interests and the moral integrity of medicine as a profession are not fundamentally damaged by the introduction of AI,’ the report concluded.
Dr Mittelstadt also noted that it is not the patient’s vulnerability that is changed by the introduction of AI but the means of care delivery, how it can be provided, and by whom, and that this ‘can be disruptive in many ways’.
In addition to the already widely recognised bias in AI systems stemming from the data they are trained on, there are also issues around professional standards when AI is used, the report said.
It adds: ‘If AI is used to heavily augment or replace human clinical expertise, its impact on the caring relationship is more difficult to predict.’