When AI in healthcare becomes autonomous, who is responsible?

AIs in healthcare are constantly being deployed in new fields. The question now is: will AIs end up replacing the physician?

"AI is not perfect, doctors are not perfect [...] it will make errors"

Dr. Michael Abramoff

Whether one is sceptical or not, it is clear that AI is inexorably making its way into healthcare. Until recently, AIs in healthcare were little more than excellent image analysers. I won't go into detail about their value in imaging, dermatology and pathology.1 To put it simply: in pixel-by-pixel analysis, AI knocks out the human eye in the first round.

AIs in healthcare are constantly being deployed in new fields: clinical diagnosis, prognosis, therapy. Let's face it, just ten years ago we would never have expected so much progress. The question now is: will AIs end up replacing the physician? Faced with medical deserts, it is tempting to use machines to "automate" certain medical and paramedical functions. In China, more than half the population says it is ready to replace its general practitioner with an AI. In Europe, the younger generations, born into a digital world (the famous Generations "Y" and "Z"), will no doubt have fewer scruples about abandoning the good old "family doctor" for the machine.

We are not there yet. AIs have not replaced physicians and will probably never replace them completely. Nevertheless, our profession is likely to change, with a growing number of tasks being entrusted to AIs, some of which they may even perform autonomously. But in case of error, who will be responsible?

AI in healthcare and legal liability: Europe is not ready

2031. Young physicians are in short supply, as the COVID-19 health crisis of a decade ago has broken vocations. This shortage has led most of the so-called "peripheral" hospitals to deploy AIs to make diagnoses in the emergency room. A patient admitted for respiratory distress is diagnosed by the AI with pneumonia... which turns out to be a pulmonary embolism. A few days in the ICU, a big scare, and legitimate anger for the relatives. Who will they sue? The hospital? The company that created the AI? The millions of patients who provided data to this AI? The state?

French laws on the subject are dictated by European legislation:2

"When, for preventive, diagnostic or therapeutic acts, the health professional envisages the use of algorithmic processing, he shall inform the patient in advance and explain to him in an intelligible form how this processing would be implemented with regard to him. Only in cases of urgency and impossibility of information can this be prevented. No medical decision may be taken solely on the basis of algorithmic processing.”3

Europe is clearly in favour of a very controlled use of AI in health, perceived as an aid to physicians. The health professional remains at the heart of the medical and legal system, and in the event of a medical error, liability lies with the human being, not the machine. This may sound reassuring for the patient - and it is - but in the long run I think these directives will hinder the sound development of the European healthcare system. From my point of view as a physician, this law sidesteps the problem and does not answer the main questions. For example: why should I be held accountable in court for acts that I have delegated to an AI, when my hospital has imposed the use of this AI, whose functioning and relevance I do not know precisely?

Another shortcoming is the lack of a quality label for all these AIs. Many start-ups are entering the health AI market, but what are they really worth? What is the quality of their data? With this kind of legislation, it is the user who is penalised, not the manufacturer. Finally, the law does not yet mention AIs that will be used in the absence of a physician. Yet this technology is already deployed in the United States, as we shall see, and could quickly become established in France. When it comes to thinking through the legal aspects of AI in health, Europe is clearly lagging behind.

The CNOM seems more in tune with reality in its recommendations.4 It calls for an examination of the legal regime of liability: that of the physician in their use of decision support, but also that of the designers of algorithms with regard to the reliability of the data used and the methods of their computerised processing. Following the example of the bioethics laws, should we not now draw up a techno-ethical legislative framework?

Autonomous AI: the American example

In the USA, the use of medical diagnostic systems is being boosted by both the federal authorities and insurers. The Food and Drug Administration (FDA) took the plunge in 2018, granting marketing authorisation to the IDx-DR.5

This device, scientifically validated by a clinical trial,6 uses AI to autonomously detect diabetic retinopathy. In just one minute the device captures images of the eye after retinal dilation and sends them to an algorithm-based centre for analysis. If diabetic retinopathy is detected - and only if this is the case, as the AI is a single-tasker - the patient is referred to an ophthalmologist. If not, he or she is rechecked in one year. For the FDA, the aim was clearly to alleviate the lack of specialists in medical deserts.
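To make that decision path concrete, here is a minimal sketch of the triage logic just described; the function and label names are purely illustrative assumptions, not the manufacturer's actual software.

```python
def screen_patient(retinal_images, detector):
    """Illustrative sketch of the autonomous screening path described above.

    `detector` stands in for the remote, algorithm-based analysis centre and is
    assumed to return True when diabetic retinopathy is detected. All names
    here are hypothetical, not the vendor's API.
    """
    if detector(retinal_images):
        # Positive screen: the only case in which a human specialist is involved.
        return "refer to an ophthalmologist"
    # Negative screen: no physician reviews the images; the patient is rechecked in a year.
    return "rescreen in 12 months"
```

The striking point is the second branch: a negative result is never reviewed by a human, which is why the "false negative" discussed below is the scenario that matters most.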

To date, there has been a single case of error: the machine failed to detect a retinal cancer in a patient. However, neither the system nor its designers could be held legally liable, as its function was limited to screening for diabetes complications.

If a mistake is made, whose fault is it?

Even when an AI-based system is autonomous, there are several possible sources of error.7 Is the data used for deep learning reliable and available in sufficient quantity? Is the patient data correctly collected and integrated? Is the algorithm reliable? Is the user interface functional (risk of misuse, faulty display of results, etc.)? In the end, only two types of error are possible: a "false positive" and a "false negative". The first is acceptable, in that the patient will then consult an ophthalmologist who will invalidate the result. But what about the second? It should be remembered that with this system there is no double check.

The trial that allowed the IDx-DR to be marketed involved 900 people with diabetes. The system correctly identified the presence of mild diabetic retinopathy in 87.4% of cases (its sensitivity), and its absence in 89.5% of cases (its specificity). So we are far from 100% reliability, even if the continuous addition of new data should improve these results. And if this error rate seems high, it must of course be compared with that of a "human" physician.
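To give an order of magnitude of what these figures mean at screening scale, here is a minimal back-of-the-envelope sketch; the 87.4% and 89.5% values are those reported above, while the prevalence and cohort size are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope illustration of the trial's figures applied to a
# hypothetical screening cohort. Sensitivity and specificity come from the
# article; prevalence and cohort size are assumed values for the example.

sensitivity = 0.874   # probability the AI flags a patient who has retinopathy
specificity = 0.895   # probability the AI clears a patient who does not

prevalence = 0.25     # assumed share of screened diabetic patients with retinopathy
screened = 1000       # assumed size of the screening cohort

with_disease = screened * prevalence
without_disease = screened - with_disease

false_negatives = with_disease * (1 - sensitivity)      # missed cases, never double-checked
false_positives = without_disease * (1 - specificity)   # needless referrals, caught by the ophthalmologist

print(f"Missed cases per {screened} screened: {false_negatives:.0f}")
print(f"Unnecessary referrals per {screened} screened: {false_positives:.0f}")
```

Under those assumptions, roughly thirty patients per thousand screened would be sent home with an undetected retinopathy: precisely the "false negative" scenario, with no double check, that raises the question of liability.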

Why should a machine be expected to produce better results than a physician? For the latter, a diagnostic error is not necessarily a fault, as long as the diagnosis was made "with the greatest care, devoting the necessary time to it, using as far as possible the most appropriate scientific methods and, if necessary, appropriate assistance".9 An obligation of means, therefore, not of results.

We can see that, depending on the case, responsibility may fall on a whole chain of people, from the designer to the user, via the manufacturer, the service provider and the prescriber. When Dr. Abramoff, designer of the IDx-DR, declared in an interview with the Washington Post8 that "AI is not perfect, doctors are not perfect (...) it will make mistakes", he also specified: "The AI is responsible, and therefore the company is responsible (...) We have malpractice insurance".

One thing is certain: lengthy legal proceedings are foreseeable, fuelled by the illusion of an infallible technology. Conversely, the non-use of these new technologies by a physician or a health establishment could also be considered a loss of chance, and could justify legal action by some patients.

Faced with technologies that remain opaque and a source of fantasies, the key here seems to be the information given to the patient... and to the physician. If patients are properly informed about how the system works and about the possibility of errors, it will be up to them to accept, or not, a technology based on AI. As for physicians, they will very quickly need access to real training on these technologies, whether to explain them to patients or to develop their own critical thinking. Good reforms to implement in medical schools...

On a legal level, debates around AI in healthcare will quickly become legion. The European Union should therefore urgently address the right issues. Who will rule on future disputes related to the use of autonomous AI? Who will be in charge of this kind of necessarily complex and multifactorial expert assessment?

Our "American example" shows the possibilities opened by the thoughtful and appropriate use of an autonomous AI participating in a public health mission. Subject to irrefutable scientific validation, should European laws not authorise the autonomous use of AI in certain very defined situations?

Joris Galland is a specialist in internal medicine. After practising at the Lariboisière hospital (AP-HP), he joined the Bourg-en-Bresse hospital.

(Article written in collaboration with Benoît Blanquart)

References:
1. Zemouri, R., Devalland, C., Valmary-Degano, S., & Zerhouni, N. (2019). Intelligence artificielle : Quel avenir en anatomie pathologique? Annales de Pathologie, 39(2), 119–129.
2. Résolution du Parlement européen du 12 février 2019 sur une politique industrielle européenne globale sur l’intelligence artificielle et la robotique
3. Élargir l'accès aux technologies disponibles sans s'affranchir de nos principes éthiques (Projet de loi sur la bioéthique, adopté par l'Assemblée nationale en nouvelle lecture le 9 juin 2021).
4. CNOM – Médecins et patients dans le monde des data, des algorithmes et de l'intelligence artificielle (2018)
5. IDx-DR
6. van der Heijden, A. A., Abramoff, M. D., Verbraak, F., van Hecke, M. V., Liem, A., & Nijpels, G. (2017). Validation of automated screening for referable diabetic retinopathy with the IDx-DR device in the Hoorn Diabetes Care System. Acta Ophthalmologica, 96(1), 63–68.
7. Laurène Mazeau – «Dispositif de diagnostic médical sans supervision d’un médecin. Explicabilité et responsabilité» Cahiers Droit, Sciences & Technologies (2019) – https://doi.org/10.4000/cdst.1111
8. "Augmenting Human Expertise in Medicine and Healthcare" (2019)
9. Article R 4127-33 du Code de la santé publique