Is the misuse of artificial intelligence dangerous for our health?

Dr. Joris Galland explores ways in which physicians and AI can work together and complement each other in the future of healthcare.

The title of this article may come as a surprise, but I stand by it completely. For almost a year now, I have been telling you about the disruptive benefits of artificial intelligence (AI) in healthcare: its added value in patient care, in physicians' daily work and in the organisation of private practices and hospitals. But I am not an AI salesman. As with any technology, I must also point out its limitations. No, AIs are not perfect, far from it!

About the author: Joris Galland is a specialist in internal medicine. After working at the Lariboisière Hospital (AP-HP), he joined the Bourg-en-Bresse Hospital. Produced in cooperation with our partners at esanum.fr.

“Strong" AIs will have to find a new way

We have already mentioned it (see Medicine and AI: Theory and Practice): "weak" AIs are single-taskers, unlike "strong" AIs, which aspire to versatility but remain, for the moment, the stuff of science fiction. Who knows if they will ever exist?

Beyond strong AI, people also talk about "artificial consciousness" and even the "singularity". The singularity is that imaginary frontier where artificial intelligence would become stronger than human intelligence (machines could become uncontrollable: did you say "dangerous"?). Elon Musk predicts the advent of the singularity within a few years, during this 21st century. Rest assured: Elon Musk likes to show off, and the singularity is not for tomorrow. Will it ever happen, for that matter? In fact, despite the technological advances of recent years, we can expect Moore's law to break down. Remember (see Medicine and AI: Theory and Practice): this law predicted that the power of microprocessors would double roughly every 18 months at no additional cost, a pace that no longer seems achievable.
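To get a sense of the scale involved, here is a back-of-the-envelope sketch (in Python, with purely illustrative figures) of what uninterrupted doubling every 18 months would imply. The point is the exponential pace itself, which is exactly what no longer seems sustainable:

```python
# Purely illustrative arithmetic: projected processing power under
# Moore's law, assuming doubling every 18 months. Not a forecast.

def moores_law_factor(months: float, doubling_period: float = 18.0) -> float:
    """Growth factor after `months` of sustained doubling."""
    return 2 ** (months / doubling_period)

for years in (5, 10, 20):
    print(f"{years:2d} years -> power multiplied by {moores_law_factor(years * 12):,.0f}")
# Sustained for 10 years, the law implies a roughly 100-fold increase.
```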

We do not even know whether strong AIs will use deep learning technology, which itself depends on the tripod of artificial neurons, big data and computing power. What is the impact for medicine? If patients are to be treated by strong AIs, these technologies will have to be built on a more solid foundation than today's weak AIs.

Rule 1: Always remember that AIs are single-tasking

We now use weak AIs routinely in the hospital, and we speak of a "brain prosthesis" for the physician or a "virtual assistant". This is true, but we still need to know how to use them and not overestimate them. Laurent Alexandre, co-founder of Doctissimo and AI expert, used to call them "totally unintelligent".1 It is their monovalent character that makes AIs so talented and yet so stupid! Laurent Alexandre supports this idea: "AI will quickly compete with radiologists, but, paradoxically, cannot compete with a general practitioner", because a general practitioner is versatile and can move quickly from balancing diabetes in an elderly person, to prescribing antibiotics for an ear infection in an infant, to preventing STDs in adolescents. Three different AIs would be needed to accomplish these tasks; one human brain is enough. The biological neuron is therefore not obsolete.

The monovalent nature of AIs can also make them dangerous. Remember the example of the AI IDx-DR, a remarkable diagnostician of diabetic retinopathy from a single reading of the eye fundus, but one that failed to detect an obvious cancerous retinal lesion (see: When AI in health becomes autonomous, who is responsible?).

Rule 2: AI is dependent on humans

Now imagine the opposite scenario. A patient has a routine CT scan, and the AI detects a lung tumour. The human physician would have prescribed simple monitoring, but the AI predicts a high risk of malignancy. The tumour lies so deep in the parenchyma that a thoracotomy is required. How will the patient react if the tumour turns out to be benign?

AIs are single-taskers, and it is far too early to use them without the informed eye of the practitioner. For Jean-Michel Besnier, professor of philosophy and head of the "connected health, augmented human" research centre at the CNRS (Centre national de la recherche scientifique, French National Centre for Scientific Research), a partnership between AI and humans will be needed:

"It is AI that will recognise the most subtle forms. It is AI that will reduce misinterpretations, false positives or false negatives. It is AI that will guide surgical robotics in decoding and constructing images. Basically, the absence of the human would be the Achilles heel of radiology, because it is the human that still prevents the machine from being all-powerful".2

The combination of humans and AI can in fact do better than AI alone. According to a recent study,3 an AI performed automated breast cancer detection with a success rate of 92%, close to that achieved by a team of specialists (96%). But when the physicians' analyses and the AI's diagnostic methods were combined, the success rate rose to 99.5%.
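A little arithmetic shows why such a combination can help. The sketch below is illustrative only: it assumes a double-reading rule in which a case is misclassified only when both readers err, and that their errors are independent, a strong simplification of the actual study design:

```python
# Illustrative only: combining two imperfect, partly independent readers.
# Assumes a case is misclassified only if BOTH the AI and the physician
# err on it, and that their errors are independent (a strong simplification).

ai_error = 1 - 0.92      # 8% error rate for the AI alone
human_error = 1 - 0.96   # 4% error rate for the specialists alone

combined_error = ai_error * human_error  # both must fail on the same case
print(f"combined error:    {combined_error:.2%}")      # 0.32%
print(f"combined accuracy: {1 - combined_error:.2%}")  # 99.68%
```

Under these idealised assumptions the combined accuracy lands near 99.7%, in the same range as the 99.5% reported by the study; in practice human and machine errors are partly correlated, so the real gain is smaller.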

In other words, the second opinion will not require a second practitioner, but a machine. Diagnosis will improve without increasing the human cost. For Hosny et al.,4 the role of the radiologist is bound to expand as tools improve and radiologists gain access to ever more advanced technologies. Radiologists will also become indispensable in training AIs, monitoring their effectiveness and overseeing their results.

Rule 3: Big data quality is essential to avoid algorithmic bias

AI cannot be programmed; it must be trained. And it is essential that the initial data be both plentiful and reliable. For an AI to recognise the image of a baby, it needs to be given a database of thousands of baby photos. If these data are not reliable, neither is the algorithm.
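Here is a minimal sketch of that point, using synthetic data and scikit-learn (the dataset and figures are invented for illustration): the same learning algorithm, trained once on clean labels and once on a partially mislabeled database, yields a visibly less reliable model:

```python
# Sketch: unreliable training labels degrade the trained model.
# Synthetic data; 40% of positive training labels are wrongly recorded
# as negative, mimicking a poorly curated database.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # ground truth: a simple rule
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

noisy = y_tr.copy()
flip = (noisy == 1) & (rng.random(len(noisy)) < 0.40)
noisy[flip] = 0  # mislabel 40% of the positive cases

for name, labels in [("clean labels    ", y_tr), ("corrupted labels", noisy)]:
    model = LogisticRegression(max_iter=1000).fit(X_tr, labels)
    pred = model.predict(X_te)
    print(f"{name} -> accuracy {model.score(X_te, y_te):.2f}, "
          f"recall {recall_score(y_te, pred):.2f}")
```

Trained on the corrupted labels, the model misses far more true positives; nothing in the code changed, only the quality of the data.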

A good example is the AI "Tay", created by Microsoft.5 The US company had developed a chatbot that could talk to Twitter users. The revolutionary idea was a flop: eight hours after its launch, Tay was disconnected because it was sending misogynistic and racist messages. Why? Because users pushed it to do so by asking it to repeat offensive language. But that is not the only explanation, and it did not stop there: when answering a question, Tay denied the existence of the Holocaust.

Another AI, used by Amazon as a recruitment aid,6 gave candidates a score from 1 to 5 according to their supposed qualities. This is a standard evaluation process at Amazon... except that the AI turned out to be sexist. Female candidates for technical positions were systematically given lower scores, because the AI had been trained on the profiles of the Amazon employees who already held these positions. As these employees were mainly male, the AI deduced that a male profile was more suitable.

To avoid such algorithmic biases, the starting data for these AIs must be neutral. For AIs in medicine, they must above all be accurate and representative. If an AI capable of diagnosing myocardial infarctions by reading ECGs is "trained" only on ECGs of young people without other cardiac disorders, it will not be reliable: either it will wrongly diagnose infarctions in elderly patients with other pathologies, or it will miss an atypical infarction in a young patient.
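One practical safeguard, sketched below with invented field names and thresholds, is to audit subgroup coverage before training, so that a population the model will face in practice (elderly patients, for instance) is not missing from the data:

```python
# Hypothetical pre-training sanity check: flag patient subgroups that are
# absent or underrepresented in the training data. Field names, groups
# and the 10% threshold are invented for illustration.
from collections import Counter

def audit_subgroups(records, key, expected_groups, min_fraction=0.10):
    """Warn about expected subgroups below `min_fraction` of the data."""
    counts = Counter(r[key] for r in records)
    total = len(records)
    for group in expected_groups:
        fraction = counts.get(group, 0) / total
        if fraction < min_fraction:
            print(f"WARNING: subgroup {group!r} is only {fraction:.1%} of the training data")

# Toy dataset made up almost entirely of young patients, as in the ECG example.
training_set = [{"age_group": "18-40"}] * 950 + [{"age_group": "65+"}] * 50
audit_subgroups(training_set, "age_group", ["18-40", "40-65", "65+"])
```

Such a check does not fix a biased database, but it makes the bias visible before the model is deployed.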

Legislation is progressing: as with drugs, the development and use of AIs will have to go through internal validation, clinical trials and certification before they can be approved for the market. AI, as both therapist and treatment, must prove not only its added value but also its reliability.

References:
1. Laurent Alexandre. La guerre des intelligences - Comment l'intelligence artificielle va révolutionner l'éducation. Ed. JC Lattès (2017) (English title: The war of intelligences - How artificial intelligence will revolutionise education)
2. Guitta Pessis-Pasternak, «L'intelligence artificielle nous rend-elle superficiels?». Libération (1995) (English title: "Does artificial intelligence make us superficial?")
3. Wang D, Khosla A, Gargeya R, Irshad H, Beck AH. "Deep Learning for Identifying Metastatic Breast Cancer". arXiv (2016)
4. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. "Artificial intelligence in radiology". Nat Rev Cancer (2018)
5. Morgane Tual. «A peine lancée, une intelligence artificielle de Microsoft dérape sur Twitter». Le Monde (2016) (English title: Barely launched, an artificial intelligence from Microsoft slips up on Twitter)
6. Reuters. Amazon scraps secret AI recruiting tool that showed bias against women (2018)