Medicine and AI: Theory and Practice (1/2)

In three days, an artificial intelligence learned how to beat the best Go players. Is this the endgame for our biological brains? No, because this AI remains "weak". But what is artificial intelligence?

Dr. Joris Galland breaks down the implications of artificial intelligence for medicine.

About the author: Dr. Joris Galland is a specialist in internal medicine. After practicing at the Lariboisière Hospital (Paris, France), he joined the Bourg-en-Bresse Hospital (France). Passionate about new technologies, he explains what is at stake for the future of AI in medical research, practice, and policy.

Translated from the original French version.

The victories of IBM's Deep Blue over Garry Kasparov (1997) and of DeepMind's AlphaGo1 (2016) over the world's best Go players - a game whose complex rules and vast number of possible combinations seemed to give humans a considerable advantage - have rekindled the specter of computer intelligence dethroning the biological brain.2

This evolution is recent, yet AI has long generated fears. Whereas in the 19th century industry saw the dawn of machines replacing workers, AI is now perceived as capable of replacing humans in "intellectual" tasks.

Medicine is no exception to the rule. There seems to be some confusion about AI's current (exponential) development and its integration into our clinical practice. But are you really familiar with AI? Who among you could give a precise definition of it? Is it really a threat to physicians?

What is AI?

AI is not taught in medical school, and we know that ignorance leads to fear, confusion, and misinformation. Definitions are therefore an essential starting point.

Historically, it was the mathematician John McCarthy and the cognitive scientist Marvin Minsky who coined the term "artificial intelligence" in 1956. Minsky defined AI as "the construction of computer programs that engage in tasks that are more satisfactorily performed by human beings because they require high-level mental processes such as perceptual learning, memory organization, and critical reasoning". AI thus refers to a set of technologies, born in the 1950s, based on algorithms.

An "algorithm" is a mathematical concept dating from the 9th century; the word derives from the Latinization of the name of the mathematician Al-Khawarizmi. An algorithm is a finite sequence of operations or instructions which, starting from inputs, makes it possible to solve a problem and produce outputs. Algorithms are omnipresent in our daily lives: for example, cooking a dish (the problem) from different ingredients and utensils (the inputs) by following a recipe is an algorithm.

Schematically, an algorithm allows a computer to respond to the problems we submit to it. Thus, when translating a text (problem), the translation algorithm receives inputs (the text to be translated) and provides outputs (the translated text).
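To make this input/output idea concrete, here is a minimal sketch in Python. The word-for-word "translation" and the tiny dictionary are purely illustrative inventions; a real translation algorithm is far more sophisticated.

```python
# Minimal illustration of the input -> algorithm -> output idea.
# The word-for-word "translation" below is purely hypothetical and
# far simpler than any real translation system.

def translate(text: str, dictionary: dict[str, str]) -> str:
    """Input: a text and a bilingual word list. Output: the translated text."""
    words = text.lower().split()
    return " ".join(dictionary.get(word, word) for word in words)

french_to_english = {"bonjour": "hello", "docteur": "doctor"}
print(translate("Bonjour docteur", french_to_english))  # -> "hello doctor"
```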

AI as we know it today was born... out of failures of all kinds since the 1960s. It is now accepted, however, that one of AI's greatest successes is machine learning. For years, humans tried to program computers by copying human reasoning: that approach failed. Today's AIs learn, improve, and train themselves; we no longer program them step by step: they create their own algorithms. To do this, they need three ingredients, explained next.
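Before turning to these ingredients, a minimal contrast between explicit programming and learning from data may help. The data and the underlying rule below (roughly y = 2x + 1) are invented purely for illustration.

```python
# Contrast between explicit programming and machine learning,
# using ordinary least squares to "learn" a rule from examples.

import numpy as np

# Traditional programming: a human writes the rule explicitly.
def programmed_rule(x):
    return 2 * x + 1

# Machine learning: the rule is estimated from example data instead.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])       # noisy observations of 2x + 1
slope, intercept = np.polyfit(x, y, deg=1)    # parameters learned from data

print(programmed_rule(10))                    # 21, from the hand-written rule
print(slope * 10 + intercept)                 # ~21, from the learned rule
```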

Computing power

This power comes from the computer's processor. In 1965, Gordon Moore, co-founder of the Intel Corporation, stated that the number of transistors in the processors of a silicon chip (a reflection of a computer's computing power) would double, at constant prices, every 18 months.3 As of 2019, this "Moore's law" still held. Indeed, a computer's "super calculations" are performed by the microprocessor (CPU, central processing unit), supported by the machine's RAM.

The greater the number of transistors, the greater the computing power of the CPU; and the smaller the transistor, the faster the signal conduction and the more transistors fit in a CPU. In the 1970s, a CPU contained between 3,000 and 30,000 micrometer-sized transistors and allowed 1 million operations per second (one megaflop). In 2010, a commercial Intel™ processor had one billion 14 nm (nanometre) transistors. But Moore's law is in danger of reaching its limits: the transistor will soon reach the size of an atom, and current technologies do not allow the manufacture of "intra-atomic" transistors. Unless quantum computing delivers on its promises...
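As a back-of-the-envelope illustration of what "doubling every 18 months" implies, the short sketch below computes the compounded growth factor over a few time spans; the arithmetic is purely illustrative and not tied to any particular chip.

```python
# Back-of-the-envelope arithmetic for "doubling every 18 months".
# The figures are illustrative, not measured transistor counts.

def growth_factor(years: float, doubling_months: float = 18.0) -> float:
    """Multiplication factor after `years` of doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

for years in (3, 10, 30):
    print(f"after {years} years: x{growth_factor(years):,.0f}")
# Roughly 4x after 3 years, ~100x after 10 years, ~1,000,000x after 30 years.
```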

Data, lots of data

We are talking about big data. While a baby's brain will recognize a kitten after seeing two or three of them, an AI will have to see millions of pictures of kittens. Only the big tech multinationals hold that much data. Siri, Apple's AI, can recognize faces and animals in the photos on an iPhone, but it was only after billions of photos had been collected on its servers that it achieved this feat.

This is where the American GAFAMI (Google, Apple, Facebook, Amazon, Microsoft, Intel, also known as FAAMG or Big Tech) and the Chinese BATX (Baidu, Alibaba, Tencent, Xiaomi) come in. These companies collect more data than anyone else in the world thanks to their users, and they use that data (for example, a simple Facebook post) to improve their AIs. This data is essential: the more varied the experiences a system accumulates, the more efficient it becomes.

Deep learning

This technology uses multi-layered artificial neural networks and gives AIs the revolutionary ability to learn (deep learning is a subcategory of machine learning). The multi-layer system is directly inspired by the human brain. In computer science, artificial neural networks take the opposite approach to traditional problem-solving: one no longer builds a program step by step from an explicit understanding of the problem. Instead, the networks adjust themselves autonomously as they learn, without human intervention.

Each layer of artificial neurons corresponds to one aspect of data processing (for image recognition, for example, each layer handles a particular aspect of the image). At each layer, the "wrong" answers are measured and sent back to the upstream layers to adjust the mathematical model (a process referred to as "backpropagation").
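A minimal sketch of this idea is shown below: a two-layer network trained by backpropagation, using only NumPy. The task (learning the XOR function) and all parameter choices are illustrative and far removed from the scale of real deep-learning systems.

```python
# Tiny two-layer neural network trained by backpropagation on XOR.
# Purely illustrative: real deep networks have many more layers and data.

import numpy as np

rng = np.random.default_rng(0)

# Inputs and targets for the XOR problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and biases for two layers.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10000):
    # Forward pass: each layer transforms the previous layer's output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass ("backpropagation"): the output error is sent back
    # through the layers to adjust the weights and biases.
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_out
    b2 -= learning_rate * grad_out.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_hid
    b1 -= learning_rate * grad_hid.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # should approach [[0], [1], [1], [0]] if training converges
```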

For example, AlphaGo Zero, the successor to AlphaGo (the first AI to defeat Go champions), learned to play by playing games against itself (unsupervised learning), unlike its predecessor, which learned the game of Go by observing human games (supervised learning). In the end, AlphaGo Zero reached a level worthy of the best human players in only three days.2

The problem with machine learning is that the AI can create an algorithm that is completely incomprehensible to humans. This is what is known as the "black box" of AI, and it raises major ethical questions. How can we trust something we cannot explain?

Towards a "strong" AI?

In spite of all these advances, it must be kept in mind that current technology only allows the creation of single-task AI programs. This is why we speak of "weak" AI. But the arrival of so-called "strong" AIs, capable of performing several tasks, could lead to a "singularity", i.e. an AI surpassing humans and endowed with artificial consciousness. No one knows if or when this strong AI will see the light of day; some experts estimate it could happen between 2030 and 2100, if at all.

After this overview, what can AI bring to medicine? Let's tackle this issue in part II of this article.

References:
1. A company belonging to Google®
2. Silver D, Schrittwieser J, Simonyan K, Antonoglou I, Huang A, Guez A, et al. Mastering the game of Go without human knowledge. Nature. 2017;550(7676):354–9.
3. Moore's Law. In Wikipedia [Internet]. 2020 [cited 2020 Sep 22].