Assoc. Prof. Vaghefi: AI helps predict cardiovascular risks from retina scans

Associate Prof. Vaghefi discusses AI retina scan algorithms, how the retina can reveal other health issues, and how the retinal tech was deployed in rural India.

Predicting heart attacks and strokes from the retina, pixel by pixel

"The retina is the only place where we can literally photograph the cardiovascular system"

Associate Prof. Ehsan Vaghefi

About Associate Prof. Ehsan Vaghefi

Associate Prof. Vaghefi's academic and entrepreneurial career has focused on preventing blindness through accessible and novel technologies. This is because his father lost his eyesight as a child to a preventable but undiagnosed disease. Associate Prof. Vaghefi has been a guide all his life: while other children his age were being helped across the street by their parents, he was helping his blind father across the road.

At present, Associate Prof. Vaghefi is an associate professor of Medical Imaging and Artificial Intelligence in Optometry and Ophthalmology at the University of Auckland. As an academic, he has published more than 45 peer-reviewed journal articles, holds 3 patents, and has been awarded more than NZ$10M in research grants. In 2018, Associate Prof. Vaghefi and Dr. David Squirrell co-founded Toku Eyes, a social enterprise focused on Vaghefi's goal: helping prevent blindness on a large scale. Today, Toku Eyes' AI platform is being used in many clinics across the US, India, Australia, and New Zealand, identifying people at risk of blindness from downtown LA to rural India.

Annabelle Eckert: What can you tell us about the AI retina scan?

AI is essentially a pattern recognition technology, but it is more complex than other technologies we've seen before. Our retina holds a lot of information about us: our eye, our cardiovascular system, our kidneys. All of that is just one photograph away. The cool thing about AI is that it can look at every single pixel in an image and look for a pattern. If you have millions of these retinal images, together with annotations and the data of the patients they belong to, then you can look at the similarities and patterns across these groups and try to come up with a predictive algorithm that can hopefully recognize similar patterns in other retinas.
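To make the idea concrete, the sketch below (not Toku Eyes' actual system) shows the general shape of such a pipeline: a small convolutional network trained to map labelled retinal images to a risk class. The data here are random placeholder tensors standing in for annotated fundus photographs, and the architecture, image size, and labels are purely illustrative assumptions.

```python
# Minimal sketch: training a small CNN to map retinal images to a risk label.
# Illustrative only -- real systems use far larger models and real, annotated data.
import torch
import torch.nn as nn

# Placeholder data: random tensors standing in for 64x64 RGB fundus photographs,
# each paired with a binary label (e.g. "elevated CV risk" vs "not elevated").
images = torch.rand(256, 3, 64, 64)
labels = torch.randint(0, 2, (256,))

model = nn.Sequential(                       # tiny CNN: every pixel feeds the convolutions
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),              # two output classes
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                       # a few passes over the toy data
    for i in range(0, len(images), 32):
        batch_x, batch_y = images[i:i + 32], labels[i:i + 32]
        optimiser.zero_grad()
        loss = loss_fn(model(batch_x), batch_y)
        loss.backward()
        optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```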

It is one of those things that are simple but difficult. People often conflate simple and easy; things that are simple are not necessarily easy. Things can be simple but difficult, and AI is one of those, because it is simple in concept and difficult in implementation. I can get to that later. But we are very lucky to live in a day and age where we have the capacity to extract all these patterns and data out of retinas.

Annabelle Eckert: Why is the retina so good at predicting heart attacks and strokes? Other doctors also want to know why the retina is so well suited to this type of scan.

The retina is the only place where we can literally photograph the cardiovascular system. Anything that affects your CV system will also affect the microvasculature in the eye. AI is good at looking at these patterns and understands the changes in shape and color of the cardiovascular system, whatever their cause, and that cause could be anything affecting the CV risk and CV performance of a person. The good thing about the retina is that it is actually a record of our life: whatever you've done throughout your life is written somewhere in the retina. If you have been on a poor diet, if you haven't been exercising, if you have had any of those issues, it leaves a permanent change in the retina.

Actually, we just published a paper showing that smoking changes your retina, because essentially hypoxia changes the retina. So it's very powerful in that it accumulates all the damage, and all the behavioural and genetic information, into one photograph. I believe that very soon you will see a retinal AI that may be more accurate at predicting your CV risks than your blood tests. Blood tests can fluctuate from day to day and season to season; we know that, and that is why we keep repeating them. The retinal image, on the other hand, does not change, and it does not lie.

Annabelle Eckert: How many training runs did the AI need before it could reliably predict heart attack and stroke?

That's the difficult part. Unfortunately, a lot of the scientific research being published is very limited in its data, in the number of AIs tried, and in how the AI was created. We can say that AI fails silently: AI does not crash, it always gives you an answer. But that answer might be completely irrelevant. It is not like computer code that crashes and gives you a blue screen. So the [reliability building] is still ongoing, and an AI is never complete. It never is. Some research indicated that there is apparently a ceiling on how good an AI can get by just feeding more data into it, beyond which it does not get any better. That is not true.
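The "silent failure" point can be shown with a small sketch: a classifier never crashes on out-of-scope input, it simply returns a confident-looking probability distribution. The model and input below are hypothetical placeholders, not any deployed system.

```python
# Sketch of silent failure: a classifier always "answers", even on irrelevant input.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))  # stand-in classifier
model.eval()

noise = torch.rand(1, 3, 64, 64)             # not a retina at all, just random pixels
with torch.no_grad():
    probs = torch.softmax(model(noise), dim=1)
print(probs)  # a valid-looking probability distribution, despite meaningless input
```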

You can assess an AI's performance using accuracy, and accuracy cannot be improved much by simply adding more data. But that alone does not improve the generalisability of the AI, and generalisability is the key aspect of performance that is missed today. Lacking generalisability means that the AI works on this particular dataset, for these people, but if I take it and put it in another clinic with another cohort of people, it will fail. And if I use it with another camera setting, or another ethnicity, it will fail. So an AI can keep getting better across the board. In our work we have never stopped training our AIs. We keep training them not because the accuracy percentage gets better, but because they become more inclusive and perform well in many different settings: different cameras, different ethnicities, different cohorts.
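One common way to make generalisability visible (a general practice, not necessarily Toku Eyes' pipeline) is to report accuracy separately for each clinic, camera, or cohort rather than on a single pooled test set. The cohort names and data below are hypothetical placeholders.

```python
# Sketch: per-cohort evaluation reveals generalisability gaps that pooled accuracy hides.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))  # stand-in classifier

def accuracy(model, images, labels):
    """Fraction of correct predictions on one evaluation cohort."""
    with torch.no_grad():
        preds = model(images).argmax(dim=1)
    return (preds == labels).float().mean().item()

# Hypothetical evaluation sets, e.g. images from different cameras or clinics.
cohorts = {
    "clinic_A_camera_X": (torch.rand(100, 3, 64, 64), torch.randint(0, 2, (100,))),
    "clinic_B_camera_Y": (torch.rand(100, 3, 64, 64), torch.randint(0, 2, (100,))),
}

for name, (imgs, labs) in cohorts.items():
    print(name, round(accuracy(model, imgs, labs), 3))
```

A model that looks strong on a pooled test set but drops sharply on one of these per-cohort scores is exactly the failure mode described above.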

Annabelle Eckert: What are the possibilities offered by the AI retina scan?

We are currently working on CV diseases, but very recently we got started on kidney diseases. If someone has a kidney imbalance, it shows up in their retina. We know that the retina is extremely sensitive to osmotic imbalances. And guess what? The kidney is the main contributor to the body's entire osmotic balance. The amount of work that can be done with the retina is limitless. I have seen work tackling neurological diseases such as dementia, Parkinson's disease, and Alzheimer's. We are not doing that just yet, but I can see how it would work.

Annabelle Eckert: Are you currently working on other projects, with other diseases, that use the retina scan? And what can we look forward to over the next few years?

Kidney disease is the field we are newest to, but over the next few years we will work on converting our AI into products. That is a step that is often missed. In academic research, I see that you do a project and you get a publication, but it does not move on to become a product, because making a product out of a science project is extremely difficult: generalisability, reliability, and how they affect commercial applications. So, in the next few years, what you will see from us is that we will turn our research into more of a product, one where we make sure it works everywhere, for everyone, under every system.

Annabelle Eckert: What experience have you had with the AI THEIA™ over the last few years?

There are several aspects to that question. First there is the clinical aspect. Right now, I am sitting in Orlando because we just started a project here. Within a single day, the clinicians got used to the AI. By the end of the day, they were running the AI in parallel with their work. They would consult the AI as a second opinion, and then it became a very natural process. And I think that this is key.

The experience with THEIA™ is one that comes naturally to a clinician. It's not intrusive, it's not trying to replace their work, it's not trying to do the work for them. It becomes like a second consultant. When you have a question, the first thing you do is google it to get a second opinion. We have designed THEIA to be that: an assistant. If you are sure of what you are looking for, great; but if you just want a second opinion, it is there, you can look at it and consider it. That is the clinical side, which is extremely easy to understand and work with and becomes part of the practice within a single day.

From an adoption point of view, the more clinics we get into, the more we get referred on. In every clinic they see the product, they like it, and they refer us to more clinics, so it keeps spreading. Another aspect is the social one. I've always believed that technology should give underserved communities access to healthcare that they otherwise wouldn't have. In Western societies AI can create efficiencies, but in middle- and low-income countries AI can provide a service that didn't exist before, so it is actually a life-changing thing in many countries. As of today, our program THEIA is being used in twenty eye clinics in India as the provider. So it does not just assist; it actually fills the role of a doctor. In many countries you do not have enough doctors just to look at the images, so the only way to address that is with technology.

I'm very proud of the work that the team has done, because now, every day, thousands of people in India get served by us, and we are talking about rural India, a very underserved population, people who live on less than US$1 a day. And we are not for profit; we don't care if they cannot pay, we just want to make sure that we give back to the community.
