ChatGPT: can it provide good medical advice?

In a study, ChatGPT surpassed doctors in providing empathetic answers to patients' questions. Doctors collaborating with such technologies could revolutionise medicine.

ChatGPT answers patients' questions

Much has been said in recent months about how advances in artificial intelligence, particularly systems such as ChatGPT, could be used in medicine.

A new study published in JAMA Internal Medicine, led by Dr. John W. Ayers of the Qualcomm Institute at the University of California San Diego, offers a first look at the role that large language models (LLMs) could play in medicine. ChatGPT has already been shown to be capable of passing a medical licensing exam, but directly answering patients' questions accurately and empathetically is another matter.

The study compared doctors' written answers and ChatGPT's answers to real health questions. A group of healthcare professionals preferred ChatGPT's answers 79% of the time, judging them to be of higher quality and more empathetic.

ChatGPT ready for healthcare?

In the new study, the research team tried to answer the question: is ChatGPT able to accurately answer the questions patients send to their doctors? If so, artificial intelligence models could be integrated into healthcare systems to improve doctors' answers to questions sent by patients and ease the ever-increasing burden on them.

The COVID-19 pandemic has accelerated the adoption of telemedicine systems. While this has made access to care easier for patients, doctors are now burdened by an avalanche of electronic messages from patients seeking health advice, which has contributed to the current record levels of physician burnout.

AskDocs to test ChatGPT in healthcare

To obtain a large, diverse sample of healthcare questions and doctors' answers free of personally identifiable information, the team turned to Reddit's AskDocs, a social media forum where millions of patients publicly ask questions about their health, which are then answered by doctors.

AskDocs is a subreddit with about 452,000 members, where members post questions and verified healthcare professionals submit answers. Although anyone can answer a question, moderators verify the credentials of health professionals, and each answer displays the credential level of the respondent. The result is a large and diverse set of questions from patients and answers from licensed medical professionals.

Although some may question whether question-and-answer exchanges on social media are a fair test, the members of the research team noted that the exchanges reflect their clinical experience.

The team sampled 195 exchanges from AskDocs in which a verified physician answered a public question. The team provided each original question to ChatGPT and asked it to write an answer. A panel of three healthcare professionals evaluated each question and its answers, blinded to whether an answer came from a physician or from ChatGPT. They compared the answers on quality of information and on empathy, and indicated which they preferred.

ChatGPT answers better in 79% of cases

The researchers found that ChatGPT's responses contained nuanced, accurate information and often addressed more aspects of patients' questions than the doctors' answers did. In addition, ChatGPT responses were rated significantly higher in quality than those of physicians: the proportion of responses rated good or very good was 3.6 times higher for ChatGPT than for physicians (physicians 22.1% vs. ChatGPT 78.5%). Responses were also more empathetic: the proportion rated empathetic or very empathetic was 9.8 times higher for ChatGPT than for physicians (physicians 4.6% vs. ChatGPT 45.1%).
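As a quick sanity check, the reported multipliers follow directly from the quoted percentages; a minimal sketch (the figures are taken from the study as cited above):

```python
# Reported proportions of "good or very good" quality responses
# and "empathetic or very empathetic" responses, per the study.
physician_quality, chatgpt_quality = 0.221, 0.785
physician_empathy, chatgpt_empathy = 0.046, 0.451

# Prevalence ratios: how many times more often ChatGPT's responses
# received the top ratings compared with physicians' responses.
quality_ratio = chatgpt_quality / physician_quality
empathy_ratio = chatgpt_empathy / physician_empathy

print(f"Quality ratio: {quality_ratio:.1f}x")   # ≈ 3.6
print(f"Empathy ratio: {empathy_ratio:.1f}x")   # ≈ 9.8
```

The ratios 78.5/22.1 ≈ 3.6 and 45.1/4.6 ≈ 9.8 match the multipliers reported in the study.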

Using ChatGPT to answer patients' messages

Although the study pitted ChatGPT against doctors, the ultimate solution, according to the authors, is not one in which the doctor is replaced by an artificial intelligence system. Instead, doctors could use ChatGPT to give better, more empathetic responses to patients. The results suggest that tools such as ChatGPT can efficiently draft high-quality, personalised medical advice for physicians to review.

According to the authors, ChatGPT could assist doctors in managing e-mail and other messages. Beyond improving workflow, investments in AI-assisted messaging could have an impact on patient health and physician performance. The authors stress that the integration of AI assistants into healthcare messaging should take place in the context of a randomised controlled trial, so that the impact of AI assistants on outcomes for both physicians and patients can be assessed.

The researchers also believe that these technologies could serve to teach physicians how to achieve patient-centred communication, eliminate health disparities suffered by certain population groups that often seek healthcare through messaging, and help physicians provide higher quality and more efficient care.
