Study Finds ChatGPT Provides Inaccurate Responses to Drug Questions

Failing to check AI-generated advice could endanger patients

ChatGPT’s answers to nearly three-quarters of drug-related questions reviewed by pharmacists were incomplete or wrong, and in some cases inaccurate enough to endanger patients, according to a study presented at the American Society of Health-System Pharmacists (ASHP) Midyear Clinical Meeting, held Dec. 3-7 in Anaheim, California. When asked to cite references, the artificial intelligence program also generated fake citations to support some of its responses.

“Healthcare professionals and patients should be cautious about using ChatGPT as an authoritative source for medication-related information,” said Sara Grossman, PharmD, Associate Professor of Pharmacy Practice at Long Island University and a lead author of the study. “Anyone who uses ChatGPT for medication-related information should verify the information using trusted sources.”

Grossman and her team challenged the free version of OpenAI’s ChatGPT with real questions posed to Long Island University’s College of Pharmacy drug information service over a 16-month period in 2022 and 2023. Pharmacists involved in the study first researched and answered 45 questions, and each answer was reviewed by a second investigator. These responses served as the standard against which ChatGPT’s responses were compared. Researchers excluded six questions for which there was insufficient literature to provide a data-driven response, leaving 39 questions for ChatGPT to answer.


Only 10 of the 39 ChatGPT-provided responses were judged satisfactory according to the criteria established by the investigators. The other 29 responses did not directly address the question (11), were inaccurate (10), and/or were incomplete (12); some responses fell into more than one category, which is why these counts sum to more than 29. For each question, researchers asked ChatGPT to provide references so that the information could be verified. ChatGPT supplied references in only eight responses, and each of those included nonexistent citations.

In one case, researchers asked ChatGPT whether a drug interaction exists between the COVID-19 antiviral Paxlovid and the blood-pressure-lowering medication verapamil, and ChatGPT indicated that no interactions had been reported for this combination of drugs.

“In reality, these medications have the potential to interact with one another, and combined use may result in excessive lowering of blood pressure,” Grossman said. “Without knowledge of this interaction, a patient may suffer from an unwanted and preventable side effect.”

“AI-based tools have the potential to impact both clinical and operational aspects of care,” said Gina Luchen, PharmD, ASHP director of digital health and data. “Pharmacists should remain vigilant stewards of patient safety, by evaluating the appropriateness and validity of specific AI tools for medication-related uses, and continuing to educate patients on trusted sources for medication information.”

