Can ChatGPT Be Trusted to Provide Medication Information to Patients?

Total Pharmacy Journal, June 2024
Volume 02
Issue 03

Although a recent study revealed shortcomings in ChatGPT’s guidance on medication use, the findings don’t rule out its potential as a valuable health care tool.

ChatGPT can be helpful for answering commonly asked questions about a new medication but may provide patients with inaccurate information, according to research published in the Journal of the American Pharmacists Association.1 Based on these results, investigators advise against using the technology to replace consultation with a health care professional.

Alongside the emergence of artificial intelligence (AI), people are becoming increasingly reliant on chatbots such as ChatGPT for information, and the use of these bots has trickled into health care delivery and education.

From an administrative perspective, AI is currently used in health care to manage appointments, provide prescription refill reminders, triage patients, and assist in medical research, freeing health care providers to spend more time on clinical tasks.1 However, its suitability for providing medical guidance remains a point of contention, leading many to question whether chatbots can ensure the accuracy of the health information patients seek.

To address this question, investigators assessed the correctness and completeness of answers provided by ChatGPT in response to questions that patients should ask before taking a new medication. To the investigators’ knowledge, the current study was the first to evaluate the topic.

Alongside the emergence of artificial intelligence (AI), people are becoming increasingly reliant on chatbots such as ChatGPT for information. | image credit: Arnav Pratap Singh / stock.adobe.com

In November 2023, the investigators evaluated version 3.5 of ChatGPT, which is free for public use and therefore likely the version patients use for medication information. They tested its ability to answer 12 questions obtained from the Agency for Healthcare Research and Quality (AHRQ) about the top 20 drugs identified through the Medical Expenditure Panel Survey database. The AHRQ questions included “What am I taking [medication name] for?” “When should the [medication name] start working?” and “What happens if I miss a dose of [medication name]?”
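To illustrate the scale of this design, here is a minimal Python sketch of how 12 question templates crossed with 20 drug names produce the 240 prompts the study describes. This is a hypothetical reconstruction, not the investigators’ actual code, and the drug names shown are arbitrary examples; the article does not list the 20 drugs.

```python
# Hypothetical reconstruction of the study's prompt generation:
# 12 AHRQ-style question templates crossed with 20 drug names.
AHRQ_QUESTIONS = [
    "What am I taking {drug} for?",
    "When should the {drug} start working?",
    "What happens if I miss a dose of {drug}?",
    # ...the remaining 9 AHRQ questions would follow the same pattern
]

# Example drug names only; the study drew its 20 drugs from the
# Medical Expenditure Panel Survey database.
DRUGS = ["atorvastatin", "levothyroxine", "lisinopril"]

# One prompt per (drug, question) pair; with the full 12 questions
# and 20 drugs, this yields the 240 responses the study evaluated.
prompts = [q.format(drug=drug) for drug in DRUGS for q in AHRQ_QUESTIONS]

for prompt in prompts[:3]:
    print(prompt)
```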

Responses were independently evaluated by 2 reviewers on a 6-point correctness scale, with a score of 6 indicating completely correct and 1 indicating completely incorrect, and a 3-point completeness scale, with a score of 3 indicating comprehensive and 1 indicating incomplete. The accuracy of responses was evaluated against clinical expertise and the Micromedex database, a trusted source of clinician-verified drug information.
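As a rough sketch of how such dual-reviewer scores could be tallied, the snippet below assumes (one plausible reading of the method, not confirmed by the article) that a response counts as “completely correct” or “comprehensive” only when both reviewers award the top score. The score values shown are illustrative, not the study’s data.

```python
# Each response carries a 1-6 correctness score and a 1-3 completeness
# score from each of the 2 independent reviewers (illustrative values).
responses = [
    {"correctness": (6, 6), "completeness": (3, 3)},
    {"correctness": (6, 6), "completeness": (3, 2)},
    {"correctness": (5, 4), "completeness": (2, 2)},
]

# Count a response as completely correct / comprehensive only if both
# reviewers awarded the top score on that scale.
completely_correct = sum(all(s == 6 for s in r["correctness"]) for r in responses)
comprehensive = sum(all(s == 3 for s in r["completeness"]) for r in responses)

print(f"completely correct: {completely_correct / len(responses):.1%}")
print(f"comprehensive: {comprehensive / len(responses):.1%}")
```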

To assess reproducibility, low-scoring items were re-queried after 2 weeks using the same phrasing and methodology.

Overall, ChatGPT was successful in providing correct answers to questions about taking a new medication. Of 240 individual responses generated by ChatGPT, 222 (92.5%) were scored as completely correct, and none were scored as completely incorrect.

However, a key limitation emerged: although informative, some responses lacked important counseling advice. Compared with the 92.5% of responses scored as completely correct, only 80.8% were scored as comprehensive. Notably, points were often deducted on questions about drug-drug interactions; rather than listing medications at risk of interacting, the chatbot responded with a generic recommendation to “Inform your doctor about all medications you’re taking to avoid potential interactions.”

Measures of reproducibility further raised concerns about ChatGPT’s reliability as a source of medical information. Of the 18 items reassessed, 6 were scored the same, 5 decreased in quality, and 7 improved, highlighting inconsistencies in accuracy.
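The reproducibility check amounts to comparing each re-queried item’s score against its initial score and tallying the movement. A minimal sketch, with illustrative scores rather than the study’s unpublished raw data:

```python
# Paired scores for re-queried items: initial query vs the same query
# repeated 2 weeks later (illustrative values, not the study data).
initial = [4, 5, 3, 5, 4, 2]
requery = [4, 6, 2, 5, 5, 2]

unchanged = sum(a == b for a, b in zip(initial, requery))
improved = sum(b > a for a, b in zip(initial, requery))
decreased = sum(b < a for a, b in zip(initial, requery))

print(f"unchanged: {unchanged}, improved: {improved}, decreased: {decreased}")
```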

Although the study revealed shortcomings associated with ChatGPT and its guidance on medication use, it doesn’t rule out its potential as a valuable health care tool. “Educating patients on the importance of validating the information provided with their pharmacist is a crucial point to emphasize until chatbots can be further refined and relied upon,” concluded lead author Bernadette R. Cornelison, PharmD, and colleagues.1

In recent years, AI development and capabilities have accelerated at a phenomenal pace. This surge has given rise to chatbots such as ChatGPT, which leverage natural language processing, machine learning, and human feedback to create “personalized, contextual, and accurate” interactions. Within its first 2 months, ChatGPT amassed more than 100 million monthly active users.1

The popularity of such technology has been labeled both astounding and concerning. On the positive side, AI tools such as ChatGPT have the potential to democratize knowledge and address health disparities. However, past research has shown that people may perceive AI-generated text as equally credible as, or even more credible than, human-written content.2 Without proper safeguards, this effect could exacerbate health disparities by amplifying misinformation (such as that which circulated during the COVID-19 pandemic and contributed to widespread medical distrust), especially among vulnerable populations.2,3

This perceived credibility, combined with AI’s less-than-perfect ability to answer medical questions accurately and completely, underscores the need for robust regulatory frameworks.

Investigators have offered several potential solutions. First, fact-checking mechanisms within chatbots and clear disclaimers about their limitations can guide users toward consulting qualified health care professionals. In addition, programs aimed at increasing public education about health literacy and the responsible use of AI for health information could improve patient safety.2

Ultimately, AI has the potential to revolutionize health care delivery, but its development must prioritize patient safety and empower informed decision-making. Ensuring transparency and addressing potential biases in AI are crucial to harnessing the technology’s full potential.


References
  1. Cornelison BR, Erstad BL, Edwards C. Accuracy of a chatbot in answering questions that patients should ask before taking a new medication. J Am Pharm Assoc. Accepted manuscript. Published online April 24, 2024. doi:10.1016/j.japh.2024.102110
  2. Ferreira Caceres MM, Sosa JP, Lawrence JA, et al. The impact of misinformation on the COVID-19 pandemic. AIMS Public Health. 2022;9(2):262-277. doi:10.3934/publichealth.2022018
  3. Meyrowitsch DW, Jensen AK, Sørensen JB, Varga TV. AI chatbots and (mis)information in public health: impact on vulnerable communities. Front Public Health. 2023;11:1226776. doi:10.3389/fpubh.2023.1226776