
Study reveals ChatGPT’s limitations in providing accurate medical information

A recent study conducted by researchers at Long Island University examined the accuracy of the free version of ChatGPT in responding to medication-related queries.

The study involved comparing the AI’s responses to 39 real drug-related questions from the university’s College of Pharmacy with those provided by trained pharmacists.

The findings, presented at a meeting of the American Society of Health-System Pharmacists, revealed that ChatGPT gave satisfactory answers to only about 10 of the queries; the remaining 29 responses were incomplete or inaccurate.

Sara Grossman, an associate professor at Long Island University and one of the study’s authors, expressed concern over the potential risks associated with relying on ChatGPT for health and medication inquiries.

One example highlighted by the study was a question about potential interactions between the Covid-19 antiviral medication Paxlovid and the blood-pressure-lowering medication verapamil.

ChatGPT incorrectly suggested that combining the two drugs would cause no adverse effects, an answer that could endanger patients by failing to warn of the significant drop in blood pressure the combination can cause.

Further investigation revealed that ChatGPT provided fictitious scientific references to support its responses.

When asked for citations, the AI fabricated references that appeared to come from legitimate scientific journals, raising concerns about the credibility of the information it provided.

Grossman highlighted a scenario in which ChatGPT, asked for a dosage conversion for the muscle-spasm medication baclofen, offered an incorrect and unfounded conversion ratio.

Following such guidance could result in significant dosage errors, potentially causing withdrawal symptoms in patients.

The study echoed earlier reports of ChatGPT generating fabricated references, compounding doubts about the trustworthiness of the information it offers.

Grossman also noted the confidence and sophistication with which ChatGPT delivers its responses, which can mislead users into believing its answers are accurate.

In response to these concerns, an OpenAI spokesperson advised users against relying on ChatGPT as a substitute for professional medical advice or treatment, emphasizing that the model isn’t tailored for medical information or diagnosis of serious medical conditions.

Grossman cautioned against depending solely on ChatGPT or similar platforms for medical information, recommending reputable sources such as the National Institutes of Health's MedlinePlus site instead.

Despite the convenience of online sources, Grossman stressed the importance of healthcare professionals’ expertise in addressing individual patient needs.

Source: CNN
