Image: Justin Taylor, M.D., a physician-scientist at Sylvester Comprehensive Cancer Center, part of the University of Miami Miller School of Medicine.
Credit: Photo by Sylvester Comprehensive Cancer Center
MIAMI, FLORIDA (EMBARGOED UNTIL SEPT 3, 2025, AT 6:00 A.M. EDT) – Patients are increasingly turning to AI for medical information and even advice, but how should they approach using AI-powered services? A new study published Sept. 3 in the peer-reviewed journal Future Science OA provides insight into this question for the fast-moving field of blood cancer, evaluating the quality of ChatGPT’s responses to a set of 10 medical questions.
The study investigated ChatGPT 3.5, a version of the popular chatbot from OpenAI that was freely available when the study was conducted, in July 2024. Four anonymous hematology-oncology physicians evaluated the answers.
ChatGPT 3.5 performed best at general questions but struggled with providing information about newer therapies and approaches, the study showed.
“I would warn patients to have some skepticism, especially about answers dealing with specific types of cancer and treatments, and check with their doctor,” said senior author Justin Taylor, M.D., a physician-scientist at Sylvester Comprehensive Cancer Center, part of the University of Miami Miller School of Medicine.
When Taylor was first training to be a physician, patients were increasingly using Google to search for medical information. Both patients and physicians adapted over time. Physicians learned how to direct patients to credible sources, and patients became more capable of finding accurate information.
He sees a similar process playing out with chatbots powered by large language models (LLMs). LLMs are trained on vast amounts of information. Ask a question, and the model will provide an answer. That answer is often inaccurate or incomplete, but the technology is evolving rapidly – and so is its uptake.
Prior to the new study, information on how LLMs performed on hematology-oncology tasks was lacking. Other researchers had evaluated LLMs for their ability to address general medical information or other areas of cancer.
For instance, ChatGPT 3.5 provided correct answers about cervical cancer prevention and survivorship, but was far less accurate about diagnosis and treatment.
Focusing on hematology-oncology provides an opportunity to test performance in a field with rapidly shifting treatment options, often tailored to unique patient profiles.
The team chose to evaluate ChatGPT 3.5 because “we wanted to pick something that was popular, freely available, and that we thought most people would use,” said Taylor.
The researchers posed 10 questions to the bot, similar to those patients might ask as they progress through treatment. Five were general questions often asked when patients are first diagnosed, such as, “What are the common side effects of chemotherapy and how can they be managed?” The other five were more specific, such as “What is a BCL-2 inhibitor?” BCL-2 inhibitors are a class of drugs under active investigation.
The physician evaluators graded answers on a scale of 1 to 5, from “strongly disagree” to “strongly agree.” A score of 3 was neutral, meaning it was “neither accurate nor inaccurate; it is ambiguous or incomplete.” ChatGPT earned an average score of 3.38 on general cancer questions and 3.06 on questions about newer therapies. None of the evaluators gave the bot a score of 5 on any of the answers.
“Physician oversight remains essential for vetting AI-generated medical information before patient use,” concluded the researchers.
One limitation is that the study did not test other LLMs or newer versions of ChatGPT. Moreover, ChatGPT 3.5 was trained on data with a 2021 cutoff, limiting its knowledge of newer medical developments. But Taylor said the message of caution still holds.
“When new drugs or research findings emerge, oncologists check in with their colleagues, discuss the implications and think about how to adapt them to their patients.” Chatbots can’t provide that kind of nuance and personalized understanding, he said.
However, there’s still a place for the technology, he added. ChatGPT and similar tools may help patients prepare for medical visits and devise questions for their physicians. The tools can also help direct patients to primary sources with more accurate, detailed information.
The study dovetails with other AI initiatives at Sylvester and the Miller School of Medicine. AI is already easing the paperwork burden for physicians, who can access tools that help them summarize patient encounters or fill out forms. The Miller School has launched the Office of AI in Medical Education, an elective course on AI for medical students, and a self-paced course on AI ethics for Spanish-language medical professionals in Latin American countries.
One Sylvester team built an AI-powered system for brain tumor diagnosis during optical imaging. Another team leveraged machine learning to develop a risk-prediction model for guiding therapeutic decisions in multiple myeloma patients.
Meanwhile, Taylor has his eye on the future as LLMs and associated technology become more powerful. He and his colleagues may take a fresh look at the accuracy of newer versions of ChatGPT within the next year or two.
Study collaborators include first author Tiffany Nong at Florida State University College of Medicine, and researchers at the University of Vermont and Florida Cancer Specialists.
Read more about Sylvester research on the InventUM blog and follow @SylvesterCancer on X for the latest news on its research and care.
# # #
Authors: A complete list of authors is available in the paper.
Article Title: “ChatGPT’s Role in the Rapidly Evolving Hematologic Cancer Landscape”
DOI: 10.1080/20565623.2025.2546259
Funding and Disclosures: Disclosures and funding information are included in the article.