Article Highlight | 6-May-2026

Kidney stones and AI: can a chatbot replace a doctor?

Wroclaw Medical University


More and more patients consult a chatbot about their symptoms before seeing a doctor. Artificial intelligence responds quickly, clearly, and without a wait. But are its answers safe for patients?

This question was posed by researchers and students at Wroclaw Medical University, who analyzed how language models handle the most common questions about urinary tract stones.

The study, published in the journal Artificial Intelligence Review, shows that AI-generated answers are usually correct and substantively valuable. At the same time, it reveals their greatest weakness: not obvious errors, but the lack of full clinical context, which is crucial in medicine.

The greatest risk is “almost correct” answers

Language models very rarely provided completely incorrect answers. Much more often, the issue concerned partially accurate information—answers that sound credible but omit important diagnostic or therapeutic details. These are precisely the ones that may lead patients to make incorrect decisions.

As emphasized by the study’s co-author, Wojciech Tomczak, a resident physician and PhD candidate at the University Center of Urology:

“From a clinical perspective, partially accurate answers matter more than obvious errors, because they are the most misleading for patients: they sound credible but do not reflect the full diagnostic or therapeutic context.”

Kidney stones

Urinary tract stone disease is a condition in which treatment depends on many factors—from observation and conservative management to surgical intervention. This is why simplified answers can be particularly risky.

Incorrect or only partially accurate answers are especially dangerous in this category, as they may encourage patients to self-medicate or to ignore alarm symptoms.

In practice, this means that a patient relying on AI responses may delay seeking medical care or take inappropriate actions on their own. In extreme cases, the consequences may be very serious, including loss of kidney function, severe infections, or even death.

Why does AI seem credible?

From the user’s perspective, chatbots have a significant advantage: they are understandable, structured, and sound convincing. The problem is that patients often do not recognize the difference between a complete and medically accurate answer and one that is simplified or incomplete.

From the patient’s perspective, this difference is unlikely to be noticeable: readers most often judge an answer by its clarity and apparent credibility. This makes even incomplete or simplified information appear sufficient.

AI as support

The authors of the study emphasize that language models have real potential in health education. They help to understand a problem, organize knowledge, and prepare for a conversation with a doctor. However, this requires conscious use of these tools.

The final assessment of the situation should always be the doctor’s.

Medical consultation therefore remains essential: it is where previously obtained information can be verified and placed in the context of a specific case.

Technology is only a partial solution

The study by the team from Wroclaw Medical University shows that the development of AI in medicine does not eliminate the need for health education. On the contrary, patients need reliable sources of knowledge, the ability to critically assess information, and easy access to specialists.

A chatbot can be the first step in understanding symptoms. It should not be the last.


The material is based on the article: 

The quality of AI-generated answers for patient inquiries on urolithiasis: a comparative study of ChatGPT and Deepseek https://doi.org/10.1007/s10462-025-11478-2 

Authors: Wojciech Tomczak, Jan Łaszkiewicz, Łukasz Nowak, Łukasz Biesiadecki, Klaudia Molik, Katarzyna Grunwald, Joanna Chorbińska, Bartosz Małkiewicz, Tomasz Szydełko, Wojciech Krajewski 
