Like humans, AI can jump to conclusions, Mount Sinai study finds
Peer-Reviewed Publication
A study by investigators at the Icahn School of Medicine at Mount Sinai, conducted with colleagues at Rabin Medical Center in Israel and other collaborators, suggests that even the most advanced artificial intelligence (AI) models can make surprisingly simple mistakes when faced with complex medical ethics scenarios. The findings, reported in the July 22 online issue of NPJ Digital Medicine, raise important questions about how and when to rely on large language models (LLMs), such as ChatGPT, in health care settings.