News Release

AI as your therapist? Not quite yet, says new USC study

USC study finds large language models fall short of humans in building therapeutic rapport – a critical factor in mental health care.

Reports and Proceedings

University of Southern California

Chatbots are getting better at holding conversations — but can they offer meaningful support in a therapy setting? A new study by USC researchers suggests that large language models (LLMs) such as ChatGPT still fall short when it comes to the nuances of human connection.

That’s the conclusion of research co-led by USC PhD computer science students Mina Kian and Kaleen Shrestha, under the guidance of pioneering roboticist Professor Maja Matarić at USC’s Interaction Lab.

Presented at the 2025 North American Chapter of the Association for Computational Linguistics (NAACL) conference, their study found that LLMs continue to lag behind humans in generating high-quality therapeutic responses.

LLMs, the study found, perform worse at linguistic “entrainment” — the way interacting speakers adapt and align their language to one another — than expert and even non-expert humans. Entrainment is an important tool therapists use to build rapport with their clients, and stronger rapport has in turn been linked to better therapeutic outcomes.
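To make the idea concrete: one very simple proxy for lexical entrainment is how much each speaker's word choices echo the previous turn. The sketch below is an illustration only, not the metric used in the study — it just measures word-set overlap (Jaccard similarity) between adjacent conversational turns.

```python
# Toy illustration of lexical entrainment: how much each turn echoes the
# previous speaker's word choices. This is NOT the study's metric -- just
# a simple proxy using Jaccard overlap of word sets between adjacent turns.

def turn_overlap(prev_turn: str, next_turn: str) -> float:
    """Jaccard similarity between the word sets of two turns."""
    a = set(prev_turn.lower().split())
    b = set(next_turn.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def mean_entrainment(turns: list[str]) -> float:
    """Average adjacent-turn overlap across a whole conversation."""
    scores = [turn_overlap(p, n) for p, n in zip(turns, turns[1:])]
    return sum(scores) / len(scores) if scores else 0.0

# A reflective reply (second turn) reuses the client's own words,
# so it scores higher than an unrelated reply would.
convo = [
    "I felt really anxious before the exam",
    "It sounds like the exam made you feel anxious",
    "Yes, anxious and a bit hopeless",
]
print(round(mean_entrainment(convo), 3))
```

Real entrainment measures are far richer — they track syntax, style, and timing as well as vocabulary — but even this toy version captures the intuition that a responsive partner mirrors the other speaker's language.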

Seven other USC computer science researchers also contributed to the study, along with Katrin Fischer, a PhD student from the Annenberg School for Communication and Journalism.

Support, Not Substitution
LLMs are increasingly being proposed for use in mental health care, though they’re not currently widely used in clinical cognitive behavioral therapy (CBT). And some studies have flagged significant risks, including racial and gender bias.

“We’re seeing a concerning narrative that LLMs could replace therapists,” says Kian. “Therapists go through years of schooling and clinical training to prepare for their client-facing role, and I find it highly concerning to suggest that LLM technology could just replace them.”

Kian’s own research focuses on socially assistive robots (SARs) in mental health care—not to replace therapists, but to support and extend their reach.

The team’s study, “Using Linguistic Entrainment to Evaluate Large Language Models for Use in Cognitive Behavioral Therapy,” explored how well a leading LLM (OpenAI’s GPT-3.5 Turbo, the model behind ChatGPT) performed in CBT-style homework exercises.

Participants—26 university students—logged into a chat-based platform powered by the LLM. They chose between cognitive restructuring and coping strategy exercises, which guided them through prompts to help process and manage stress.

The researchers analyzed transcripts of these interactions and found that stronger linguistic entrainment was associated with greater self-disclosure and engagement—markers of more effective therapeutic support. But in comparisons with human therapists and Reddit-based peer supporters, the LLM consistently showed lower levels of entrainment.

“There is a growing research effort in the natural language processing (NLP) community of careful validation of large language models in diverse sensitive domains,” says Shrestha. “We have gone past just pursuing human-like language generation as these technologies become more influential in everyone’s lives. Specific population case studies like this should be encouraged and shared as we navigate the complexities of large pretrained LLMs.”

Kian and her colleagues say that while LLMs could help guide at-home exercises, they’re no replacement for human clinicians.

“I would like to see more work assessing the performance of LLMs in therapeutic applications, looking into therapy styles beyond CBT, perhaps considering their use in motivational interviewing or DBT (Dialectical Behavior Therapy),” Kian says. “I would also like to see them evaluated with respect to other important therapeutic measures.”

Kian plans to continue her research on SAR-guided CBT homework exercises, evaluating if SARs can support individuals with generalized anxiety disorder. “I hope that this research can eventually be used to expand the at-home care technology available to therapists,” she says.

