Can AI Replace Your Therapist? New Study Says Not Just Yet

Image Credit: Pixabay

Chatbots are improving at conversation, but are they ready to provide real support in therapy? A new study from USC researchers indicates that large language models (LLMs) like ChatGPT still struggle to capture the subtle complexities of human connection.

That’s the takeaway from a study co-led by USC Ph.D. computer science students Mina Kian and Kaleen Shrestha, under the mentorship of renowned roboticist Professor Maja Matarić at USC’s Interaction Lab.

2025 NAACL Study Finds LLMs Still Lag Behind Humans in Delivering Quality Therapeutic Support

Presented at the 2025 North American Chapter of the Association for Computational Linguistics (NAACL) conference, the research revealed that large language models (LLMs) still fall short of human standards in producing effective therapeutic responses.

The study found that LLMs underperform in linguistic “entrainment”—the tendency of conversation partners to adapt their language and style to one another—which is crucial in building rapport between therapists and clients. Strong entrainment has been linked to better therapy outcomes.

The research also included contributions from seven additional USC computer science scholars and Katrin Fischer, a Ph.D. student at the Annenberg School for Communication and Journalism.

Large language models (LLMs) are being explored for potential use in mental health care, though they’re not yet widely adopted in clinical cognitive behavioral therapy (CBT). Some studies have also raised serious concerns, including evidence of racial and gender bias.

“There’s a troubling narrative emerging that LLMs could replace therapists,” said Mina Kian, a USC Ph.D. student. “Therapists undergo extensive education and clinical training, so the idea that a language model could simply step in is deeply problematic.”

Kian’s research focuses on socially assistive robots (SARs) as tools to support—not replace—therapists in mental health care. In a recent study titled “Using Linguistic Entrainment to Evaluate Large Language Models for Use in Cognitive Behavioral Therapy,” her team examined how well ChatGPT (GPT-3.5-turbo) handled CBT-style exercises.

Putting Chatbot Therapy to the Test: Study Tracks How Well AI Mirrors Users’ Language

Twenty-six university students participated in the study, using a chat-based platform to complete either cognitive restructuring or coping strategy exercises, designed to help manage stress. The researchers then analyzed transcripts for linguistic “entrainment”—how well the chatbot adapted its responses to the user’s language and emotional tone. Stronger entrainment typically correlates with greater engagement and openness in therapy sessions.
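The study’s actual entrainment measures aren’t detailed here, but the intuition can be sketched with a simple lexical-overlap proxy: a response that reuses more of the user’s own wording scores higher. The function names and the Jaccard-overlap metric below are illustrative assumptions, not the researchers’ method.

```python
# Illustrative sketch of a lexical entrainment proxy.
# NOTE: a hypothetical token-overlap measure for intuition only;
# this is NOT the metric used in the USC study.

def tokenize(text):
    """Lowercase a turn and split it into punctuation-stripped word tokens."""
    return [w.strip(".,!?;:\"'") for w in text.lower().split()]

def lexical_entrainment(user_turn, response_turn):
    """Jaccard overlap between the vocabularies of a user turn and a reply.

    Higher values mean the responder reuses more of the user's wording,
    one crude signal of linguistic entrainment.
    """
    user_vocab = set(tokenize(user_turn))
    resp_vocab = set(tokenize(response_turn))
    if not user_vocab or not resp_vocab:
        return 0.0
    return len(user_vocab & resp_vocab) / len(user_vocab | resp_vocab)

# A reply that mirrors the user's wording scores higher than a generic one.
user = "I feel overwhelmed by my exams and I can't focus"
mirroring = "Feeling overwhelmed by exams is hard; let's work on your focus"
generic = "Many students experience stress during the semester"

print(lexical_entrainment(user, mirroring) > lexical_entrainment(user, generic))
```

Real entrainment analyses in computational linguistics go well beyond word reuse, also tracking convergence in syntax, function-word use, and emotional tone across a whole session.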

However, when compared to responses from professional therapists and peer supporters on Reddit, the LLM demonstrated consistently weaker entrainment.

“There’s a growing effort in the natural language processing field to rigorously evaluate LLMs in sensitive areas,” said co-author Kaleen Shrestha. “As these technologies gain influence, we need targeted studies like this to better understand their limitations and risks.”

While LLMs might help users navigate guided therapy exercises at home, Kian and her team stress they are no substitute for trained professionals.

“I’d like to see more research assessing LLMs in a broader range of therapy styles beyond CBT—like motivational interviewing or dialectical behavior therapy (DBT),” Kian said. She also called for evaluations based on a wider set of therapeutic outcomes.

Kian plans to continue her work exploring SAR-supported CBT exercises, particularly for people with generalized anxiety disorder. “My goal is to help expand the toolkit therapists can use for at-home care,” she added.


Read the original article on: MedicalXpress

Read more: AI Robots Replace Weed Killers and Farm Workers