AI and psychosis: What to know, what to do
As concern grows about online chatbots and mental health, an expert cautions about potential risk to already-vulnerable people, especially teens and young adults
Michigan Medicine - University of Michigan
Psychiatrist Stephan Taylor, M.D., has treated patients with psychosis for decades.
He’s done research on why people suffer delusions, paranoia, hallucinations and detachment from reality, which can drive them to suicide or dangerous behavior.
But even he is surprised by the rapid rise in reports of people spiraling into psychosis-like symptoms or dying by suicide after using sophisticated artificial intelligence chatbots.
The ability to “talk” with an AI tool that reinforces and rewards what a person is thinking, doesn’t question their assumptions or conclusions, and has no human sense of morals, ethics, balance or humanity, can clearly create hazardous situations, he says.
And the better AI chatbots become at simulating real conversations and natural human language, the more powerful their influence will become.
Taylor is especially worried about the potential effects on someone who is already prone to developing psychosis because of their age and underlying mental health or social situation.
He points to new data released by OpenAI, which runs the ChatGPT chatbot.
They report that a small percentage of users and messages each week may show signs of mental health emergencies related to psychosis or mania.
The company says new versions of its chatbot are designed to reduce these possibilities, which Taylor welcomes.
But as chair of the Department of Psychiatry at Michigan Medicine, the University of Michigan’s academic medical center, he worries that this is not enough.
Data from RAND show that as many as 13% of Americans between the ages of 12 and 21 are using generative AI for mental health advice, and that the percentage is even higher – 22% – among those ages 18 to 21, the peak years for onset of psychosis.
Chatbot as psychosis trigger?
Taylor knows from professional experience that psychosis can often start after a triggering event, in a person who has an underlying vulnerability.
For instance, a young person tries a strong drug for the first time, or experiences a harsh personal change like a romantic breakup or a sudden loss of a loved one, a pet or a job.
That trigger, combined with genetic traits and early-adulthood brain development processes, can be enough to lower the threshold for someone to start believing, seeing, hearing or thinking things that aren’t real.
Interacting with an AI agent that reinforces negative thoughts could be a new kind of trigger.
While he hasn’t yet treated a patient whose psychosis trigger involved an AI chatbot, he has heard of cases like this. And he has started asking his own patients, who have already been diagnosed and referred for psychosis care, about their chatbot use.
“Chatbots have been around for a long time, but have become much more effective and easy to access in the last few years,” he said.
“And while we’ve heard a lot about the potential opportunity for specially designed chatbots to be used as an addition to regular sessions with a human therapist, there is a real potential for general chatbots to be used by people who are lonely or isolated, and to reinforce negative or harmful thoughts in someone who is having them already. A person who is already not in a good place could get in a worse place.”
Taylor says one of the most troubling aspects of AI chatbots is that they are essentially sycophants. In other words, they’re programmed to be “people pleasers” by agreeing with and encouraging a person, even if they’re expressing untrue, unkind or even dangerous ideas.
In psychiatry, there’s a term for this kind of relationship between two people: folie à deux, a French phrase for a delusion or set of bizarre beliefs shared by two people.
In such situations, the problem starts with a person who develops delusions but then convinces a person close to them – such as a romantic partner – to believe them too.
Often, such situations only end when the second person can be removed from the influence and presence of the first.
But when only one party to the delusions is human, and the other is an artificial intelligence agent, that’s even trickier, says Taylor.
If the person using AI chatbots isn’t telling anyone else that they’re doing so, and isn’t discussing their paranoid ideas or hallucinations with another human, they could get into deeper trouble than they would have faced on their own, without AI.
“I’m especially concerned about lonely young people who are isolated and thinking that their only friend is this chatbot, when they don’t have a good understanding of how it’s behaving or why its programming might lead it to react in certain ways,” said Taylor.
Practical tips when using chatbots
If someone chooses to use chatbots or other AI tools to explore their mental health, Taylor says it’s important to also talk with a trusted human about what they’re feeling.
Even if they don’t have a therapist, a friend, parent or other relative, teacher, coach or faith leader can be a good place to start.
In a mental health crisis, the person in crisis or a person concerned about them can call or text 988 from any phone to reach the national Suicide and Crisis Lifeline.
For those who are concerned about another person’s behavior, and who sense that the person may not be experiencing the same reality as others, Taylor says it’s critical to help that person get professional help.
Signs to be concerned about include pulling away from social interactions and falling behind on obligations like school, work or home chores.
This story and video give more information about psychosis for parents and others.
Research has shown that the sooner someone gets into specialized psychosis care after their symptoms begin, the better their chances will be of responding to treatment and doing well over the long term.
He and his colleagues run the Program for Risk Evaluation and Prevention Early Psychosis Clinic, called PREP for short.
It’s one of a network of programs for people in the early stages of psychosis nationwide.
For health professionals and those training in health fields, the U-M psychosis team has developed a free online course on psychosis available on demand any time.
Taylor says it’s especially important to avoid chatbot use for people who have a clear history of suicidal thinking or attempts, or who are already isolating themselves from others by being immersed in online environments and avoiding real world interactions.
Chatrooms and social media groups filled with other humans may offer some tempering effects as people push back on far-fetched claims.
But AI chatbots are programmed not to do this, he notes.
“People get obsessed with conspiracies all the time, and diving into a world of secret knowledge gives them a sense of special privilege or boosts their self-esteem,” he said.
“But when you put on top of that an AI agent that is trained to sycophancy, it could really spell trouble.”