Emotional support is an increasingly common reason people turn to generative artificial intelligence chatbots and wellness applications, but these tools currently lack the scientific evidence and the necessary regulations to ensure users’ safety, according to a new health advisory by the American Psychological Association (APA).
The APA Health Advisory on the Use of Generative AI Chatbots and Wellness Applications for Mental Health examined consumer-focused technologies that people rely on for mental health advice and treatment, even when that is not their intended purpose. These tools are easy to access and low cost, making them an appealing option for people who struggle to find or afford care from licensed mental health providers.
“We are in the midst of a major mental health crisis that requires systemic solutions, not just technological stopgaps,” said APA CEO Arthur C. Evans Jr., PhD. “While chatbots seem readily available to offer users support and validation, the ability of these tools to safely guide someone experiencing crisis is limited and unpredictable.”
The advisory emphasizes that while technology has immense potential to help psychologists address the mental health crisis, it must not distract from the urgent need to fix the foundations of America’s mental health care system.
The report offers recommendations for the public, policymakers, tech companies, researchers, clinicians, parents, caregivers and other stakeholders to help them understand their role in a rapidly changing technology landscape so that the burden of navigating untested and unregulated digital spaces does not fall solely on users. Key recommendations include:
- Due to the unpredictable nature of these technologies, do not use chatbots and wellness apps as a substitute for care from a qualified mental health professional.
- Prevent unhealthy relationships or dependencies between users and these technologies.
- Establish specific safeguards for children, teens and other vulnerable populations.
“The development of AI technologies has outpaced our ability to fully understand their effects and capabilities. As a result, we are seeing reports of significant harm done to adolescents and other vulnerable populations,” Evans said. “For some, this can be life-threatening, underscoring the need for psychologists and psychological science to be involved at every stage of the development process.”
Even generative AI tools developed with high-quality psychological science and best practices lack sufficient evidence to show that they are effective or safe to use in mental health care, according to the advisory. Researchers must evaluate generative AI chatbots and wellness apps using randomized clinical trials and longitudinal studies that track outcomes over time. To do so, however, tech companies and policymakers must commit to transparency about how these technologies are created and used.
Deeming the current regulatory frameworks inadequate to address the reality of AI in mental health care, the advisory calls on policymakers, particularly at the federal level, to:
- Modernize regulations
- Create evidence-based standards for each category of digital tool
- Address gaps in Food and Drug Administration oversight
- Promote legislation that prohibits AI chatbots from posing as licensed professionals
- Enact comprehensive data privacy legislation and require “safe-by-default” settings
The advisory notes that many clinicians lack expertise in AI and urges professional groups and health systems to train them on the technology, its biases, data privacy, and the responsible use of AI tools in practice. Clinicians themselves should follow the available ethical guidance and proactively ask patients about their use of AI chatbots and wellness apps.
“Artificial intelligence will play a critical role in the future of health care, but it cannot fulfill that promise unless we also confront the long-standing challenges in mental health,” said Evans. “We must push for systemic reform to make care more affordable, accessible, and timely—and to ensure that human professionals are supported, not replaced, by AI.”