Reminding people they’re talking to chatbots may be ineffective or even harmful, researchers say
Peer-Reviewed Publication
Last Updated: 2-Apr-2026 11:15 ET (15:15 GMT/UTC)
Concerns that chatbot use can cause mental and physical harm have prompted policies that require AI chatbots to deliver regular or constant reminders that they are not human. In an opinion paper publishing January 28 in the Cell Press journal Trends in Cognitive Sciences, researchers argue that these policies may be ineffective or even harmful because they could exacerbate mental distress in already isolated individuals. The researchers say that reminding chatbot users of their companions’ non-human nature may be useful in some contexts, but these reminders must be carefully crafted and timed to avoid unintended negative consequences.
Most chronic diseases don’t begin with obvious symptoms or dramatic warning signs. Instead, they develop quietly over many years, as small changes accumulate in the body. A new perspective from researchers at the Buck Institute for Research on Aging notes that modern medicine often waits until disease is well underway, arguing that new technologies could help detect risk much earlier, when prevention may be most effective.
The human genome is a long sequence of DNA scattered with innumerable genetic variants that distinguish us. Extracting information from large biobank datasets about complex traits, influenced by thousands or millions of variants, remains a challenge. Using human height as a model, researchers at the Institute of Science and Technology Austria (ISTA) have now tackled this problem and developed an enhanced algorithm, published in Cell Genomics, with potential applications in personalized medicine—and even at crime scenes.
A team led by Penn State researchers reported a novel material made of cheap, commercially available plastics that can store four times the energy of a typical capacitor at temperatures up to 482 °F (250 °C).