Synthetic oxytocin may prevent anxiety caused by social stress, according to a study in an animal model
Peer-Reviewed Publication
Cash transfer programs, which provide money directly to recipients, are growing in the United States, but face significant scrutiny, with questions over their value. In addition, some contend that these payments can lead to harm—recipients, they claim, will use the cash to immediately buy alcohol or drugs, leading to injury or death. However, a new 11-year study of a long-standing cash-transfer program in Alaska finds no evidence that direct cash payments increase the risk of traumatic injury or death.
A new study by the Cluster of Excellence "The Politics of Inequality" (University of Konstanz) and the University of Lucerne shows that labour migrants who live where they work enjoy greater acceptance from locals than cross-border commuters do, even though both groups compete with the local workforce for jobs to a comparable degree. The decisive reason for the difference lies less in economic factors than in perceptions of participation and fairness, which are also shaped by misinformation.
The ways people interact with and view nature say a great deal about how the Earth is treated, and as environmental concerns grow more severe, understanding what shapes people's view of nature becomes a pertinent question. Understanding how and why people might be motivated to protect nature is no small feat. Researchers conducted a study of 745 Japanese participants that used three types of nature's value (intrinsic, relational, and instrumental) to examine what goes into the construction of a person's relationship with nature.
Medical artificial intelligence (AI) is often described as a way to make patient care safer by helping clinicians manage information. A new study by the Icahn School of Medicine at Mount Sinai and collaborators confronts a critical vulnerability: when a medical lie enters the system, can AI pass it on as if it were true? Analyzing more than a million prompts across nine leading language models, the researchers found that these systems can repeat false medical claims when the claims appear in realistic hospital notes or social-media health discussions. The findings, published in the February 9 online issue of The Lancet Digital Health (DOI: 10.1016/j.landig.2025.100949), suggest that current safeguards do not reliably distinguish fact from fabrication once a claim is wrapped in familiar clinical or social-media language.