Comprehensive review reveals how cities can learn from each other to build smarter, more sustainable urban systems
Peer-Reviewed Publication
Cross-city transfer learning (CCTL) has emerged as a crucial approach for managing the growing complexity of urban data and addressing the challenges posed by rapid urbanization. This paper provides a comprehensive review of recent advances in CCTL, with a focus on its applications in urban computing tasks, including prediction, detection, and deployment. We examine the role of CCTL in facilitating policy adaptation and influencing behavioral change. Specifically, we provide a systematic overview of widely used datasets, including traffic sensor data, GPS trajectory data, online social network data, and map data. Furthermore, we conduct an in-depth analysis of methods and evaluation metrics employed across different CCTL-based urban computing tasks. Finally, we emphasize the potential of cross-city policy transfer in promoting low-carbon and sustainable urban development. This review aims to serve as a reference for future urban development research and promote the practical implementation of CCTL.
Socially compliant automated vehicles (SCAVs) mark a new frontier in human-centric driving automation. By integrating sensing, socially aware decision-making, safety constraints, spatial-temporal memory, and bidirectional behavioral adaptation, the proposed framework aims to enable AVs to interpret, learn from, and respond to human drivers. By embedding social intelligence into automated driving systems, this research paves the way for vehicles that not only drive safely but also drive socially.
Human languages are complex, rich and varied. From an information-theoretic perspective, however, they could convey the same information in a much more compact form. So why don't we speak 'digitally', encoding information in strings of ones and zeros like a computer? Michael Hahn, a linguist from Saarbrücken, has explored this question together with a research colleague from the US. They have developed a model that explains why we speak the way we do – and why we don't communicate by beeping like R2-D2 in Star Wars. Their findings have recently been published in Nature Human Behaviour.
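The information-theoretic claim above can be made concrete with a rough back-of-the-envelope sketch (not taken from the paper): written English is highly redundant, so an optimal binary code could carry the same message in far fewer bits than a plain character encoding. The figures below assume a standard 8-bit character encoding and Shannon's classic estimate of roughly 1.3 bits of information per character of English text; both the message and the per-character estimate are illustrative assumptions.

```python
import math

# Illustrative message; any English text would make the same point.
message = "hello world"
n = len(message)

# Plain 8-bit (ASCII-style) encoding: 8 bits per character.
ascii_bits = n * 8

# If every symbol (26 letters + space) were equally likely and
# independent, each would carry log2(27) ≈ 4.75 bits.
uniform_bits = n * math.log2(27)

# Shannon's classic estimate puts the actual entropy of English
# at roughly 1.0-1.3 bits per character, because real text is
# highly predictable (redundant).
shannon_bits = n * 1.3

print(f"8-bit encoding:        {ascii_bits} bits")
print(f"Uniform-symbol bound:  {uniform_bits:.1f} bits")
print(f"~Shannon entropy:      {shannon_bits:.1f} bits")
```

The gap between the first and last numbers is the redundancy the researchers are pointing at: an ideal 'digital' speaker could, in principle, compress speech several-fold, which is exactly why the question of why humans don't is interesting.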
Research from the University of Bath exposes the overlooked burdens of remote working in the Global South, revealing how it transfers economic, physiological and emotional strain to Indian IT workers supporting global firms.
New research led by the University of Plymouth, with partners at universities and healthcare facilities in the UK and USA, has found that targeted ultrasound can be used to change the function of a deep region of the human brain. Specifically, it can target the nucleus accumbens, a tiny region of the brain that is activated when we experience something enjoyable and that helps us learn behaviours leading to rewards. With surgical treatment currently the only option for reaching this area of the brain, those behind the study believe it marks a turning point for neurotechnology, showing that a non-invasive ultrasound approach can influence behaviour and may one day help restore mental balance.
By comparing the brain activity of study participants with model uncertainty, researchers from Saarland University and the Max Planck Institute for Software Systems have shown for the first time that the reactions of humans and large language models (LLMs) to complex or misleading program code align significantly.