Could traffic factors enhance autonomous vehicle safety?
Peer-Reviewed Publication
Researchers at Imperial College London developed a new method to combine infrastructure-based traffic data with vehicle-based data. They demonstrate that adding traffic covariates increases model accuracy, and that using the No-U-Turn Sampler (NUTS) reduces computational running time.
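As a rough illustration of how such a model might be set up, the sketch below fits a Bayesian event-count model with traffic covariates using PyMC, whose default sampler for continuous parameters is NUTS. The covariate names, synthetic data, and model form are illustrative assumptions, not the authors' actual specification.

```python
import numpy as np
import pymc as pm

# Hypothetical data: per-segment vehicle-based safety events plus
# infrastructure-based traffic covariates (names are illustrative).
rng = np.random.default_rng(0)
n = 200
hard_braking = rng.poisson(2, n)     # vehicle-based safety events
flow = rng.normal(0, 1, n)           # traffic flow (standardized)
speed_var = rng.normal(0, 1, n)      # speed variance (standardized)

with pm.Model() as model:
    beta0 = pm.Normal("beta0", 0, 2)
    beta_flow = pm.Normal("beta_flow", 0, 1)
    beta_speedvar = pm.Normal("beta_speedvar", 0, 1)
    # Poisson rate for safety-critical events, with traffic covariates
    lam = pm.math.exp(beta0 + beta_flow * flow + beta_speedvar * speed_var)
    pm.Poisson("events", mu=lam, observed=hard_braking)
    # pm.sample uses NUTS by default for continuous parameters
    idata = pm.sample(1000, tune=1000, chains=2)
```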
Image reconstruction, the process of recovering clear images from incomplete or noisy data, has been advancing rapidly through deep learning. Yet most existing approaches rely on costly supervised training and lack theoretical transparency. A new survey maps the rise of unsupervised deep learning for image reconstruction, from traditional denoising-based priors to modern diffusion models. These methods learn structured visual information directly from unlabeled data and have achieved impressive performance across various fields, including biomedical imaging and remote sensing. The study shows how unsupervised-learning-based image reconstruction unites neural network efficiency with solid mathematical foundations to achieve both interpretability and flexibility, offering a blueprint for next-generation imaging systems.
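To make the denoising-based-prior idea concrete, here is a minimal plug-and-play reconstruction loop: a generic sketch in which any denoiser (here, plain Gaussian smoothing) stands in for a learned unsupervised prior. The operators, step size, and toy data are illustrative assumptions, not a specific method from the survey.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def plug_and_play(y, A, At, denoise, n_iters=50, step=1.0):
    """Alternate a gradient step on the data term ||A x - y||^2 with a
    denoiser acting as the image prior (plug-and-play style sketch)."""
    x = At(y)                          # crude initialization
    for _ in range(n_iters):
        x = x - step * At(A(x) - y)    # data-consistency step
        x = denoise(x)                 # prior step via the denoiser
    return x

# Toy usage: pure denoising (A = identity), Gaussian smoothing as prior
noisy = np.random.rand(64, 64) + 0.1 * np.random.randn(64, 64)
recon = plug_and_play(noisy, lambda x: x, lambda x: x,
                      lambda x: gaussian_filter(x, sigma=1.0),
                      n_iters=5)
```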
Researchers at the University of Melbourne have developed a new AI-based traffic signal control system called M2SAC that improves both fairness and efficiency at urban intersections. Unlike traditional systems focused only on cars, M2SAC accounts for pedestrians, buses, and other users. A key innovation is the phase mask mechanism, which dynamically adjusts green light timings to reduce delays. Tested on real Melbourne traffic data, the model outperformed existing methods, cutting congestion and balancing traffic flow more equitably. The approach supports smarter, fairer, and more inclusive transport systems for modern cities.
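The phase mask idea can be pictured as constraining which signal phases the learned policy may select at each decision step. The snippet below is a generic masking sketch over phase logits; M2SAC's actual actor-critic architecture and mask construction are not reproduced here.

```python
import numpy as np

def masked_phase_probs(logits, mask):
    """Apply a binary phase mask to a controller's phase logits so that
    disallowed phases (mask == 0) receive zero selection probability.
    Generic illustration, not M2SAC's actual mechanism."""
    masked = np.where(mask.astype(bool), logits, -np.inf)
    exp = np.exp(masked - masked.max())   # numerically stable softmax
    return exp / exp.sum()

# Example: 4 candidate signal phases, phase 2 currently disallowed
logits = np.array([1.2, 0.3, 2.0, -0.5])
mask = np.array([1, 1, 0, 1])
print(masked_phase_probs(logits, mask))   # phase 2 gets probability 0
```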
To address the challenge of reconstructing vehicle movements from incomplete location records, researchers at Korea Advanced Institute of Science and Technology (KAIST) and Donghai Laboratory developed a new model called ProChunkFormer, which reconstructs vehicle trajectories from sparse and noisy GPS data, enabling more accurate mobility analysis and intelligent transportation planning.
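As a rough sketch of trajectory reconstruction from sparse observations, the toy model below uses a standard transformer encoder to impute masked GPS points, assuming PyTorch. It is a generic masked-reconstruction baseline; ProChunkFormer's chunking scheme and architectural details are not reproduced.

```python
import torch
import torch.nn as nn

class TrajectoryReconstructor(nn.Module):
    """Toy transformer encoder that imputes missing GPS points in a
    trajectory. Generic masked-reconstruction sketch, not ProChunkFormer."""
    def __init__(self, d_model=64, nhead=4, nlayers=2, max_len=128):
        super().__init__()
        self.in_proj = nn.Linear(2, d_model)        # (lat, lon) -> embedding
        self.pos = nn.Embedding(max_len, d_model)   # learned positions
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.out_proj = nn.Linear(d_model, 2)       # embedding -> (lat, lon)

    def forward(self, points, observed_mask):
        # Zero out unobserved points so the model must infer them
        x = self.in_proj(points * observed_mask.unsqueeze(-1))
        pos = torch.arange(points.size(1), device=points.device)
        x = x + self.pos(pos)
        return self.out_proj(self.encoder(x))

# Usage: one trajectory of 10 timesteps, roughly half the fixes missing
traj = torch.randn(1, 10, 2)
mask = (torch.rand(1, 10) > 0.5).float()
recon = TrajectoryReconstructor()(traj, mask)
# Train only on the points the model did not observe
loss = ((recon - traj) ** 2 * (1 - mask).unsqueeze(-1)).mean()
```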
The heterogeneity between event cameras and conventional RGB sensors causes spatiotemporal inconsistencies in multimodal data, posing challenges for existing methods in multimodal feature extraction and alignment. First, in the temporal dimension, the microsecond-level temporal resolution of event data is significantly higher than the millisecond-level resolution of RGB data, resulting in temporal misalignment and making direct multimodal fusion infeasible. To address this issue, the researchers design an Event Correction Module (ECM) that temporally aligns asynchronous event streams with their corresponding image frames through optical-flow-based warping. The ECM is jointly optimized with the downstream object detection network to learn task-aware event representations.
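A minimal sketch of the warping step, assuming a dense optical-flow field is already available: bilinear sampling shifts an accumulated event frame to the RGB frame's timestamp. The function below is illustrative only; the ECM learns this alignment jointly with detection and may differ in detail.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(event_frame, flow):
    """Warp an accumulated event frame toward an RGB frame's timestamp
    using a dense optical-flow field (in pixel units), via bilinear
    sampling. event_frame: (B, C, H, W); flow: (B, 2, H, W)."""
    b, _, h, w = event_frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32),
        torch.arange(w, dtype=torch.float32),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]   # displaced x coordinates
    grid_y = ys.unsqueeze(0) + flow[:, 1]   # displaced y coordinates
    # Normalize coordinates to [-1, 1] as grid_sample expects
    grid = torch.stack(
        (2 * grid_x / (w - 1) - 1, 2 * grid_y / (h - 1) - 1), dim=-1
    )
    return F.grid_sample(event_frame, grid, align_corners=True)

# Sanity check: zero flow leaves the event frame unchanged
events = torch.rand(1, 1, 8, 8)
flow = torch.zeros(1, 2, 8, 8)
assert torch.allclose(warp_with_flow(events, flow), events)
```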
In collaboration with universities across the world, Nicholas Hedger (University of Reading) and Tomas Knapen (Netherlands Institute for Neuroscience & Vrije Universiteit Amsterdam) explored the depths of human experience. They discovered how the brain translates the visual world around us into touch, creating a physically embodied world for us to experience. As the researchers put it, "This aspect of human experience is a fantastic area for AI development."