Towards fair lights: a multi-agent masked deep reinforcement learning approach for efficient corridor-level traffic signal control
Peer-Reviewed Publication
Researchers at the University of Melbourne have developed a new AI-based traffic signal control system called M2SAC that improves both fairness and efficiency at urban intersections. Unlike traditional systems focused only on cars, M2SAC accounts for pedestrians, buses, and other users. A key innovation is the phase mask mechanism, which dynamically adjusts green light timings to reduce delays. Tested on real Melbourne traffic data, the model outperformed existing methods, cutting congestion and balancing traffic flow more equitably. The approach supports smarter, fairer, and more inclusive transport systems for modern cities.
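The phase mask idea can be pictured as standard action masking for a discrete policy: signal phases that are currently infeasible (for example, because a minimum green time has not yet elapsed) receive zero probability, so the agent can only select among valid phases. The following PyTorch sketch illustrates that general mechanism; the class name, dimensions, and mask semantics are illustrative assumptions, not the paper's actual M2SAC implementation.

# Minimal sketch of action masking for discrete signal-phase selection.
# All names (PhaseMaskedPolicy, num_phases, etc.) are illustrative; the
# actual M2SAC architecture may differ.
import torch
import torch.nn as nn

class PhaseMaskedPolicy(nn.Module):
    """Policy head that assigns zero probability to masked (infeasible) phases."""

    def __init__(self, obs_dim: int, num_phases: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_phases),
        )

    def forward(self, obs, phase_mask):
        # phase_mask: boolean tensor, True where a phase is currently allowed.
        logits = self.net(obs)
        # Masked phases get -inf logits, so softmax assigns them probability 0.
        logits = logits.masked_fill(~phase_mask, float("-inf"))
        return torch.distributions.Categorical(logits=logits)

# Usage: one agent per intersection; the mask can encode safety constraints
# such as minimum green time or a fixed phase-cycle ordering.
policy = PhaseMaskedPolicy(obs_dim=16, num_phases=4)
obs = torch.randn(1, 16)
mask = torch.tensor([[True, False, True, True]])  # phase 1 infeasible now
action = policy(obs, mask).sample()  # never selects a masked phase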
Researchers at the Korea Advanced Institute of Science and Technology (KAIST) and Donghai Laboratory have developed a new model called ProChunkFormer, which reconstructs vehicle trajectories from sparse and noisy GPS data, enabling more accurate mobility analysis and intelligent transportation planning.
The heterogeneity between event and RGB data causes spatiotemporal inconsistencies in multimodal data, posing challenges for existing methods in multimodal feature extraction and alignment. First, in the temporal dimension, the microsecond-level temporal resolution of event data is significantly higher than the millisecond-level resolution of RGB data, resulting in temporal misalignment and making direct multimodal fusion infeasible. To address this issue, the researchers designed an Event Correction Module (ECM) that temporally aligns asynchronous event streams with their corresponding image frames through optical-flow-based warping. The ECM is jointly optimized with the downstream object detection network to learn task-aware event representations.
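Optical-flow-based warping of this kind is typically implemented as a backward warp of an event representation (for example, a voxel grid accumulated around the event timestamps) onto the RGB frame's timestamp. The PyTorch sketch below shows that general operation; the function name and flow convention are assumptions for illustration, not the paper's exact ECM.

# Hedged sketch: backward-warp an event feature map along optical flow so it
# aligns with the RGB frame's timestamp. The scaling convention and function
# name are illustrative; the actual ECM may parameterize alignment differently.
import torch
import torch.nn.functional as F

def warp_events_to_frame(event_feat, flow):
    """event_feat: (B, C, H, W) event representation
    flow:       (B, 2, H, W) per-pixel displacement from frame time to event time
    """
    B, _, H, W = event_feat.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=event_feat.dtype),
        torch.arange(W, dtype=event_feat.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(B, -1, -1, -1)
    # Shift each pixel by the flow, then normalize to [-1, 1] for grid_sample.
    coords = grid + flow
    coords_x = 2.0 * coords[:, 0] / (W - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (H - 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(event_feat, sample_grid, align_corners=True)

Because the warp is differentiable, a module like this can sit in front of the detector and be trained end to end, which is consistent with the ECM being jointly optimized with the downstream object detection network.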
In collaboration with universities across the world, Nicholas Hedger (University of Reading) and Tomas Knapen (Netherlands Institute for Neuroscience & Vrije Universiteit Amsterdam) explored the depths of the human experience. They discovered how the brain translates the visual world around us into touch, thereby creating a physical embodied world for us to experience. “This aspect of human experience is a fantastic area for AI development.”