Why our brains agree on what we see: New study at Reichman University reveals the shared neural structure behind our common perceptions
Peer-Reviewed Publication
How is it that we all see the world in a similar way? Imagine sitting with a friend in a café, both of you looking at a phone screen displaying a dog running along a beach. Although each of your brains is a world unto itself, made up of billions of neurons with completely different connections and unique activity patterns, you would both describe the scene the same way: “A dog on the beach.” How can two such different brains arrive at the same perception of the world?