image: Results from the research team’s brain-activity simulator show regions of the brain where activity was significantly predicted by the simulator from features of music and movement, employing cross-modal AI generation (center). Street dance evoked the strongest response from these regions.
Credit: 2025 Takagi et al.
Dance is a form of cultural expression that has endured throughout human history, channeling a seemingly innate response to sound and rhythm. A team at the University of Tokyo and collaborators demonstrated distinct fMRI activity patterns in the brain related to viewers’ level of expertise in dance. The findings grew out of recent breakthroughs in dance motion-capture datasets and AI (artificial intelligence) generative models, which facilitated a cross-modal study characterizing the art form’s complexity.
Previous studies on dance have typically been limited to artificially controlled movement or music in isolation, or to coarse binary descriptors of categorized clips. Relating holistic, cross-modal features of real-world performances to local brain activity allowed the capture of fine-grained, high-dimensional relationships in dance. This research project, led by Professor Hiroshi Imamizu of the University of Tokyo, Associate Professor Yu Takagi of the Nagoya Institute of Technology and their team, builds upon quantitative encoding advances in AI-based naturalistic modeling to compare brain responses to stimuli.
“In our research we strived to understand how the human brain directs movement of the body. As an everyday life example, dance provided the perfect theme,” said Imamizu. “Our team had great passion for genres like street dance and ballet, and by collaborating with street dance experts, the research soon took on a life of its own.”
According to the team, a major challenge to date has been that, in order to identify and respond to the multitude of stimuli in the real world, humans must process a wealth of perceptual information, far beyond what isolated laboratory stimuli can represent.
“That’s where the release of the AIST Dance Video Database was a stroke of fortune for us. It has over 13,000 recordings covering 10 genres of street dance,” said Imamizu. “It also led to an AI model that generates choreography from music. It almost felt as though our research was being propelled by this new era of technology itself.”
In describing the study, the researchers said one of the underlying problems they would like to solve is understanding how the brain and AI correspond to each other. Can AI models represent the human mind? And conversely, can brain functions be used to grasp the inner workings of AI?
To answer this, the team recruited 14 participants of mixed dance backgrounds and monitored their brain responses while they viewed 1,163 dance clips featuring a variety of dancers and styles.
“By linking a choreographing AI to fMRI, or functional magnetic resonance imaging, a technique that can visualize active regions of the brain by recognizing small changes in blood flow, we could pinpoint where the brain binds music and movement,” said Takagi. “Cross‑modal features consistently predicted activity in higher‑order association areas better than motion‑only or sound‑only features — evidence that integration of different sensory modalities such as sight and sound is central to how we experience dance.”
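At its core, the comparison Takagi describes is a voxelwise encoding analysis: features extracted from each clip are regressed onto the measured brain responses, and prediction accuracy on held-out clips is compared across feature sets. The short Python sketch below illustrates the logic with random placeholder data; the feature dimensions, the ridge-regression model and the correlation metric are illustrative assumptions rather than the team’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Hypothetical inputs, one row per dance clip:
#   X_motion : motion-only features, X_sound : sound-only features,
#   X_cross  : cross-modal features from a generative model,
#   Y        : fMRI responses (clips x voxels).
rng = np.random.default_rng(0)
n_clips, n_voxels = 1163, 500            # 1,163 clips, as in the study
X_motion = rng.standard_normal((n_clips, 64))
X_sound = rng.standard_normal((n_clips, 64))
X_cross = rng.standard_normal((n_clips, 128))
Y = rng.standard_normal((n_clips, n_voxels))

def encoding_accuracy(X, Y):
    """Fit ridge regression per voxel; return mean held-out correlation."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
    model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
    Y_hat = model.predict(X_te)
    # Correlate predicted and measured responses voxel by voxel.
    rs = [np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])]
    return float(np.mean(rs))

for name, X in [("motion-only", X_motion), ("sound-only", X_sound),
                ("cross-modal", X_cross)]:
    print(f"{name:12s} mean prediction r = {encoding_accuracy(X, Y):.3f}")
```

With real data, the study’s finding corresponds to the cross-modal row scoring highest in higher-order association areas; with the random data here, all three hover around zero.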
The findings also suggested that the model’s next-motion prediction architecture aligns well with human cognition, revealing parallels between how biological and artificial systems process and integrate audiovisual information.
Furthermore, to identify how dance features mapped onto brain responses and emotional experiences, the team created a list of impression concepts, informed by expert dancers, each rated on multiple scales. Responses from an online survey were processed through a brain‑activity simulator they’d developed, showing that different impressions correspond to distinct, distributed neural patterns, and that aesthetic and emotional responses were not reducible to a single dimension.
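Conceptually, this step runs the mapping in the other direction: impression ratings serve as predictors, and the fitted model yields one predicted whole-brain pattern per impression scale, whose distinctness can then be compared. A rough illustration follows; the scale names, rating format and linear model are hypothetical stand-ins, not the study’s materials.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical survey data: each of the 1,163 clips rated on several
# impression scales; Y holds the corresponding voxel patterns.
rng = np.random.default_rng(1)
scales = ["graceful", "powerful", "joyful", "tense"]   # made-up labels
ratings = rng.uniform(1, 7, size=(1163, len(scales)))  # 7-point scales
Y = rng.standard_normal((1163, 500))

# Map ratings to brain patterns; each column of coefficients is the
# distributed pattern associated with one impression scale.
model = Ridge(alpha=10.0).fit(ratings, Y)
patterns = model.coef_.T                               # (scales x voxels)

# If impressions collapsed onto a single dimension, these patterns would
# be nearly identical up to sign; low pairwise correlations argue otherwise.
corr = np.corrcoef(patterns)
for i in range(len(scales)):
    for j in range(i + 1, len(scales)):
        print(f"{scales[i]:>9s} vs {scales[j]:<9s} pattern r = {corr[i, j]:+.3f}")
```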
“Surprisingly, our brain-activity simulator was able to predict responses more precisely in experts than in nonexpert audiences. Even more interesting was the fact that while nonexperts exhibited individual differences in response patterns, the videos elicited an even more diverse set of patterns in experts,” said Imamizu. “In other words, the results suggest that brain responses diverged rather than converged with expertise. This has very interesting implications for understanding the relationship between experience and diversity of expression in art. We believe that the demonstrated freedom to connect tightly controlled research methods with large, diverse real-world datasets opens up a new dimension of research possibilities.”
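One simple way to quantify the divergence Imamizu describes is the average pairwise dissimilarity of participants’ response patterns within each group. The snippet below sketches that measure with placeholder data; the group split and the 1 - correlation metric are illustrative assumptions, not the paper’s exact analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-participant response patterns (participants x features),
# splitting the 14 participants into expert and nonexpert groups.
experts = rng.standard_normal((7, 500))
nonexperts = rng.standard_normal((7, 500))

def pattern_diversity(group):
    """Mean pairwise (1 - correlation) across participants; higher = more diverse."""
    corr = np.corrcoef(group)
    upper = corr[np.triu_indices_from(corr, k=1)]
    return float(np.mean(1.0 - upper))

print(f"expert diversity:    {pattern_diversity(experts):.3f}")
print(f"nonexpert diversity: {pattern_diversity(nonexperts):.3f}")
```

On the study’s data, the analogous comparison came out higher for experts, matching the divergence the team reports.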
For the impassioned members of the team, the results brought them full circle. “We would love nothing more than to see our brain-activity simulator used as a tool to create new dance styles that move people. We also very much wish to explore applications to other forms of art,” said Imamizu.
###
Journal article: Yu Takagi, Daichi Shimizu, Mina Wakabayashi, Ryu Ohata, and Hiroshi Imamizu, “Cross-modal deep generative models reveal the cortical representation of dancing”, Nature Communications, November 18, 2025, DOI: 10.1038/s41467-025-65039-w
Link: https://www.nature.com/articles/s41467-025-65039-w
Funding: This work was supported by JSPS KAKENHI grant numbers JP19H05725, JP24H00172 and 22K03073, and PRESTO grant number JPMJPR23I6.
Useful links:
Graduate School of Humanities and Sociology - https://www.l.u-tokyo.ac.jp/eng/index.html
Department of Psychology - https://www.l.u-tokyo.ac.jp/psy/en/overview/
AIST Dance Video Database - https://aistdancedb.ongaaccel.jp
Research contact:
Professor Hiroshi Imamizu
Department of Psychology, The University of Tokyo,
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, JAPAN
imamizu@l.u-tokyo.ac.jp
Press contact:
Mr. Rohan Mehra
Strategic Communications Group, The University of Tokyo,
7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
press-releases.adm@gs.mail.u-tokyo.ac.jp
About The University of Tokyo:
The University of Tokyo is Japan's leading university and one of the world's top research universities. The vast research output of some 6,000 researchers is published in the world's top journals across the arts and sciences. Our vibrant student body of around 15,000 undergraduate and 15,000 graduate students includes over 4,000 international students. Find out more at www.u-tokyo.ac.jp/en/ or follow us on X (formerly Twitter) at @UTokyo_News_en.