Duncan T. Moore named Optica Honorary Member
Grant and Award Announcement
Optica names Duncan Moore as an Honorary Member for his pioneering contributions to gradient-index optics, leadership in public policy, dedicated service to the optics community and distinguished roles in academia, government and professional societies.
121 members from 23 countries have been elected as Fellows for their outstanding contributions and service to Optica and the community.
A new study from the University of Würzburg's Chair of Mathematics Education shows that AI research for STEM education focuses too much on technology and neglects the holistic development of students.
A new study reveals how our brains store and change memories. Researchers investigated episodic memory, the kind of memory we use to recall personal experiences such as a birthday party or a holiday.
They showed that memories aren’t just stored like files in a computer. Instead, they’re made up of different parts. And while some are active and easy to recall, others stay hidden until something triggers them.
Importantly, the review shows that for something to count as a real memory, it must be linked to a real event from the past. But even then, the memory we recall might not be a perfect copy. It can include extra details from our general knowledge, past experiences, or even the situation we’re in when we remember it.
The team say their work has important implications for mental health, education, and legal settings where memory plays a key role.
Research on open-source large language models (LLMs) has made significant progress, but most studies focus predominantly on general-purpose English data, which poses challenges for LLM research in Chinese education. To address this, this work first reviews and synthesizes the core technologies of representative open-source LLMs, then designs a 1.5B-parameter LLM tailored to the Chinese education field. The Chinese education large language model (CELLM) is trained from scratch in two stages: pre-training and instruction fine-tuning. In the pre-training phase, an open-source dataset for the Chinese education domain is used. During the instruction fine-tuning stage, a Chinese instruction dataset comprising over 258,000 entries is developed and open-sourced. Finally, results and analysis of CELLM across multiple evaluation datasets are presented, providing a reference baseline for future research. All models, data, and code are open-sourced to foster community research on LLMs in the Chinese education domain.
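In the instruction fine-tuning stage described above, each entry of an instruction dataset is typically rendered into a single training string before being fed to the model. The sketch below illustrates that formatting step; the prompt template and field names (`instruction`, `input`, `output`) are assumptions for illustration, since the announcement does not specify CELLM's actual data format.

```python
# Hypothetical formatting step for instruction fine-tuning data.
# The template and field names are assumptions; CELLM's actual
# prompt format is not specified in the announcement.

def format_instruction(entry: dict) -> str:
    """Render one instruction-tuning entry as a single training string."""
    template = (
        "### Instruction:\n{instruction}\n\n"
        "### Input:\n{input}\n\n"
        "### Response:\n{output}"
    )
    return template.format(
        instruction=entry["instruction"],
        input=entry.get("input", ""),  # optional context field
        output=entry["output"],
    )

example = {
    "instruction": "Explain the Pythagorean theorem to a middle-school student.",
    "input": "",
    "output": "In a right triangle, a^2 + b^2 = c^2, where c is the hypotenuse.",
}
print(format_instruction(example).splitlines()[0])  # → "### Instruction:"
```

Strings produced this way would then be tokenized and used as supervised fine-tuning targets, with the loss usually computed only on the response portion.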