The overlooked role of grain boundary thickness in shaping mechanical properties of solid materials
Peer-Reviewed Publication
Last Updated: 17-Jan-2026 08:11 ET (17-Jan-2026 13:11 GMT/UTC)
Crystalline-amorphous composites comprise crystalline grains separated by amorphous boundaries. The combined role of grain size (D) and amorphous boundary thickness (l) on material properties has not been explored. Now, writing in National Science Review, a team from the Hong Kong University of Science and Technology reports simulation results of mechanical properties across the (D, l) parameter space. They identify optimal (D, l) values that provide maximum strength while also enhancing ductility, successfully circumventing the classic strength-ductility tradeoff.
Scientists have designed a gradient sodium-tin alloy/sodium bilayer anode that solves the two problems of dendrite growth and sodium loss in sodium batteries. This innovative structure features an upper "ion-buffering" layer that guides sodium ions for dendrite-free deposition and a bottom reservoir that dynamically compensates for lost sodium. The resulting batteries achieve an unprecedented energy density and ultralong cyclability in lab tests, paving the way for more powerful and durable energy storage.
Nanyang Technological University, Singapore (NTU Singapore) and Zero Gravity (0G), a decentralised AI infrastructure company, have announced a S$5 million partnership to establish a joint research hub advancing blockchain-based artificial intelligence (AI) technologies that will be more accessible and accountable. This marks 0G’s first university collaboration globally and will fund multiple projects exploring decentralised AI training, blockchain-integrated model alignment, and proof-of-useful-work consensus mechanisms.
Avram Miller, a world-renowned scientist and innovator, has been appointed by the Italian Institute of Technology (IIT) as the Institute’s first IIT Fellow. With this recognition, Avram Miller inaugurates a new IIT initiative to honor outstanding leaders in science and innovation, in line with prestigious academic and industrial traditions worldwide.
Human-machine intelligent interaction (HMII) technology, an advanced iteration of human-machine interaction technology, has garnered widespread attention owing to its significant achievements in healthcare and virtual reality research. This work reports a self-powered, transistor-like iontronic pressure sensor based on an MXene/Bi 2D heterojunction for advanced human-machine intelligent interaction. The free-standing device uses MXene@Zn and MXene@Bi interdigitated electrodes, a PVDF-HFP-GO solid electrolyte, and a CNF isolation layer to mimic a p-type MOSFET, in which pressure “gates” ion transport and generates encodable voltage outputs. The sensor exhibits a 1.1 V open-circuit voltage, millisecond-level response (66.59 ms), excellent linearity (99.5%), and durability over 50,000 cycles, enabling self-powered monitoring of physiological signals and robotic motions, wireless transmission, and deep-learning-assisted gesture recognition with 95.83% accuracy in a single-device HMII system.
Existing 3D scene reconstruction requires a cumbersome process of precisely measuring physical spaces with LiDAR or 3D scanners, or correcting thousands of photos along with camera pose information. The research team at KAIST has overcome these limitations and introduced a technology enabling the reconstruction of 3D spaces, from tabletop objects to outdoor scenes, with just two to three ordinary photographs. The breakthrough suggests a new paradigm in which spaces captured by a camera can be immediately transformed into virtual environments.
KAIST announced on November 6 that the research team led by Professor Sung-Eui Yoon from the School of Computing has developed a new technology called SHARE (Shape-Ray Estimation), which can reconstruct high-quality 3D scenes using only ordinary images, without precise camera pose information.
Existing 3D reconstruction technology has required precise camera position and orientation information at the time of shooting to reproduce 3D scenes from a small number of images. This has necessitated specialized equipment or complex calibration processes, making real-world applications difficult and slowing widespread adoption.
To solve these problems, the research team developed a technology that constructs accurate 3D models by simultaneously estimating the 3D scene and the camera pose using just two to three standard photographs. The technology has been recognized for its high efficiency and versatility, enabling rapid and precise reconstruction in real-world environments without additional training or complex calibration processes.
While existing methods calculate 3D structures from known camera poses, SHARE autonomously extracts spatial information from images themselves and infers both camera pose and scene structure. This enables stable 3D reconstruction without shape distortion by aligning multiple images taken from different positions into a single unified space.
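The core idea of jointly inferring camera pose and scene structure can be illustrated with a deliberately simplified sketch. The code below is not KAIST's SHARE method; it is a toy NumPy example, under the assumption of cameras that differ only by an unknown 2D translation, showing how gradient descent on the observation error can recover both the camera positions and the point positions at once, with one camera anchored to fix the global reference frame:

```python
import numpy as np

# Toy joint pose-and-structure estimation (NOT the SHARE algorithm):
# several "cameras" observe the same 2D points, each shifted by its own
# unknown translation; we recover both unknowns from observations alone.
rng = np.random.default_rng(0)
true_points = rng.uniform(-1, 1, size=(5, 2))                 # unknown scene structure
true_cams = np.array([[0.0, 0.0], [0.5, 0.1], [-0.3, 0.4]])   # unknown camera positions

# Each camera's observation of each point is the point minus the camera position.
obs = true_points[None, :, :] - true_cams[:, None, :]         # shape (3, 5, 2)

cams = np.zeros_like(true_cams)
points = obs[0].copy()            # initialize structure from the first camera's view
for _ in range(500):
    pred = points[None] - cams[:, None]
    resid = pred - obs                              # reprojection-style error
    points -= 0.1 * resid.sum(axis=0) / len(cams)   # update structure
    grad_c = -resid.sum(axis=1) / points.shape[0]   # update poses...
    grad_c[0] = 0.0                                 # ...keeping camera 0 anchored
    cams -= 0.1 * grad_c
```

After the loop, `cams` and `points` match the ground truth up to the chosen anchor. Real pose-free reconstruction must additionally handle rotations, perspective projection, and unknown correspondences, which is what makes methods like SHARE nontrivial.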