Towards spatial computing: Recent advances in multimodal natural interaction for XR headsets
Peer-Reviewed Publication
Last Updated: 3-Apr-2026 10:15 ET (3-Apr-2026 14:15 GMT/UTC)
Researchers have conducted a comprehensive review of recent advances in multimodal natural interaction techniques for Extended Reality (XR) headsets, revealing significant trends in spatial computing technologies. This timely review analyzes how recent breakthroughs in artificial intelligence (AI) and large language models (LLMs) are transforming how users interact with virtual environments, offering valuable insights for the future development of more natural, efficient, and immersive XR experiences.
Grammatical error correction (GEC) is a key task in natural language processing (NLP), widely applied in education, news, and publishing. Traditional methods mainly rely on sequence-to-sequence (Seq2Seq) and sequence-to-edit (Seq2Edit) models, while large language models (LLMs) have recently shown strong performance in this area.
In machine learning, it is often necessary to statistically compare the overall performance of two algorithms (e.g., our proposed algorithm and each compared baseline) based on multiple benchmark datasets. In this case, our proposed algorithm is typically referred to as the control algorithm. However, in some cases, it is also essential to conduct pairwise statistical comparisons of multiple algorithms without a control algorithm.
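The comparison described above is commonly carried out with non-parametric tests over per-dataset scores. As a minimal sketch (the score matrix below is synthetic illustration data, and the choice of the Friedman test with pairwise Wilcoxon signed-rank tests and a Bonferroni correction is one standard recipe, not necessarily the procedure the authors use):

```python
import numpy as np
from scipy import stats

# rows = benchmark datasets, columns = algorithms A, B, C
# (higher is better; values are synthetic, for illustration only)
scores = np.array([
    [0.82, 0.79, 0.75],
    [0.91, 0.88, 0.86],
    [0.77, 0.74, 0.73],
    [0.85, 0.80, 0.78],
    [0.88, 0.86, 0.81],
    [0.79, 0.77, 0.74],
])

# Friedman test: do the algorithms' overall rankings across datasets differ?
stat, p_friedman = stats.friedmanchisquare(*scores.T)

# If significant, follow up with pairwise Wilcoxon signed-rank tests
# (no control algorithm: every pair is compared against every other),
# using a simple Bonferroni correction over the number of comparisons.
pairs = [(0, 1), (0, 2), (1, 2)]
p_pairwise = {
    pair: stats.wilcoxon(scores[:, pair[0]], scores[:, pair[1]]).pvalue
    for pair in pairs
}
alpha = 0.05 / len(pairs)  # Bonferroni-adjusted significance threshold
```

When a single control algorithm is compared against each baseline instead, the pairwise step is usually restricted to control-vs-baseline pairs, which reduces the number of comparisons and so weakens the multiple-testing correction needed.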
Heterogeneous graphs organize data with nodes and edges, and have been widely used in various graph-centric applications. Often, some data are omitted during manual construction, leaving the graph incomplete and degrading performance on downstream tasks. Existing methods recover the missing data based on the data already within a single graph, neglecting the fact that graphs from different sources share some common nodes due to scope overlap.
Current continual learning methods can utilize labeled data to alleviate catastrophic forgetting effectively. However, obtaining labeled samples can be difficult and tedious as it may require expert knowledge. In many practical application scenarios, labeled and unlabeled samples exist simultaneously, with more unlabeled than labeled samples in streaming data. Unfortunately, existing class-incremental learning methods face limitations in effectively utilizing unlabeled data, thereby impeding their performance in incremental learning scenarios.
Database optimization has long relied on traditional methods that struggle with the complexities of modern data environments. These methods often fail to efficiently handle large-scale data, complex queries, and dynamic workloads, leading to suboptimal performance and increased computational costs. To address these challenges, researchers have turned to AI4DB (Artificial Intelligence for Database), integrating advanced machine learning and deep learning techniques to enhance database optimization.