Team develops smart synthetic material inspired by octopus skin
Peer-Reviewed Publication
Paraphrase generation requires producing diverse, high-quality utterances for given semantics, which is a challenge for traditional end-to-end text generation.
Inspired by diffusion models' ability to generate diverse images, a research team from Nanjing University led by Wei Zou reconciled quality and diversity in paraphrase generation via latent diffusion modeling (the Latent Diffusion Paraphraser, LDP). The team published its new research on 15 January 2026 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.
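To make the diffusion idea concrete, the toy sketch below shows the standard forward (noising) process that diffusion models apply, here to a small latent vector standing in for a sentence representation. The schedule, dimensions, and notation (`T`, `betas`, `alpha_bars`) are generic diffusion conventions for illustration, not the LDP paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 100                                # number of diffusion steps
betas = np.linspace(1e-4, 0.05, T)     # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)        # cumulative signal retention

def q_sample(z0, t, eps):
    """Sample z_t ~ q(z_t | z_0) in closed form:
    z_t = sqrt(alpha_bar_t) * z0 + sqrt(1 - alpha_bar_t) * eps."""
    return np.sqrt(alpha_bars[t]) * z0 + np.sqrt(1.0 - alpha_bars[t]) * eps

z0 = rng.normal(size=8)                # a toy 8-dim "sentence latent"
eps = rng.normal(size=8)               # Gaussian noise

z_early = q_sample(z0, 5, eps)         # early step: mostly signal
z_late = q_sample(z0, T - 1, eps)      # late step: mostly noise
```

A learned denoiser would then reverse this process, and sampling different noise vectors `eps` at generation time is what yields diverse outputs from the same semantics.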
Graph out-of-distribution (OOD) generalization remains a major challenge in graph neural networks (GNNs). Invariant learning, which aims to extract features that stay invariant across varied distributions, has recently emerged as a promising approach for OOD generalization. However, its exploration on graph data remains constrained by the complex nature of graphs: because invariant features span both the attribute and structural levels, and prior knowledge of environmental factors is unavailable, the invariance and sufficiency conditions of invariant learning are hard to satisfy on graph data. Existing approaches, such as data augmentation or causal intervention, either disrupt invariance during the graph manipulation process or face reliability issues due to a lack of supervised signals for causal parts.
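The core intuition behind invariant learning can be illustrated with a generic risk-variance penalty (in the style of V-REx) on toy tabular data; this sketch shows the general principle of penalizing predictors whose error differs across environments, not the specific method of the study above.

```python
import numpy as np

def env_risks(w, envs):
    """Mean squared error of a linear predictor w in each environment."""
    return np.array([np.mean((X @ w - y) ** 2) for X, y in envs])

def invariance_penalty(w, envs):
    """Variance of per-environment risks: near zero iff risk is equalized."""
    return np.var(env_risks(w, envs))

rng = np.random.default_rng(1)

def make_env(spurious_sign, n=200):
    """Two features: column 0 is stable across environments; column 1 is
    spurious, its correlation with y flipping sign between environments."""
    X = rng.normal(size=(n, 2))
    y = 1.5 * X[:, 0] + spurious_sign * 0.8 * X[:, 1] + 0.1 * rng.normal(size=n)
    return X, y

envs = [make_env(+1), make_env(-1)]

w_invariant = np.array([1.5, 0.0])   # relies only on the stable feature
w_spurious = np.array([1.5, 0.8])    # also relies on the unstable feature
```

The invariant predictor incurs similar risk in both environments, so its penalty is small; the spurious predictor does well in one environment and badly in the other, so its penalty is large. The difficulty the paragraph describes is that on graphs the "environments" and the stable features are not given and must be inferred from attributes and structure jointly.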
A research team at Southwest Jiaotong University published its latest study on 15 January 2026 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature, proposing a novel Bidirectional Chain-of-Thought (BiCoT) framework for zero-shot object navigation.
Chain-of-Thought (CoT) prompting has enhanced the performance of Large Language Models (LLMs) across various reasoning tasks. However, CoT still falls short on complex math word problems, where it typically suffers from three pitfalls: semantic misunderstanding errors, calculation errors, and step-missing errors. Prior studies address the calculation errors and step-missing errors but neglect the semantic misunderstanding errors, which are the major factor limiting the reasoning performance of LLMs.
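For readers unfamiliar with the technique, the sketch below assembles a minimal few-shot CoT prompt for a math word problem. The exemplar and template reflect generic CoT practice ("Let's think step by step" with a worked example), not the study's specific prompt, and the actual LLM call is omitted.

```python
# One worked exemplar demonstrating explicit intermediate steps, so the
# model is nudged to write out its arithmetic rather than answer directly.
COT_EXEMPLAR = (
    "Q: Tom has 3 boxes with 4 apples each. He eats 2 apples. "
    "How many apples are left?\n"
    "A: Let's think step by step. Tom starts with 3 * 4 = 12 apples. "
    "After eating 2, he has 12 - 2 = 10 apples. The answer is 10.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the worked exemplar and a step-by-step cue to the question."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA: Let's think step by step."

prompt = build_cot_prompt(
    "A train travels 60 km/h for 2 hours. How far does it go?"
)
```

Spelling out intermediate steps mitigates calculation and step-missing errors, but if the model misreads the problem statement itself (a semantic misunderstanding error), every subsequent step inherits the mistake, which is the gap the paragraph highlights.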
With the rapid advancement of Large Language Models (LLMs), an increasing number of researchers are focusing on Generative Recommender Systems (GRSs). Unlike traditional recommender systems that rely on fixed candidate sets, GRSs leverage generative capabilities, making them more effective at exploring user interests.