A flexible method for LoRA-based large language model fine-tuning
Peer-Reviewed Publication
Last Updated: 27-Jul-2025 07:10 ET (27-Jul-2025 11:10 GMT/UTC)
In their research on enhancing LoRA-based fine-tuning for LLMs, the researchers introduced an Enhanced Matrix Decomposition for single-task scenarios and a routing mechanism for multi-task learning, improving flexibility and performance without increasing computational complexity.
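The paper's Enhanced Matrix Decomposition and routing mechanism are not detailed in this summary, so the following is only a minimal NumPy sketch of the general idea behind routed LoRA: a frozen pretrained weight plus several low-rank expert updates B_e A_e, mixed per input by a softmax router. All names and sizes (`W_gate`, `n_experts`, the rank `r`) are illustrative assumptions, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_experts = 8, 2, 3                 # hidden size, LoRA rank, number of LoRA "experts" (illustrative)

W = rng.normal(size=(d, d))               # frozen pretrained weight (not updated)
A = rng.normal(size=(n_experts, r, d))    # per-expert down-projections (trainable)
B = np.zeros((n_experts, d, r))           # per-expert up-projections, zero-initialized
W_gate = rng.normal(size=(d, n_experts))  # router weights (hypothetical)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    base = x @ W.T                        # frozen base path
    gate = softmax(x @ W_gate)            # per-input mixture weights over experts
    # delta[b, o] = sum_e gate[b, e] * (B_e @ A_e @ x_b)[o]
    delta = np.einsum('be,eor,erd,bd->bo', gate, B, A, x)
    return base + delta

x = rng.normal(size=(4, d))
y = forward(x)
```

Because the up-projections `B_e` start at zero, the adapted layer initially matches the frozen base model, the standard LoRA initialization; training would update only `A`, `B`, and the router while `W` stays fixed.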
Time-consuming testing and computer simulations are bottlenecks in the design of new materials. A thesis from the University of Gothenburg aims to develop an AI model that can efficiently determine the durability and strength of woven composite materials.
A research team offers a comprehensive review of cognitive strategy-enhanced persuasive dialogue agents (CogAgent). They formalize cognitive strategies drawn from cognitive psychology, summarize existing research, and analyze benchmarks and evaluation metrics to advance the field.
A research team proposed POGs, which provide each container with its own log configuration, storage, and view, enhancing isolation, security, and efficiency with negligible performance overhead.
A research team proposed MDIDCN, a dual-channel network model for predicting miRNA-drug interactions (MDIs). Combining a temporal convolutional network (TCN) with a bidirectional LSTM (BiLSTM), the model achieved high prediction accuracy.
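The summary names the two channels but not the MDIDCN architecture, so this is a hedged PyTorch sketch of the generic pattern: a dilated-convolution (TCN-style) channel and a BiLSTM channel run over the same embedded sequence, with their pooled outputs fused for a binary interaction score. All layer sizes and the class name are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DualChannelMDI(nn.Module):
    """Illustrative dual-channel model: TCN-style convolutions plus a BiLSTM,
    fused for binary interaction prediction (not the paper's exact MDIDCN)."""
    def __init__(self, feat_dim=16, hidden=32):
        super().__init__()
        # Channel 1: stacked dilated 1-D convolutions (TCN-style); padding keeps length.
        self.tcn = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(),
        )
        # Channel 2: bidirectional LSTM over the same sequence.
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        # Fusion head: concatenate pooled channel features, predict a probability.
        self.head = nn.Sequential(nn.Linear(hidden + 2 * hidden, 1), nn.Sigmoid())

    def forward(self, x):                                # x: (batch, seq_len, feat_dim)
        c1 = self.tcn(x.transpose(1, 2)).mean(dim=2)     # (batch, hidden)
        c2, _ = self.bilstm(x)
        c2 = c2.mean(dim=1)                              # (batch, 2 * hidden)
        return self.head(torch.cat([c1, c2], dim=1)).squeeze(1)

model = DualChannelMDI()
probs = model(torch.randn(4, 10, 16))                    # 4 pairs, sequences of length 10
```

The mean-pooling and sigmoid head are one common choice for turning the two sequence channels into a single interaction probability per miRNA-drug pair.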
A research team proposed new data placement algorithms for scratch-pad memory (SPM) in embedded systems. Their fine-grained and multi-granularity algorithms effectively reduce data transfers and access latency, addressing non-consecutive array accesses and memory-activation overhead.
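The blurb does not describe the placement algorithms themselves, so as a simple point of reference, here is a classic greedy baseline for SPM data placement: put the arrays with the highest access count per byte into the limited scratch-pad first. The workload numbers and capacity are made up; the paper's fine-grained and multi-granularity algorithms are not reproduced here.

```python
# Hypothetical greedy baseline for scratch-pad memory (SPM) placement.
SPM_SIZE = 64            # SPM capacity in bytes (illustrative)
arrays = {               # name: (size_bytes, access_count) -- made-up workload
    "a": (32, 400),
    "b": (48, 300),
    "c": (16, 220),
    "d": (32, 100),
}

placed, free = [], SPM_SIZE
# Place arrays densest-first: most accesses per byte of SPM consumed.
for name, (size, acc) in sorted(arrays.items(),
                                key=lambda kv: kv[1][1] / kv[1][0],
                                reverse=True):
    if size <= free:     # greedy: place whole array if it fits, else skip
        placed.append(name)
        free -= size
```

Whole-array greedy placement is exactly what fine-grained and multi-granularity schemes improve on, since splitting an array lets hot slices occupy SPM even when the full array does not fit.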
A research team proposed an efficient algorithm for model diagnostics. The method approximates the ranking of components' failure probabilities, achieving high accuracy and significant runtime improvements over existing algorithms.
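The summary gives no detail on the approximation scheme, so this is only a generic illustration of the underlying idea: when exact failure probabilities are expensive to compute, rank components by cheap estimates instead. Monte Carlo sampling stands in here for whatever approximation the paper actually uses; the component names and probabilities are invented.

```python
import random

# Illustrative only: rank components by *estimated* failure probability
# rather than computing exact probabilities (a stand-in for the paper's method).
random.seed(0)
true_p = {"pump": 0.30, "valve": 0.05, "sensor": 0.15}   # hypothetical ground truth

def estimate(p, n=20000):
    # Bernoulli-sampling estimate of a failure probability.
    return sum(random.random() < p for _ in range(n)) / n

est = {c: estimate(p) for c, p in true_p.items()}
ranking = sorted(est, key=est.get, reverse=True)          # most failure-prone first
```

The point of such approximations is that the *ranking* stabilizes with far less work than the exact probabilities require, which is where the reported runtime improvements come from.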