Shrinking AI memory boosts accuracy, study finds
Reports and Proceedings
Last Updated: 2-Apr-2026 18:15 ET (2-Apr-2026 22:15 GMT/UTC)
Researchers have developed a new way to compress the memory used by AI models, either increasing their accuracy on complex tasks or saving significant amounts of energy.
Experts from the University of Edinburgh and NVIDIA found that large language models (LLMs) using a memory footprint eight times smaller than that of an uncompressed LLM scored better on maths, science and coding tests while spending the same amount of time reasoning.
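The scale of an eight-fold memory reduction can be illustrated with a back-of-the-envelope calculation of an LLM's key-value cache, one of the main memory costs during reasoning. The model dimensions below (layer count, head count, head size, sequence length) are illustrative assumptions, not figures from the study:

```python
# Illustrative KV-cache sizing sketch. All model dimensions here are
# assumptions for the sake of example, not figures from the study.
def kv_cache_bytes(layers, heads, head_dim, seq_len, bytes_per_value=2):
    # Keys and values (factor of 2) stored for every layer, head, and
    # token position, at 2 bytes per value (fp16).
    return 2 * layers * heads * head_dim * seq_len * bytes_per_value

# Hypothetical mid-sized model with an 8,192-token reasoning trace.
full = kv_cache_bytes(layers=32, heads=32, head_dim=128, seq_len=8192)
compressed = full // 8  # the reported eight-fold reduction

print(f"full: {full / 2**30:.1f} GiB, compressed: {compressed / 2**30:.2f} GiB")
# → full: 4.0 GiB, compressed: 0.50 GiB
```

For these assumed dimensions, the cache shrinks from roughly 4 GiB to half a gigabyte, which is the kind of saving that can either be banked as energy or reinvested in longer reasoning at the same cost.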