Mathematical reasoning is a fundamental aspect of intelligence, spanning a spectrum from basic arithmetic to intricate problem solving. Recent investigations into the mathematical abilities of large language models (LLMs) have yielded inconsistent and incomplete assessments. In response, we introduce MathEval, a comprehensive benchmark designed to methodically evaluate the mathematical problem-solving proficiency of LLMs across diverse contexts, adaptation strategies, and evaluation metrics. MathEval consolidates 22 distinct datasets covering a broad range of mathematical disciplines, languages (including English and Chinese), and problem categories (from arithmetic and competition mathematics to higher mathematics), at difficulty levels ranging from elementary to advanced. To handle the complexity of free-form mathematical reasoning outputs and to adapt to diverse models and prompts, we employ GPT-4 as an automated pipeline for answer extraction and comparison. Additionally, we fine-tuned a DeepSeek-LLM-7B-Base model on GPT-4's judgments and released it publicly, enabling precise answer validation without requiring GPT-4 access. To mitigate potential test-data contamination and gauge genuine progress, MathEval incorporates an annually refreshed set of problems from the latest Chinese National College Entrance Examination (Gaokao-2023, Gaokao-2024), thereby benchmarking real advances in mathematical problem-solving skills.
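
To make the grading step concrete, below is a minimal Python sketch of an LLM-as-judge answer-extraction-and-comparison loop in the spirit of the GPT-4 pipeline described above. The prompt wording, the `grade` function, and the pluggable `judge` callable are illustrative assumptions, not MathEval's actual implementation; in a real run the judge would wrap GPT-4 or the fine-tuned DeepSeek verifier.

```python
"""Minimal sketch of LLM-judged answer checking (illustrative, not MathEval's code)."""
from typing import Callable

# Hypothetical judging prompt: ask the judge model to extract the final answer
# from a free-form solution and compare it with the reference answer.
EXTRACTION_PROMPT = (
    "Below are a model's solution to a math problem and the reference answer.\n"
    "First extract the final answer from the solution, then reply with exactly\n"
    "EQUIVALENT if it matches the reference answer mathematically\n"
    "(e.g. 0.5 vs 1/2), and DIFFERENT otherwise.\n\n"
    "Solution:\n{solution}\n\n"
    "Reference answer:\n{reference}\n"
)


def grade(solution: str, reference: str, judge: Callable[[str], str]) -> bool:
    """Return True if the judge deems the solution's final answer correct.

    `judge` is any prompt-in, text-out callable; in the setup sketched here
    this role would be played by GPT-4 or a fine-tuned open-weights verifier.
    """
    reply = judge(EXTRACTION_PROMPT.format(solution=solution, reference=reference))
    return "EQUIVALENT" in reply.upper().split()


if __name__ == "__main__":
    def toy_judge(prompt: str) -> str:
        # Stand-in judge for demonstration only: string containment instead of
        # a real model call, just to make the sketch runnable end to end.
        solution = prompt.split("Solution:\n")[1].split("\n\nReference answer:")[0]
        reference = prompt.split("Reference answer:\n")[1].strip()
        return "EQUIVALENT" if reference in solution else "DIFFERENT"

    print(grade("3/4 + 1/4 = 1, so the answer is 1.", "1", toy_judge))  # True
    print(grade("The answer is 2.", "1", toy_judge))                    # False
```

Decoupling grading from generation this way is what lets a fine-tuned 7B verifier stand in for GPT-4: any model exposing the same prompt-in, verdict-out interface can fill the `judge` slot.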