Researchers are exploring ways to enhance mathematical reasoning in Large Language Models (LLMs), investigating analogical tasks involving semantic structure-mapping, scaling laws for synthetic data, and reinforcement learning for math reasoning. One line of work aims to improve LLMs' abilities by letting them express intermediate reasoning as images via a 'whiteboard-of-thought.'
Evaluating Large Vision-and-Language Models on Children's Mathematical Olympiads. https://t.co/szoInEEQVT
🚨 New paper! 🚨 We solve lots of tasks posed in words by thinking visually. Can LLMs? Not in text, but we can unlock this ability with images! Introducing whiteboard-of-thought, enabling MLLMs to express intermediate reasoning as images via code! 🔗 https://t.co/J0WUeVz3p3 🧵 https://t.co/Z38whXkVG6
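The whiteboard-of-thought loop the tweet describes (an MLLM writes code that draws its intermediate reasoning, the drawing is rendered to an image, and the image is fed back to the model's vision input) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: `call_mllm` is a hypothetical stub standing in for a real multimodal model call, and the rendered "whiteboard" is a tiny binary PPM image.

```python
import io

def call_mllm(prompt: str) -> str:
    # Stub: a real system would query a multimodal LLM here. For this toy
    # example we return fixed drawing code that marks one cell on a 4x3
    # white "whiteboard".
    return (
        "width, height = 4, 3\n"
        "pixels = [(255, 255, 255)] * (width * height)\n"
        "pixels[5] = (0, 0, 0)  # mark one cell on the whiteboard\n"
    )

def render_ppm(code: str) -> bytes:
    """Execute model-written drawing code and serialize the result as a
    binary PPM image that a vision pipeline could ingest. In a real system
    the exec() call must be sandboxed, since the code is model-generated."""
    ns: dict = {}
    exec(code, ns)
    w, h, pixels = ns["width"], ns["height"], ns["pixels"]
    buf = io.BytesIO()
    buf.write(f"P6 {w} {h} 255\n".encode())
    for r, g, b in pixels:
        buf.write(bytes((r, g, b)))
    return buf.getvalue()

# One iteration of the loop: code out, image in.
drawing_code = call_mllm("Draw the current problem state on a whiteboard.")
image_bytes = render_ppm(drawing_code)
# image_bytes would now be passed back to the MLLM as a vision input.
```

The key design point is that the model never has to describe the picture in text; it communicates with itself through executable drawing code and the resulting image.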
🚨 Interested in synthetic data and LLM reasoning? Our new work studies scaling laws for synthetic data and RL for math reasoning. TLDR: Step-level RL (per-step DPO in fig) on self-generated answers improves sample efficiency of synthetic data by 8x! https://t.co/xyTGdKaxak 1/🧵 https://t.co/K14BS7pTE5
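The "step-level RL (per-step DPO)" idea in this tweet can be sketched as applying the standard DPO preference loss to each reasoning step rather than to whole solutions. The sketch below is an illustrative toy, not the paper's training code: the log-probabilities are made-up placeholders, and `step_level_dpo` simply averages the per-step losses.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss on one preferred/rejected pair:
    -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l)))."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def step_level_dpo(steps, beta=0.1):
    """Average the DPO loss over per-step preference pairs. Each element of
    `steps` is (logp_w, logp_l, ref_logp_w, ref_logp_l) for one reasoning
    step, rather than for a whole solution."""
    return sum(dpo_loss(*s, beta=beta) for s in steps) / len(steps)

# Toy example: two reasoning steps where the policy slightly prefers the
# winning step relative to the reference model, so the loss sits below
# log(2) (the value at zero margin).
steps = [(-1.0, -2.0, -1.5, -1.5), (-0.8, -1.9, -1.2, -1.4)]
loss = step_level_dpo(steps)
```

The contrast with vanilla DPO is only in where the preference pairs come from: here every intermediate step of a self-generated solution yields its own pair, which is the granularity the tweet credits for the 8x sample-efficiency gain.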
📄New preprint with Sam Musker, Alex Duchnowski & Ellie Pavlick @Brown_NLP! We investigate how human subjects and LLMs perform on novel analogical reasoning tasks involving semantic structure-mapping. Our findings shed light on current LLMs' abilities and limitations. 1/
1/n Enhancing Mathematical Reasoning in Large Language Models. Large Language Models (LLMs) face significant challenges when it comes to complex mathematical reasoning, particularly in solving Olympiad-level problems. The paper "Accessing GPT-4 level Mathematical Olympiad… https://t.co/VuHwaolseZ