Microsoft's Orca-Math AI, a specialized small language model, has been developed to solve grade-school math word problems that require multi-step reasoning. The model outperforms much larger models and demonstrates the potential of using feedback to improve language models. Orca-Math is based on Mistral and uses a multi-pass approach with ContextualAI's KTO to achieve superior performance on math word problems.
“Orca-Math: Demonstrating the potential of SLMs with model specialization” Dataset: https://t.co/1NRDMCQjAD
MathScale: Scaling Instruction Tuning for Mathematical Reasoning. Large language models (LLMs) have demonstrated remarkable capabilities in problem-solving. However, their proficiency in solving mathematical problems remains inadequate. https://t.co/lVLWdmZFzL
Microsoft's new Orca-Math model based on Mistral uses multiple passes with @ContextualAI's KTO approach to achieve superior performance on math word problems. https://t.co/Hif3tpznF9
Microsoft's new Orca-Math AI outperforms models 10x larger https://t.co/ow7TqKbVGB
Microsoft’s Orca-Math, a specialized small language model, outperforms much larger models in solving math problems that require multi-step reasoning and shows the potential of using feedback to improve language models. Learn more. https://t.co/lz72MdVQWy https://t.co/Pm5ooTjhLB
Orca-Math, a small language model that Microsoft trained on a relatively small dataset, outperforms several much larger models in solving high-quality mathematical word problems that require multi-step reasoning. Learn more. https://t.co/IqgNsQKb2b https://t.co/74XKvBRrPg
✨ orca-math-word-problems-200k dataset released - A high-quality synthetic dataset of 200K math problems created using a multi-agent setup where agents collaborate to create the data. 📌 Contains ~200K grade-school math word problems. All the answers in this dataset are generated…
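For readers who want to work with the released data, here is a minimal sketch of turning records into instruction-tuning prompt/target pairs. The `question`/`answer` field names, the inline example record, and the prompt wording are assumptions for illustration; the commented-out lines show how the dataset would typically be fetched from the Hugging Face Hub.

```python
# from datasets import load_dataset
# ds = load_dataset("microsoft/orca-math-word-problems-200k", split="train")

# Hypothetical record mirroring the assumed question/answer schema.
example = {
    "question": "A baker made 24 cookies and sold 9. How many are left?",
    "answer": "24 - 9 = 15, so 15 cookies are left.",
}

def to_sft_pair(record: dict) -> dict:
    """Format one dataset record as a prompt/target pair for
    supervised fine-tuning."""
    prompt = (
        "Solve the following word problem step by step.\n\n"
        + record["question"]
    )
    return {"prompt": prompt, "target": record["answer"]}

pair = to_sft_pair(example)
print(pair["prompt"])
print(pair["target"])
```

The same mapping would be applied across the full ~200K records (e.g. with `ds.map(to_sft_pair)`) before tokenization.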