Microsoft Research has released Orca 2, part of an effort to teach advanced reasoning skills to small language models with fewer than 10 billion parameters. Orca 2 aims to match or outperform larger counterparts and has been evaluated on 15 benchmarks spanning roughly 100 tasks. The model weights are publicly available, and the training emphasizes techniques such as step-by-step processing and recall-then-generate methods.
Microsoft Releases Orca 2: Pioneering Advanced Reasoning in Smaller Language Models with Tailored Training Strategies Quick read: https://t.co/qLEktbV2WR Paper: https://t.co/D6cLv2hVyI @MSFTnews #ArtificialIntelligence #DataScience https://t.co/TVRX8ZmNz9
Meet Orca 2 by @MSFTResearch It elevates the reasoning powers of smaller language models. ▪️ Emphasizes techniques like step-by-step processing ▪️ Teaches recall-then-generate methods ▪️ Excels in 15 benchmarks across 100 tasks 🔥 Model weights are publicly available! 1/2 https://t.co/T5cN5GXumH
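The techniques listed above operate at the level of how the model is prompted and trained to reason. As a minimal illustration only, here is a hedged Python sketch of what "recall-then-generate" and "step-by-step" prompting might look like at the prompt-template level; the exact system prompts and strategy selection used by the Orca 2 team are not reproduced here, and these template strings are assumptions for illustration:

```python
# Hedged sketch of two reasoning-prompt styles mentioned in the Orca 2
# announcement. These templates are illustrative assumptions, not the
# actual prompts from the paper.

def recall_then_generate_prompt(question: str) -> str:
    """Two-stage prompt: first recall relevant facts, then answer."""
    return (
        "First, recall the facts and definitions relevant to the question.\n"
        "Then, using only those recalled facts, generate the final answer.\n\n"
        f"Question: {question}\n"
        "Recalled facts:"
    )

def step_by_step_prompt(question: str) -> str:
    """Prompt the model to produce intermediate reasoning steps."""
    return (
        f"Question: {question}\n"
        "Let's solve this step by step, stating each intermediate result:"
    )

if __name__ == "__main__":
    q = "Why might a small model benefit from a tailored reasoning strategy?"
    print(recall_then_generate_prompt(q))
    print(step_by_step_prompt(q))
```

Either template could be sent to a small instruction-tuned model (the released weights are on the Hugging Face Hub) as a user or system message; the point of Orca 2 is that the model is trained to pick and execute such strategies itself rather than relying on the caller to specify them. 2/2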
On Teaching Small Language Models How to Reason: Orca 2 @MSFTResearch https://t.co/RaBYnjogUf
Orca 2: Teaching Small Language Models How to Reason Great research from @MSFTResearch team - @Arindam1408, @AhmedHAwadallah, Andres Codas, Luciano Del Corro, Hamed Khanpour, Shweti Mahajan, and @zzzzgq. Play with @Gradio Demo for Orca-13B on Spaces- https://t.co/ybn3Bsmqky
Microsoft releases Orca 2, a pair of small language models that outperform larger counterparts. #AI #TechAI #LearningAI #GenerativeAI #DeepbrainAI #Microsoft #Orca https://t.co/2KuPKvoTEi
Orca 2: Teaching Small Language Models How to Reason https://t.co/smqdZtNj9X
[LG] Orca 2: Teaching Small Language Models How to Reason A Mitra, L D Corro, S Mahajan, A Codas… [Microsoft Research] (2023) https://t.co/bKfslJMMTH - Orca 2 aims to teach smaller language models (less than ~10B parameters) advanced reasoning skills, allowing them to match or… https://t.co/rvwiI0rQj3 https://t.co/zmN6cqL11g
Orca 2: Teaching Small Language Models How to Reason Mitra et al.: https://t.co/ACy0eWpkhE #ArtificialIntelligence #DeepLearning #MachineLearning https://t.co/nYBAiv56Sq
Orca 2: Teaching Small Language Models How to Reason paper page: https://t.co/WA8Sxgh7YO Orca 1 learns from rich signals, such as explanation traces, allowing it to outperform conventional instruction-tuned models on benchmarks like BigBench Hard and AGIEval. In Orca 2, we… https://t.co/Mwbi97XMhR https://t.co/FXgMbS3rmh