The Open LLM Leaderboard has a new top-performing model in the 7B parameter range: NeuralBeagle14, released by Maxime Labonne and built on open-source collaboration among Teknium, Intel, and Argilla.io. This small open-source model has been recognized for its impressive performance, ranking at the top of every benchmark observed and placing 10th overall on the Open LLM Leaderboard. It is also notably efficient: running as a q4_0 quantization on an M3 Max, it draws about 31W, uses roughly 4.5GB of memory, and generates 64.86 tokens per second. Separately, the internlm2 models recently released by the @intern_lm and @OpenMMLab team have also drawn attention: the 7B version is reported as the best model at or below 13B, the 20B version is on par with Yi-34B, and both base and chat models offer a 200K context length.
Fantastic work @maximelabonne ! It's the best 7b model I tested so far. Flawlessly followed a complex prompt. https://t.co/9Omj3ADKAZ https://t.co/kZgXO8ozhz
There is a new leader in the 7B-parameter LLM model space: NeuralBeagle14! @maximelabonne you are on fire 🔥🔥🔥 Here is a video at 1x speed of the q4_0 version on an M3 Max (40-core GPU): - Speed: 64.86 tokens/s - Power consumption: 31W - Memory: ~4.5GB Impressive capabilities for a 7B model! https://t.co/w7W3rmmN1k
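As a sanity check on the ~4.5GB figure above, here is a rough back-of-the-envelope estimate. It assumes the standard llama.cpp q4_0 layout (32 weights packed into an 18-byte block: 16 bytes of 4-bit quants plus a 2-byte fp16 scale, i.e. 4.5 bits per weight) and a parameter count of roughly 7.24B — the usual size of Mistral-7B derivatives, which is an assumption since the tweet doesn't state it:

```python
def q4_0_size_gb(n_params: float) -> float:
    """Estimate on-disk/in-memory weight size for q4_0 quantization.

    q4_0 stores 32 weights in 18 bytes => 4.5 bits per weight
    (assumed llama.cpp block layout; parameter count is hypothetical).
    """
    bits_per_weight = 18 * 8 / 32  # 4.5 bits
    return n_params * bits_per_weight / 8 / 1e9


def joules_per_token(watts: float, tokens_per_sec: float) -> float:
    """Energy per generated token from average power draw and throughput."""
    return watts / tokens_per_sec


print(f"weights: ~{q4_0_size_gb(7.24e9):.2f} GB")
print(f"energy:  ~{joules_per_token(31, 64.86):.2f} J/token")
```

The ~4.1GB of weights, plus KV cache and runtime overhead, lands close to the ~4.5GB observed in the demo, and 31W at 64.86 tokens/s works out to roughly half a joule per token.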
I was eager to test the new NeuralBeagle14 from @maximelabonne, so I created an @ollama-ready version here. Give it a try; it's fast and powerful! Have fun! https://t.co/LTIcaKzUMy
NeuralBeagle14-7B is the best-performing 7B-parameter model on the Open LLM Leaderboard, and the 10th best-performing model overall. In just 7B parameters! Tiny open-source models continue to defy the experts who say we need big models. https://t.co/Hmyf5PCBIC
Excellent thread on the open-source collaboration among @Teknium1, @intel, and @argilla_io that led to NeuralBeagle14. This model ranks at the top of every benchmark I've seen with just 7B parameters! https://t.co/2jbnV9q4ZA
🔥 Open source, open datasets & open collaboration go a long way. The story behind NeuralBeagle14, a top-performing 7B model released by @maximelabonne https://t.co/XVDHJSMeuX
There is a new top pretrained model on the leaderboard! Congrats to @intern_lm and the @OpenMMLab team on their new internlm2, with the 7B version being the best model under 13B and the 20B on par with Yi-34B! https://t.co/mQIuzmjD4b
Exciting news! @intern_lm 7B/20B models are now live on the @huggingface Open LLM Leaderboard! Highlights: - 200K context length for base/chat models. - 20B model is on par with the performance of Yi-34B. - 7B model is the best in the <= 13B range. https://t.co/AzpQhlOfhy https://t.co/8ZXQnG0GCa