The AI community is celebrating the emergence of NeuralBeagle14, a 7B-parameter model released by @maximelabonne that now tops the 7B class on the Open LLM Leaderboard and ranks 10th overall. Its performance is lauded for speed, efficiency, and power: a video demonstration shows 64.86 tokens/s at 31W power consumption and roughly 4.5GB of memory on an M3 Max with a 40-core GPU. NeuralBeagle14 is competitive with Gemini Pro and significantly outperforms GPT-3.5 on EQ-Bench. Elsewhere on the leaderboards, Mixtral 8x7b is noted as the best open-source model in the Chatbot Arena, not far behind GPT-4, and internlm2 has become the top pretrained model, with its 7B version the best model under 13B and its 20B version on par with the larger Yi-34B. The success of NeuralBeagle14 is attributed to open collaboration among @Teknium1, @intel, and @argilla_io. Moreover, the model's 4-bit version is efficient enough to run on an M2 with just 8GB of memory, showcasing its potential for local use on smaller machines. These achievements exemplify the power of open-source collaboration in the AI field.
NeuralBeagle14 in 4-bit runs pretty fast on an M2 with just 8GB. Pretty cool that you can have a GPT-3.5 caliber model with only 7B params running on such a small machine. 4-bit and FP16 models are in the 🤗 MLX community: https://t.co/dUgErUXnM3 Realtime: https://t.co/8qbYUpbzRn https://t.co/wAiOypsKnM
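The memory claims above hold up to back-of-envelope arithmetic. A minimal sketch, assuming weight storage dominates (ignoring KV cache, activations, and runtime overhead, which the tweets don't break down), of why a 4-bit 7B model fits on an 8GB machine while FP16 would not:

```python
# Back-of-envelope memory estimate for a 7B-parameter model.
# Assumption (not from the tweets): weights dominate memory; KV cache,
# activations, and runtime overhead are ignored.

PARAMS = 7e9  # 7 billion parameters


def weight_memory_gb(params: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return params * bits_per_param / 8 / 1e9


fp16_gb = weight_memory_gb(PARAMS, 16)   # ~14 GB: too big for an 8GB M2
q4_gb = weight_memory_gb(PARAMS, 4.5)    # q4_0 stores ~4.5 bits/param
                                         # (4-bit weights plus block scales)

print(f"FP16: {fp16_gb:.1f} GB, 4-bit: {q4_gb:.1f} GB")
```

The ~3.9 GB weight estimate is consistent with the ~4.5GB total memory reported in the M3 Max demo once cache and runtime overhead are added.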
Is 2024 the year an open-source model dethrones GPT-4? Today, Mixtral 8x7b is the best open-source LLM out there. It's the best open-source model on the Chatbot Arena Leaderboard. No other open-source model is even close. And Mixtral is not far behind GPT-4! Here is something to… https://t.co/DydMV4N7na
Fantastic work @maximelabonne! It's the best 7b model I've tested so far. It flawlessly followed a complex prompt. https://t.co/9Omj3ADKAZ https://t.co/kZgXO8ozhz
NeuralBeagle14 is competitive with Gemini Pro and significantly better than GPT-3.5 on the EQ-Bench. @erhartford's Dolphin-2.2-70b is also very close to Mistral Medium! EQ-Bench: https://t.co/F3sCzUoKhe Paper: https://t.co/vs5xoTmN2F https://t.co/GuiywkNBNt
There is a new leader in the 7b-parameter LLM space: NeuralBeagle14! @maximelabonne you are on fire 🔥🔥🔥 Here's a video at 1x speed of the q4_0 version on an M3 Max 40GPU: - Speed: 64.86 tokens/s - Power consumption: 31W - Memory: ~4.5GB Impressive capabilities for a 7b model! https://t.co/w7W3rmmN1k
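The numbers in that demo also imply a tidy efficiency figure. A quick sketch, assuming the reported 31W and 64.86 tokens/s are sustained averages (the video shows a snapshot, not a long-run benchmark):

```python
# Energy per generated token from the reported M3 Max figures.
# Assumption: 31W draw and 64.86 tokens/s are sustained averages.

power_w = 31.0        # reported power consumption
tokens_per_s = 64.86  # reported generation speed

joules_per_token = power_w / tokens_per_s
print(f"{joules_per_token:.2f} J/token")  # roughly half a joule per token
```

That sub-joule-per-token figure is part of why local inference on laptop-class hardware is getting so much attention.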
I was eager to test the new NeuralBeagle14 from @maximelabonne, so I created an @ollama-ready version here. Give it a try; it's fast and powerful! Have fun! https://t.co/LTIcaKzUMy
NeuralBeagle14-7B is the best-performing 7B-parameter model on the Open LLM Leaderboard. It also ranks as the 10th best-performing model overall. In just 7B parameters! Tiny open-source models continue to defy the experts who insist we need big models. https://t.co/Hmyf5PCBIC
Excellent thread on the open-source collaboration among @Teknium1, @intel, and @argilla_io that led to NeuralBeagle14. This model ranks at the top of every benchmark I've seen with just 7B parameters! https://t.co/2jbnV9q4ZA
🔥 Open source, open datasets & open collaboration go a long way. The story behind NeuralBeagle14, a top-performing 7B model released by @maximelabonne https://t.co/XVDHJSMeuX
There is a new top pretrained model on the leaderboard! Congrats to the @intern_lm and @OpenMMLab team for their new internlm2, with the 7b version being the best model under 13b and the 20b on par with Yi-34B! https://t.co/mQIuzmjD4b