Alibaba-Qwen has released a new model, Qwen1.5-32B, with impressive multilingual capabilities, competitive performance, and a mid-size parameter count. The model outperforms Mixtral on the Open LLM Leaderboard and posts strong metrics across various benchmarks.
We've been using @ollama to run open-source LLMs locally. It's great that it now supports embedding models. The blog post contains a small RAG example: https://t.co/Q5PI28qNlP https://t.co/52oIO7HL0P
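A minimal sketch of that embeddings-plus-RAG flow, assuming the `ollama` Python package and locally pulled `nomic-embed-text` and `mistral` models (both tags are illustrative stand-ins, not necessarily what the linked post uses):

```python
import ollama

docs = [
    "Qwen1.5-32B is a 32B-parameter dense LLM with a 32k context window.",
    "Stable LM 2 12B was trained on seven European languages.",
]

def embed(text: str) -> list[float]:
    # One embedding call per text; batching is left out for brevity.
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

doc_vectors = [embed(d) for d in docs]

question = "How long is Qwen1.5-32B's context?"
q_vec = embed(question)
# Retrieve the single most similar document and stuff it into the prompt.
best_doc = max(zip(docs, doc_vectors), key=lambda dv: cosine(q_vec, dv[1]))[0]
answer = ollama.generate(
    model="mistral",
    prompt=f"Using this context: {best_doc}\n\nAnswer this question: {question}",
)
print(answer["response"])
```

A real app would batch the embedding calls and use a proper vector store, but the retrieval step is just this cosine-similarity lookup.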
Cool new feature on the Open LLM Leaderboard by @ailozovskaya! Search by license to find the most open open-source models for your use case 🔍 https://t.co/Os2cLHWTpR
There's now a collection of open Google models on the Open LLM Leaderboard! You'll notably find the brand new RecurrentGemma models there 🤗 https://t.co/phlmpbjGJ9 https://t.co/gseCkdUPnT
🔥 Stability AI just released Stable LM 2 12B 🔥 Powerful 12-billion-parameter language models trained on multilingual data in Spanish, English, German, Italian, French, Portuguese, and Dutch, featuring a base and an instruction-tuned model. https://t.co/mdoQo53LOB
Stable LM 2 - 12B ⚡ > Multilingual - English, Spanish, German, Italian, French, Portuguese, and Dutch. > Comparable performance to Mixtral. > Open access. > Base and instruction-tuned models released. > Instruction-tuned versions can be used for tool use and function calling.…
Stable LM 2 12B is a pair of powerful 12-billion-parameter language models trained on multilingual data in English, Spanish, German, Italian, French, Portuguese, and Dutch, featuring a base and an instruction-tuned model. You can now try the model here: https://t.co/6QGjTkjgOc…
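On the tool-use point above: a hedged sketch of the usual prompt-and-parse loop with 🤗 Transformers, assuming the `stabilityai/stablelm-2-12b-chat` repo id and enough GPU memory. The JSON tool schema here is illustrative, not Stability's documented function-calling format, and older transformers releases may also need `trust_remote_code=True`:

```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-2-12b-chat"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Describe one callable tool in the system prompt and ask for JSON back.
messages = [
    {"role": "system", "content": (
        'You can call one tool: get_weather(city: str). '
        'Reply ONLY with JSON like {"tool": "get_weather", "city": "..."}.'
    )},
    {"role": "user", "content": "What's the weather in Lisbon?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
reply = tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
call = json.loads(reply)  # raises if the model strays from pure JSON
print(call)               # e.g. {"tool": "get_weather", "city": "Lisbon"}
```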
🌟Introducing 𝐒𝐚𝐢𝐥𝐨𝐫: a new family of 𝐋𝐋𝐌𝐬 ranging from 0.5B to 7B params! 🌍Continually pre-trained from Qwen1.5 models, it demonstrates exceptional performance on South-East Asian languages. 🚀Takes us one step closer to multilingual LLMs that serve the needs of a region & beyond! https://t.co/ED9shtmzKY
📚 Are you keeping track of the top LLMs in the open source space? Here’s a list of 12 LLMs, diving into their architectural designs, benchmark scores, licensing details, training datasets (where available), and interesting characteristics. ⬇️⬇️⬇️ #generativeai…
Alibaba-Qwen Releases Qwen1.5-32B: A New Multilingual Dense LLM with a 32k Context, Outperforming Mixtral on the Open LLM Leaderboard #DL #AI #ML #DeepLearning #ArtificialIntelligence #MachineLearning #ComputerVision https://t.co/GcGoA6TdEQ
Alibaba-Qwen Releases Qwen1.5-32B: A New Multilingual Dense LLM with a 32k Context, Outperforming Mixtral on the Open LLM Leaderboard Quick read: https://t.co/GUSQPacRoW Demo: https://t.co/aH8eS8N4hh @Alibaba_Qwen #ArtificialIntelligence #LLMs
Qwen1.5-32B is here! - Very strong metrics across code, reasoning, and math benchmarks - Mid-size (32 billion params, easier to run on-device) - Decent multilingual capabilities Website: https://t.co/jrb1N2753k Demo: https://t.co/Mn3RyxZ9aU https://t.co/MAh1bwK6LW
Today, we release a new model of the Qwen1.5 series: Qwen1.5-32B and Qwen1.5-32B-Chat! Blog: https://t.co/HG9xXU3Bn1 HF: https://t.co/oE1DBcrRNq , search repos with “Qwen1.5-32B” in model names. GitHub: https://t.co/5vKV1KFwfy For a long time, our users have been requesting us… https://t.co/EtpmtB36rT
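Trying the chat model follows the standard 🤗 Transformers flow; a sketch assuming the `Qwen/Qwen1.5-32B-Chat` repo id, a GPU with enough memory, and a transformers version whose text-generation pipeline accepts chat-formatted messages:

```python
from transformers import pipeline

# "auto" dtype + device_map spread the 32B weights across available GPUs.
pipe = pipeline(
    "text-generation",
    model="Qwen/Qwen1.5-32B-Chat",
    torch_dtype="auto",
    device_map="auto",
)

# The multilingual claim is easy to poke at: ask in one language,
# request the answer in another.
messages = [{
    "role": "user",
    "content": "Résume l'architecture Transformer en une phrase, en allemand.",
}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])
```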
New open LLM from @Alibaba_Qwen! Qwen1.5-32B is a new multilingual dense LLM with a 32k context, outperforming Mixtral on the Open LLM Leaderboard! 🌍🚀 TL;DR 🧮 32B params with 32k context size 💬 Chat model used DPO for preference training 📜 Custom license, commercially usable… https://t.co/8FkH021SPz
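On the DPO point: preference training of this kind is what TRL's `DPOTrainer` packages up. A rough sketch, assuming a recent TRL (keyword names have shifted across versions) and using a small Qwen1.5 checkpoint plus the `trl-lib/ultrafeedback_binarized` preference dataset as stand-ins, not Qwen's actual recipe or data:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Small stand-in checkpoint; a real run would start from an SFT'd model.
model_id = "Qwen/Qwen1.5-0.5B"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Prompt/chosen/rejected preference pairs.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="qwen-dpo", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(
    model=model,          # reference model is cloned internally when omitted
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer TRL versions call this processing_class
)
trainer.train()
```

The `beta` knob is DPO's trade-off between following the preference data and staying close to the reference model.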
🏠 Welcome to the Qwen1.5 family, the new dense model member, Qwen1.5-32B! This model shows performance comparable to the 72B model, especially impressive in language understanding, multilingual support, coding, and mathematical abilities. But beyond that,… https://t.co/O4gcL1WeDM
Open-source LLMs are cool. Made this little proof of concept with @ollama, @LangChainAI, and @Gradio https://t.co/fWfhTIxC93
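The repo itself isn't shown, but the shape of such a PoC is small; a guess at it, assuming LangChain's community `ChatOllama` wrapper, Gradio 4's tuple-style chat history, and a placeholder `mistral` model tag:

```python
import gradio as gr
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import AIMessage, HumanMessage

# Any model already pulled into the local ollama server will do.
llm = ChatOllama(model="mistral")

def respond(message, history):
    # Rebuild the LangChain message list from Gradio's (user, bot) pairs.
    msgs = []
    for user, bot in history:
        msgs += [HumanMessage(content=user), AIMessage(content=bot)]
    msgs.append(HumanMessage(content=message))
    return llm.invoke(msgs).content

gr.ChatInterface(respond).launch()
```

Everything stays local: Gradio serves the UI, LangChain mediates, and ollama hosts the model.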