Qwen1.5, an open-source family of language models, has been released. The lineup spans six sizes — 0.5B, 1.8B, 4B, 7B, 14B, and 72B parameters — each available in base and chat variants, with multilingual support and a 32K context length. The models are compatible with MLX, making them easy to install and run on devices such as laptops; the 0.5B model in particular is fast and uses very little RAM. Alibaba reports that the largest model, Qwen1.5-72B-Chat, surpasses Claude-2.1 and GPT-3.5-Turbo-0613 on both MT-Bench and AlpacaEval v2 benchmarks. In total, 30 new models with strong benchmark results were released, and all parameter sizes are already available on the Together playground.
New Qwen dropped, folks! Including a tiny .5B param model 👏 All quantized and ready to go as well https://t.co/Dl6SmGO4fg
Many thanks to @awnihannun, who helps me a lot with MLX! It is amazing to see Qwen1.5 running on Mac! https://t.co/iuzeMfIkHp
Alibaba releases Qwen 1.5 demo: https://t.co/goMcWMsIzT The largest open-source model, Qwen1.5-72B-Chat, exhibits superior performance, surpassing Claude-2.1 and GPT-3.5-Turbo-0613 on both MT-Bench and AlpacaEval v2 https://t.co/50dNuUpEBx
Qwen 1.5 models already available on https://t.co/qbIpFxCLHL playground! (all param sizes) Quick work by @togethercompute as always https://t.co/zWdSUbHVxq
Qwen1.5 is out, and already works with MLX! pip install -U mlx-lm Models from 0.5B to 72B, all super high quality. 0.5B runs fast with MLX on my laptop, high quality, hardly any RAM: https://t.co/9OTIUoVRha https://t.co/3EHmeqffcN
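The mlx-lm workflow mentioned in the tweet above can be sketched roughly as follows. This is a minimal sketch, not an official recipe: the Hugging Face repo name `Qwen/Qwen1.5-0.5B-Chat` is an assumption based on the naming used in the announcement, and mlx-lm runs on Apple Silicon Macs.

```shell
# Install or upgrade the mlx-lm package (Apple Silicon only)
pip install -U mlx-lm

# Generate text with the 0.5B chat model.
# The model repo name is assumed from Qwen's naming convention and may differ.
python -m mlx_lm.generate --model Qwen/Qwen1.5-0.5B-Chat --prompt "Hello!"
```

The 0.5B model is the one highlighted in the tweet as fast and low-RAM on a laptop; the same command works for the larger sizes by swapping the model name.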
Massive release: Qwen 1.5 is out! - Models from 0.5B to 72B - Chat models released - Very strong metrics (best base model, strong chat one!) - Supports long contexts 30 new models are out! https://t.co/OfqulpLPaZ Enjoy 🚀
Introducing Qwen 1.5! 🔥 > 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B. > Beats GPT-3.5 and Mistral-Medium. > Multilingual support for both base and chat models. > Supports 32K context length. > Base + chat model checkpoints released. > Runs natively with… https://t.co/T7vHssUeKC
🎉 Happy to announce the release of Qwen1.5! This time, we directly open-source new models in 6 sizes — 0.5B, 1.8B, 4B, 7B, 14B, and 72B (including base, chat, AWQ, GPTQ, GGUF)! From small to huge! Blog: https://t.co/tsj64ZXuJ7 GitHub: https://t.co/yjcdGalMCD HF:… https://t.co/Cvs1NWAzbV
👋 Qwen's latest open-source work, Qwen1.5, says hello to the world!! 👉🏻 More sizes: six sizes for your different needs. 0.5B, 1.8B, 4B, 7B, 14B and 72B, including Base and Chat. 👉🏻 Better alignment: despite still trailing behind GPT-4-Turbo, the largest open-source… https://t.co/u82vpRYDBm