NousResearch has released Nous-Hermes 2, a new flagship LLM trained with RLHF that beats Mixtral Instruct in the bulk of popular benchmarks. The model comes in SFT-only and SFT+DPO variants and is available on the Together API, OpenRouter, and HuggingChat. The 4-bit quantized DPO model is also available in the 🤗 MLX Community. The open-source AI community is excited about this development.
Fun demo: Chat in your browser with @NousResearch 4-bit Mixtral running in MLX. UI powered by @streamlit Code: https://t.co/qBonrHjHoT thanks to @codaz! https://t.co/KYhBCrgJ83
An mlx version of Hermes Mixtral has appeared! 🧑💻 https://t.co/0WIJPm0it3
4-bit quantized DPO Nous-Hermes-2 Mixtral 8x7B already uploaded to the 🤗 MLX Community! https://t.co/vnqSA3XR8H Only two steps to try it out: https://t.co/ijPdvdujfd https://t.co/RX8SWY2gSl
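The two linked steps aren't reproduced here, but for pre-quantized MLX Community models the usual mlx-lm pattern looks roughly like this. A minimal sketch: the repo id below is an assumption (check the MLX Community page for the exact name), and it requires an Apple Silicon Mac.

```shell
# Step 1: install mlx-lm (Apple Silicon only)
pip install -U mlx-lm

# Step 2: generate straight from the Hugging Face repo;
# the model id is an assumption -- verify it on the MLX Community page
python -m mlx_lm.generate \
  --model mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit \
  --prompt "Hello, Hermes!"
```

The CLI downloads the quantized weights on first run, so the second command can take a while before tokens start streaming.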
Nous-Hermes 2 on Mixtral is now available to chat with on @huggingface's HuggingChat! Try it now here: https://t.co/pmdcNBXWdc Thank you HuggingFace for putting Hermes 2 on your platform! https://t.co/t5dBIyAAsG https://t.co/XJv9E7bikC
Nous Hermes 2 Mixtral is now live on OpenRouter: 2M tokens served in just over an hour! Try it for free here: https://t.co/KiWsQiCM5j https://t.co/tZP6t6a9l5 https://t.co/88VXRuGoxw
Nous-Hermes 2 Mixtral 8x7B: ollama run nous-hermes2-mixtral Thank you @NousResearch, @Teknium1 and team for building the model. 👇 https://t.co/z9pC2LZEWZ
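For anyone following along with the Ollama route above, the full flow is just a pull plus a run. A minimal sketch, assuming Ollama is installed locally and the `nous-hermes2-mixtral` tag from the tweet is current:

```shell
# Fetch the model weights first (large download; Mixtral 8x7B quantized is tens of GB)
ollama pull nous-hermes2-mixtral

# Start an interactive chat session
ollama run nous-hermes2-mixtral

# Or do one-shot prompting straight from the command line
ollama run nous-hermes2-mixtral "Explain SFT vs. DPO in two sentences."
```

`ollama run` will pull automatically if the model isn't present, but doing the pull explicitly makes the download progress visible.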
Really great news for open-source LLMs: "the first model to beat Mixtral Instruct in the bulk of popular benchmarks!" https://t.co/bYY8nDKwI6 https://t.co/RKdwGTY0ab
Nous Hermes 2 beats Mixtral Instruct (MoE) and becomes the best open-source model. Open-source AI continues to make strides on a daily basis. The latest release from the Nous team beats the previous best open-source model! 10-shot MMLU is better, but it would have been for them to… https://t.co/Jngf7rLU0G
Wake up! The most powerful open source LLM just dropped! Nous-Hermes-2 Mixtral 8x7B. Comes either with SFT + DPO or just SFT. You can safely expect it to be the best open chatbot at the moment. Try them: - DPO + SFT: https://t.co/7fnTaW3FsO - SFT: https://t.co/B1m6EnuFSj https://t.co/bmEj6bz8Q0
PSA: mlx-lm now works with Mixtral-8x7B and Phi-2: pip install -U mlx-lm Pre-quantized models are in the 🤗 MLX Community https://t.co/dUgErUXnM3 Here's 4-bit quantized Mixtral running on an M2 Ultra (it would probably run fine on a 64GB machine): https://t.co/xeGaTM8RqL
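Besides the CLI, mlx-lm exposes a small Python API. A minimal sketch of loading a pre-quantized community model and generating, assuming an Apple Silicon Mac with `mlx-lm` installed; the repo id is an assumption, so check the MLX Community page for the current name:

```python
# Requires Apple Silicon and: pip install -U mlx-lm
from mlx_lm import load, generate

# Load pre-quantized 4-bit weights from the Hugging Face MLX Community.
# The repo id below is an assumption; verify it on the MLX Community page.
model, tokenizer = load("mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit")

# Produce a short completion.
response = generate(model, tokenizer, prompt="What is Mixtral 8x7B?", max_tokens=128)
print(response)
```

`load` returns the model and its tokenizer in one call, which keeps quick local experiments to a handful of lines.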
Nous-Hermes 2 on Mixtral 8x7B, the latest flagship LLM from @NousResearch is available on Together API minutes after release! Enjoy! https://t.co/m8qVVEOpAJ
It's finally time! Our Mixtral 8x7B model is up and available now! Nous-Hermes-2 Mixtral 8x7B comes in two variants, an SFT+DPO and SFT-only, so you can try and see which works best for you! It's afaik the first Mixtral-based model to beat @MistralAI's Mixtral Instruct model,… https://t.co/BRjkcNWEkW https://t.co/FSYSNMfyDn
Introducing our new flagship LLM, Nous-Hermes 2 on Mixtral 8x7B. Our first model that was trained with RLHF, and the first model to beat Mixtral Instruct in the bulk of popular benchmarks! We are releasing the SFT only and SFT+DPO model, as well as a qlora adapter for the DPO…