Groq Inc., an emerging player in the AI accelerator market, has demonstrated promising results on a new Large Language Model (LLM) benchmark, surpassing industry averages. The company's LPU Inference Engine has been recognized for its exceptional speed in running LLMs, and Groq has expressed gratitude for the third-party validation of its technology. Notably, Groq's result on the new benchmark for Mistral AI's Mixtral 8x7B was so far ahead of the field that the chart's entire axis had to be extended to accommodate it, indicating a significant leap in inference speed. Groq has also launched its Mixtral 8x7B Instruct API, setting a new LLM throughput record of 430 tokens/s at a competitive price of $0.27 USD per 1M tokens, among the lowest on offer for Mixtral 8x7B. These results underscore Groq's potential as a real contender among AI accelerators.
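To put the quoted figures in concrete terms, here is a minimal sketch of querying Mixtral 8x7B through Groq's Python SDK and estimating cost from the quoted $0.27 per 1M tokens. The model identifier and the per-call token accounting below are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch: querying Mixtral 8x7B on Groq and estimating cost/throughput.
# Assumptions (not from the article): the `groq` Python SDK is installed
# (`pip install groq`), GROQ_API_KEY is set in the environment, and
# "mixtral-8x7b-32768" is the model ID for Mixtral 8x7B Instruct.
import os
import time

from groq import Groq

PRICE_PER_MILLION_TOKENS_USD = 0.27  # quoted Mixtral 8x7B price

client = Groq(api_key=os.environ["GROQ_API_KEY"])

start = time.perf_counter()
response = client.chat.completions.create(
    model="mixtral-8x7b-32768",
    messages=[{"role": "user", "content": "Explain LPUs in one paragraph."}],
)
elapsed = time.perf_counter() - start

completion_tokens = response.usage.completion_tokens
total_tokens = response.usage.total_tokens

# Observed generation throughput and estimated cost for this single call.
print(f"throughput: {completion_tokens / elapsed:.0f} tokens/s")
print(f"cost: ${total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS_USD:.6f}")
```

For scale: at the reported 430 tokens/s, generating a full million tokens would take roughly 1,000,000 / 430 ≈ 2,326 seconds (about 39 minutes) and cost about $0.27 at the quoted rate.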
"Great scott!" no "jiggawatts" needed! https://t.co/cBq71kjDdH > pick model > play in #AI @MixtralAI users, experience the blazing fast low-latency speed running one of the top industry-performing open-source models on @GroqInc and their LPU™ Inference Engine cloud solution. https://t.co/UifG0mnHfo
430 tokens/s on Mixtral 8x7B: @GroqInc sets new LLM throughput record Groq has launched its new Mixtral 8x7B Instruct API, delivering record performance on its custom silicon. Pricing is competitive at $0.27 USD per 1M tokens, amongst the lowest prices on offer for Mixtral 8x7B.… https://t.co/8DwYLofIM8
Groq developed the LPU to accelerate AI. @jsteeman @DutchITLeaders https://t.co/G8GKWRjntV @GroqInc #AI #LLM #HPC #LPU #Inference #ITPT @ITPressTour 53rd Edition in California https://t.co/qyt1R14Q23
Groq wants to accelerate LLMs with its own chip @marchusquinet @datanews_nl https://t.co/BjJOUmvJ26 @GroqInc #AI #LLM #HPC #LPU #Inference #ITPT @ITPressTour 53rd Edition in California https://t.co/D38HuHIvHm
https://t.co/xiC9o1slP9 @GroqInc absolutely crushing it on the new benchmark for @MixtralAI 8x7B. "The entire axis had to be extended, a lot." Look at that #Inference #Speed. #betterongroq #groqspeed
Groq Shows Promising Results in New LLM Benchmark, Surpassing Industry Averages https://t.co/MHhmUBcvV4 @GroqInc #datanami #TCIwire
"It is incredibly rewarding to have a 3rd party validate that the LPU Inference Engine is the fastest option for running LLMs & we are grateful to https://t.co/3Mef3V7SF5 for recognizing Groq as a real contender among AI accelerators.” For more: https://t.co/6BeQb7Oj89