Today at the GTC conference, Nvidia CEO Jensen Huang officially announced the company's new Blackwell GPUs, a significant advancement in AI accelerator technology. The Blackwell series, which includes models such as the air-cooled 700W B100, offers up to 192GB of memory capacity, with potential future expansion to 288GB per GPU. The new generation more than doubles the transistor count of its predecessor, the H100. The B100 is also designed to slide into existing servers that accept the H100, giving users an easier upgrade path. With Blackwell, Nvidia aims to supercharge AI training with what Huang called "a much bigger GPU."
Meet Nvidia's Blackwell GPU, a Chip To Supercharge AI Training https://t.co/e8y5sL3HgU
Nvidia reveals much-awaited Blackwell chip lineup at GTC conference https://t.co/jUz7CTcPmy
"A much bigger GPU" NVIDIA's Jensen announces Blackwell architecture at GTC keynote https://t.co/8eA2aiQaGo
At #GTC Jensen Huang officially announces Blackwell, the new generation of AI accelerators, more than doubling the transistor count over the H100's. https://t.co/8TQ02ncQn9
Prepare yourself for some new @nvidia GPUs being announced today at GTC with the name of Blackwell. TL;DR: - 192GB of memory capacity, with a potential future increase to 288GB per GPU. - B100 air-cooled 700W can even slide into existing servers that accept the H100 - Might be…