Nvidia announces the GB200 Blackwell AI chip, which will be released later this year

Nvidia CEO Jensen Huang delivers the keynote speech at the Nvidia GTC Artificial Intelligence Conference at SAP Center on March 18, 2024 in San Jose, California.

Justin Sullivan | Getty Images

Nvidia on Monday announced a new generation of artificial intelligence chips and software for running AI models. The announcement, made at Nvidia's developer conference in San Jose, comes as the chipmaker seeks to solidify its position as the go-to supplier for artificial intelligence companies.

Nvidia's stock price has quintupled and total sales have more than tripled since OpenAI's ChatGPT kicked off the AI boom in late 2022. Nvidia's high-end server GPUs are essential for training and deploying large AI models, and companies have spent billions of dollars purchasing them.

The new generation of AI GPUs is named Blackwell. The first Blackwell chip, called the GB200, will ship later this year. Nvidia is enticing customers with more powerful chips to spur new orders; companies and software makers are still scrambling to get their hands on the current generation of "Hopper" H100s and similar chips.

“Hopper is fantastic, but we need bigger GPUs,” Nvidia CEO Jensen Huang said Monday at the company’s developer conference in California.

Nvidia shares fell more than 1% in extended trading on Monday.

The company also introduced revenue-generating software called NIM that will make it easier to deploy artificial intelligence, giving customers another reason to stick with Nvidia chips over a growing field of competitors.

Nvidia executives say the company is becoming less of a mercenary chip supplier and more of a platform provider, like Microsoft or Apple, on which other companies can build software.

“Blackwell is not a chip, it is a platform name,” Huang said.

"The sellable commercial product was the GPU, and the software was all to help people use the GPU in different ways," Manuvir Das, Nvidia's enterprise vice president, said in an interview. "Of course, we still do that. But what's really changed is we really have a commercial software business now."

Das said Nvidia's new software will make it easier to run programs on any Nvidia GPU, even older ones that may be better suited to deploying AI than to building it.

“If you’re a developer and you have an interesting model that you want people to adopt, if you put it in NIM, we’ll make sure it runs on all our GPUs, so you reach a lot of people,” Das said.

Meet Blackwell, Hopper’s successor

Nvidia GB200 Grace Blackwell Superchip with two B200 GPUs and one ARM-based central processor.

Every two years, Nvidia updates its GPU architecture, unlocking a giant leap in performance. Many AI models released over the past year have been trained on the company’s Hopper architecture — utilized in chips like the H100 — which was announced in 2022.

Nvidia says Blackwell-based processors like the GB200 offer a huge performance upgrade for AI companies, with 20 petaflops of AI performance versus 4 petaflops for the H100. Nvidia says the additional processing power will enable AI companies to train bigger and more intricate models.

The chip includes what Nvidia calls a "transformer engine" built specifically to run transformer-based AI, one of the core technologies underlying ChatGPT.

The Blackwell GPU is large, combining two separately manufactured dies into one chip. It will also be available as an entire server called the GB200 NVLink 2, combining 72 Blackwell GPUs and other Nvidia parts designed to train AI models.

Nvidia CEO Jensen Huang compares the size of the new “Blackwell” chip to the current “Hopper” H100 chip during the company’s developer conference in San Jose, California.

Nvidia

Major cloud providers will sell access to the GB200 through their cloud services. The GB200 pairs two B200 Blackwell GPUs with one Arm-based Grace CPU. Nvidia said Amazon Web Services would build a server cluster with 20,000 GB200 chips.

Nvidia said the system can deploy a model with 27 trillion parameters. That is much larger than even the biggest models, such as GPT-4, which reportedly has 1.7 trillion parameters. Many AI researchers believe that bigger models with more parameters and data could unlock new capabilities.
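A back-of-the-envelope sketch shows why parameter counts at that scale demand multi-GPU systems. The assumption of one byte per parameter (roughly FP8 weights) is purely illustrative; real deployments vary in precision and overhead.

```python
# Rough estimate of the memory needed just to hold model weights.
# Assumption (illustrative only): 1 byte per parameter, as with FP8 weights.
def weight_memory_tb(params_trillions: float, bytes_per_param: int = 1) -> float:
    """Terabytes required to store the weights alone."""
    return params_trillions * 1e12 * bytes_per_param / 1e12

print(f"{weight_memory_tb(27):.1f} TB")   # 27-trillion-parameter model -> 27.0 TB
print(f"{weight_memory_tb(1.7):.1f} TB")  # GPT-4-scale model -> 1.7 TB
```

Even before accounting for activations and optimizer state, tens of terabytes of weights far exceed any single GPU's memory, which is why Nvidia sells Blackwell as a 72-GPU rack-scale system.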

Nvidia did not provide costs for the new GB200 or the systems in which it is used. Nvidia’s Hopper-based H100 costs between $25,000 and $40,000 per chip, with entire systems costing as much as $200,000, according to analyst estimates.

Nvidia will also sell the B200 GPUs as part of a complete rack-wide system.

Nvidia Inference Microservice

Nvidia also announced it's adding a new product called NIM, which stands for Nvidia Inference Microservice, to its Nvidia enterprise software subscription.

NIM makes it easier to use older Nvidia GPUs for inference, the process of running AI software, and lets companies keep using the hundreds of millions of Nvidia GPUs they already own. Inference requires less computing power than the initial training of a new AI model. NIM is aimed at companies that want to run their own AI models, rather than buying access to AI results as a service from companies like OpenAI.

The strategy is for customers who buy Nvidia-based servers to sign up for Nvidia AI Enterprise, which costs $4,500 per GPU per year to license.

Nvidia will work with AI companies such as Microsoft and Hugging Face to ensure their AI models are tuned to run on all compatible Nvidia chips. Then, using a NIM, developers can efficiently run the model on their own servers or on cloud-based Nvidia servers without a lengthy configuration process.

“In my code where I was calling OpenAI, I will replace one line of code to point to the NIM that I got from Nvidia,” Das said.
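Das's "replace one line of code" claim hinges on NIM exposing an HTTP API compatible with OpenAI's, so only the endpoint URL (and model name) changes. The sketch below illustrates the idea with a request-building helper; the localhost URL and model name are hypothetical, not real endpoints.

```python
import json

# Sketch of an OpenAI-compatible chat request. Because the request shape is
# the same, "repointing" a client to a NIM means changing only the base URL
# (and model name). Both URLs and the model name here are illustrative.
def chat_request(base_url: str, model: str, prompt: str) -> dict:
    return {
        "url": f"{base_url}/chat/completions",
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

hosted = chat_request("https://api.openai.com/v1", "gpt-4", "Hello")
local = chat_request("http://localhost:8000/v1", "example/local-model", "Hello")
# Only the URL and model differ; the payload structure is identical.
```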

Nvidia says the software will also help AI run on GPU-equipped laptops instead of on cloud servers.

