AMD Unveils New MI300X AI Chip To Compete With Nvidia’s Dominance



  • AMD announced on Tuesday that its most advanced GPU for artificial intelligence, the MI300X, will start shipping to select customers later this year.
  • The announcement represents AMD’s strongest challenge yet to Nvidia, which currently dominates the AI chip market.

Lisa Su displays the AMD Instinct MI300 chip during her keynote at CES 2023 at The Venetian Las Vegas, Las Vegas, Nevada, on January 4, 2023.

David Becker | Getty Images

AMD announced on Tuesday that its most advanced GPU for artificial intelligence, the MI300X, will start shipping to select customers later this year.

Analysts said AMD’s announcement would be the biggest challenge to Nvidia, which currently dominates the AI chip market with over 80% share.

GPUs are the chips that companies like OpenAI use to build cutting-edge AI programs like ChatGPT.

If the AI chips, which AMD calls “accelerators,” are accepted by developers and server makers as alternatives to Nvidia’s offerings, they could represent a large untapped market for a chipmaker best known for traditional computer processors.

AMD CEO Lisa Su told investors and analysts in San Francisco on Tuesday that AI is the company’s “biggest and most strategic long-term growth opportunity.”

“Think about the data center AI accelerator [market] growing from around $30 billion this year to more than $150 billion in 2027, a compound annual growth rate of more than 50%,” Su said.
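The figures Su quoted are internally consistent, as a quick back-of-the-envelope check shows (the 2023 baseline, growth rate, and horizon below are simply the values from her remarks, not AMD guidance):

```python
# Sanity check of the market projection quoted in the article:
# ~$30 billion in 2023, compounding at roughly 50% per year through 2027.
start_billion = 30.0
cagr = 0.50
years = 4  # 2023 -> 2027

projected = start_billion * (1 + cagr) ** years
print(f"${projected:.0f}B")  # prints "$152B", consistent with "more than $150 billion"
```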

AMD hasn’t disclosed pricing, but the move could put price pressure on Nvidia GPUs such as the H100, which can cost $30,000 or more. Declining GPU prices could reduce the high cost of delivering generative AI applications.

While AI chips are one of the semiconductor industry’s bright spots, sales of PCs, which have traditionally driven semiconductor processor sales, are sluggish.

Last month, Su said on an earnings call that the MI300X would start sampling this fall and ship in greater volumes next year. She shared details about the chip in a presentation on Tuesday.

“I love this chip,” Su said.

AMD said its new MI300X chip and its CDNA architecture were designed for large language models and other cutting-edge AI models.

“At the heart of it is the GPU, which enables generative AI,” Su said.

The MI300X can use up to 192GB of memory, which means it can fit even larger AI models than other chips can. Nvidia’s rival H100, for example, supports only 120GB of memory.

Large language models for generative AI applications use large amounts of memory because they run a growing number of calculations. AMD demonstrated the MI300X running a 40-billion-parameter model called Falcon. OpenAI’s GPT-3 model, by comparison, has 175 billion parameters.
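The memory numbers above can be put in rough context with a standard rule of thumb: a model’s weights alone need about two bytes per parameter at 16-bit precision. This is an illustrative lower bound, not a statement about AMD’s or OpenAI’s actual deployments, since serving a model also requires memory for activations and other state:

```python
# Rough lower bound on GPU memory needed just to hold model weights,
# assuming 16-bit (2-byte) parameters. Real deployments need more.
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    return num_params * bytes_per_param / 1e9

falcon_40b = weight_memory_gb(40e9)    # 40B-parameter Falcon model
gpt3_175b = weight_memory_gb(175e9)    # 175B-parameter GPT-3 model

print(falcon_40b, gpt3_175b)  # prints "80.0 350.0"
```

By this estimate a 40B-parameter model (~80GB) fits within the MI300X’s 192GB, while a GPT-3-scale model (~350GB) would still have to be split across several GPUs, which is the point Su makes below.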

“Model sizes are getting much larger, and you really need multiple GPUs to run the latest large language models,” Su said, noting that the added memory on AMD’s chip means developers would not need as many GPUs.

AMD also said it will offer an Infinity Architecture system that combines eight MI300X accelerators in one system. Nvidia and Google have developed similar systems that combine eight or more GPUs in a single box for AI applications.

One reason AI developers have historically favored Nvidia chips is CUDA, a well-developed software package that gives access to the chips’ core hardware features.

AMD announced Tuesday that it has its own software for its AI chips called ROCm.

“It’s been a journey, but we’ve made tremendous progress in building a powerful software stack that works with an open ecosystem of models, libraries, frameworks and tools,” said AMD President Victor Peng.


