Qualcomm Inc’s artificial intelligence chip beat Nvidia Corp’s on two of three measures of power efficiency in new test data released Wednesday.
Nvidia dominates the market for training AI models with massive amounts of data. However, once those AI models are trained, they are put to wider use in so-called “inference,” performing tasks such as generating text responses to prompts or determining whether an image contains a cat.
Analysts believe the market for data center inference chips will grow rapidly as companies incorporate AI technology into their products, but firms such as Alphabet Inc’s Google are already looking for ways to keep the added costs down.
One of those major costs is electricity. Qualcomm drew on its history of designing chips for battery-powered devices such as smartphones to create a chip called the Cloud AI 100 that is aimed at saving power.
In test data published Wednesday by MLCommons, an engineering consortium that maintains widely used benchmarks for the AI chip industry, Qualcomm’s AI 100 beat Nvidia’s flagship H100 chip at image classification, measured by how many data center server queries each chip can carry out per watt.
Qualcomm’s chip delivered 227.4 server queries per watt versus Nvidia’s 108.4 queries.
Qualcomm also beat Nvidia at object detection, scoring 3.8 queries per watt to Nvidia’s 2.4. Object detection can be used in applications such as analyzing video from a retail store to see where shoppers go most often.
However, Nvidia took the top spot in both absolute performance and power efficiency in a test of natural language processing, the AI technology most widely used in systems such as chatbots. Nvidia scored 10.8 queries per watt, with Qualcomm second at 8.9 queries per watt.
(Reporting by Stephen Nellis in San Francisco; Editing by Paul Simao)