US chipmaker Qualcomm, US-based machine learning (ML)-focused chip startup SiMa, and Taiwan-based Neuchips all beat market leader Nvidia on several power-efficiency scores for high-performance enterprise chips that run artificial intelligence (AI) and ML workloads. SiMa's chip achieved lower processing latency in edge computing, while Neuchips led in total images processed per watt of power consumed in a neural-network image classification test.
The power-efficiency scores come from engineering consortium MLCommons' latest benchmark figures for the March quarter, released Wednesday. The benchmark simulates the performance of an enterprise chip running a language model similar to the one that powers OpenAI's popular ChatGPT.
Nvidia still ranks highest in outright performance for enterprise graphics processing unit (GPU) chips targeting ML, but Qualcomm's AI Cloud 100 (QAIC100) AI processing chip delivered more efficient object detection on neural-network tasks than the Nvidia H100, handling 3.2 queries per watt of power consumed compared with 2.7 queries per watt on the H100.
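As a back-of-the-envelope check, the two figures reported above work out to roughly a 19% efficiency advantage for Qualcomm's chip on this test. A minimal sketch (both input numbers are the ones cited above; the script itself is only illustrative):

```python
# Power-efficiency figures from the MLCommons results cited in the article.
qaic100_qpw = 3.2  # Qualcomm QAIC100, queries per watt
h100_qpw = 2.7     # Nvidia H100, queries per watt

# Relative advantage of the QAIC100 over the H100 on this metric.
advantage_pct = (qaic100_qpw / h100_qpw - 1) * 100
print(f"QAIC100 efficiency advantage: {advantage_pct:.0f}%")  # prints "QAIC100 efficiency advantage: 19%"
```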
The figures come a day after a team of Google researchers released a paper on Tuesday about the company's custom Tensor Processing Unit (TPU) chips, which power the supercomputers Google uses to train large language models (LLMs). The paper says the TPU v4-based ML training supercomputer is 10 times more powerful than its predecessor and nearly twice as power efficient as Nvidia's A100.
MLCommons used the language model BERT-Large as a reference point to test the latest AI chips from companies around the world. BERT-Large is not an LLM, because it uses far fewer parameters (340 million) than industry-standard LLMs such as OpenAI's Generative Pre-trained Transformer (GPT)-3.5. GPT-3.5 used 175 billion parameters, while its successor GPT-4, released last month, is rumored to have 3 trillion.
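The gap in scale between the benchmark model and production LLMs is striking. A quick sketch using only the parameter counts quoted above (the GPT-4 figure is, as the article notes, a rumor):

```python
# Parameter counts quoted in the article (GPT-4's is rumored, not confirmed).
bert_large = 340e6     # BERT-Large: 340 million parameters
gpt_3_5 = 175e9        # GPT-3.5: 175 billion parameters
gpt_4_rumored = 3e12   # GPT-4: 3 trillion parameters (rumored)

print(f"GPT-3.5 is ~{gpt_3_5 / bert_large:.0f}x the size of BERT-Large")
print(f"Rumored GPT-4 is ~{gpt_4_rumored / gpt_3_5:.0f}x the size of GPT-3.5")
```

In other words, the benchmark model is hundreds of times smaller than GPT-3.5, which helps explain why MLCommons plans to move to LLM-based benchmarks.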
In an interview with EE Times, David Kanter, executive director of MLCommons, said the consortium will start incorporating LLMs when benchmarking AI training processors next quarter, and that the change will be reflected in chip processing scores later this year.
Beating Nvidia on chip efficiency could prove a significant tipping point. As of last December, an enterprise GPU industry report by Jon Peddie Research put Nvidia's market share at 88%. Meanwhile, companies continue to highlight the enormous cost and energy consumption required to train and run LLM-based services such as OpenAI's ChatGPT and Google's Bard, along with hundreds of other tools and services, such as the image generator Midjourney.
It is this problem that many chipmakers are trying to solve. In an interview with EE Times, SiMa chief executive Krishna Rangasayee, whose company's chip beat Nvidia's H100 on edge-processing efficiency, said the company is still manufacturing on a 16nm node, leaving plenty of room to cut power consumption further as it expands chip deployment.
Manufacturing node refers to the size of the transistors used within a semiconductor chip; broadly, the smaller the node, the more power efficient the chip.