Nvidia H100 continues to dominate machine learning benchmarks


Nvidia has long dominated the AI chip market, and not without reason: the tech giant's H100 system is the current market leader, and for now it has no dominant competitors.

MLPerf, one of the most widely used benchmarks for measuring AI chip performance (if not the most precise), has launched a new set of tests covering the fine-tuning of large language models (LLMs) and graph neural networks (GNNs), and in these tests Nvidia's H100 systems are setting records.

Nvidia's 11,616-GPU H100 cluster was the largest system ever tested in the MLPerf benchmarks, achieving top performance in all nine benchmarks and setting records in five of them, as detailed in the report.

Competitors such as Google and Intel also entered the fray with their own AI accelerators, but were outperformed by Nvidia. Google's TPU systems showed significant speedups and Intel's GPUs made impressive progress, yet neither could match the performance of Nvidia's largest system.

Nvidia also confirmed that several software optimizations have improved GPT-3 training time by 27% since the June 2023 benchmark round, including better use of 8-bit floating-point (FP8) arithmetic, more efficient power management for the compute engines, and improved communication between the GPUs.
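To give a sense of what 8-bit floating point trades away, the sketch below simulates the rounding behavior of an FP8 E4M3-style format (4 exponent bits, 3 mantissa bits) in NumPy. It is a simplification for illustration only: it ignores the format's range clamping and subnormals, and is not Nvidia's actual FP8 implementation, which runs in hardware.

```python
import numpy as np

def quantize_e4m3(x):
    # Illustrative FP8 E4M3-style rounding: snap each value to the
    # nearest float with only 3 mantissa bits. Range clamping and
    # subnormals are deliberately ignored in this sketch.
    x = np.asarray(x, dtype=np.float64)
    sign = np.sign(x)
    mag = np.abs(x)
    # Power-of-two exponent of each value (guard against log2(0)).
    exp = np.floor(np.log2(np.where(mag > 0, mag, 1.0)))
    scaled = mag / 2.0 ** exp          # normalized significand in [1, 2)
    mant = np.round(scaled * 8) / 8    # keep 3 fractional mantissa bits
    out = sign * mant * 2.0 ** exp
    return np.where(mag > 0, out, 0.0)
```

With only 3 mantissa bits, the relative rounding error can reach about 6%, which is why FP8 training relies on careful scaling to stay accurate.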

Nvidia also implemented FlashAttention, an algorithm that speeds up Transformer networks by minimizing memory writes, helping to reduce training time by a further 10%.
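The core trick behind FlashAttention is an "online softmax" that processes keys and values in tiles, so the full attention score matrix is never written out to memory. The NumPy sketch below illustrates that idea only; the real algorithm runs as a fused GPU kernel, and the block size and shapes here are arbitrary choices for the example.

```python
import numpy as np

def naive_attention(Q, K, V):
    # Reference implementation: materializes the full (n x n) score matrix.
    S = (Q @ K.T) / np.sqrt(Q.shape[-1])
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    return (P / P.sum(axis=-1, keepdims=True)) @ V

def tiled_attention(Q, K, V, block=4):
    # FlashAttention-style pass: walk over K/V in blocks, maintaining a
    # running row-wise max and sum (online softmax), so only one small
    # score tile exists at a time.
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros_like(Q)
    row_max = np.full(n, -np.inf)
    row_sum = np.zeros(n)
    for start in range(0, K.shape[0], block):
        Kb, Vb = K[start:start + block], V[start:start + block]
        S = (Q @ Kb.T) * scale                     # (n x block) tile only
        new_max = np.maximum(row_max, S.max(axis=-1))
        correction = np.exp(row_max - new_max)     # rescale old partials
        P = np.exp(S - new_max[:, None])
        out = out * correction[:, None] + P @ Vb
        row_sum = row_sum * correction + P.sum(axis=-1)
        row_max = new_max
    return out / row_sum[:, None]

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 16)) for _ in range(3))
```

Both functions compute the same result; the tiled version simply never holds more than one score tile in memory, which is where the speedup on real hardware comes from.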


Rafly is a reporter with years of journalistic experience covering technology, business, society, and culture. He currently reports on Microsoft-related product, technology, and AI news for Windows Report and MSPowerUser. Got a tip? Submit it! [email protected].



