July 7, 2023
news

MLCommons has announced new results from two industry-standard MLPerf™ benchmark suites: Training v3.0, which measures the performance of training machine learning models, and Tiny v1.1, which measures how quickly a trained neural network can process new data on small form factor, low-power devices.
The benchmarks show improvements in training advanced neural networks and in deploying AI models at the edge. The latest MLPerf Training round also demonstrates broad industry participation, with performance improvements of up to 1.54x over the previous round and 33-49x over the first round.
The open-source, peer-reviewed MLPerf Training benchmark suite provides full-system tests that stress machine learning models, software, and hardware across a broad range of applications.
In this round, MLPerf Training added two new benchmarks to the suite. The first is a large language model (LLM) benchmark based on the GPT-3 reference model, reflecting the rapid adoption of generative AI. The second is an updated recommender benchmark that uses the DLRM-DCNv2 reference model and has been modified to be more representative of industry practice. The new tests help advance AI by ensuring that industry-standard benchmarks track the latest adoption trends and can guide customers, vendors, researchers, and others.
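MLPerf Training scores are reported as time-to-train: the wall-clock time a full system needs to train a model to a fixed quality target. The Python sketch below illustrates only that measurement idea; it is not the official harness, and train_one_epoch, evaluate, and the quality target are hypothetical placeholders.

    import time

    TARGET_QUALITY = 0.75  # hypothetical quality target (e.g. validation accuracy)

    def train_one_epoch(model):
        # Placeholder: one pass over the training data.
        model["epochs"] += 1

    def evaluate(model):
        # Placeholder: a quality metric that improves as training progresses.
        return min(0.80, 0.10 * model["epochs"])

    def time_to_train(model):
        # Train until the quality target is reached; report elapsed wall-clock time.
        start = time.perf_counter()
        while evaluate(model) < TARGET_QUALITY:
            train_one_epoch(model)
        return time.perf_counter() - start

    model = {"epochs": 0}
    print(f"time to train: {time_to_train(model):.6f} s")

Scoring by time-to-quality rather than raw throughput rewards end-to-end system efficiency, since faster hardware only helps if the model still converges to the target.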
According to MLCommons, the MLPerf Training v3.0 round includes over 250 performance results, a 62% increase over the previous round, from 16 different submitters: ASUSTek, Azure, Dell, Fujitsu, GIGABYTE, H3C, IEI, Intel & Habana Labs, Krai, Lenovo, NVIDIA, NVIDIA + CoreWeave, Quanta Cloud Technology, Supermicro, and xFusion. MLCommons would especially like to congratulate CoreWeave, IEI, and Quanta Cloud Technology on their first MLPerf Training submissions.
The MLPerf Tiny benchmark suite captures a variety of inference use cases that involve “small” neural networks (typically 100 KB or less) processing sensor data, such as audio and vision, to provide endpoint intelligence for small form factor, low-power devices. MLPerf Tiny tests these capabilities and also offers optional power measurements.
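For a sense of scale, the sketch below builds a small convolutional classifier over spectrogram-like input and estimates its float32 weight footprint. It assumes TensorFlow/Keras, and the architecture is illustrative only, not an official MLPerf Tiny reference model.

    import tensorflow as tf

    # Illustrative tiny audio-style classifier, not an official MLPerf Tiny
    # reference model. Input loosely mimics a 49x10 single-channel spectrogram.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(49, 10, 1)),
        tf.keras.layers.Conv2D(8, (3, 3), strides=2, padding="same", activation="relu"),
        tf.keras.layers.DepthwiseConv2D((3, 3), padding="same", activation="relu"),
        tf.keras.layers.Conv2D(16, (1, 1), activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(12, activation="softmax"),  # e.g. 12 keyword classes
    ])

    params = model.count_params()
    print(f"{params} parameters ~= {params * 4 / 1024:.1f} KB of float32 weights")

Even in float32 this model is only about 2 KB of weights, and int8 quantization, common on microcontrollers, would shrink it roughly 4x, leaving ample room under the 100 KB guideline.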
In this round, MLPerf Tiny v1.1 includes 159 peer-reviewed results from 10 submitters spanning academic institutions, industry organizations, and national laboratories: Bosch, cTuning, fpgaConvNet, Kai Jiang, Krai, Nuvoton, Plumerai, Skymizer, STMicroelectronics, and Syntiant. This round also includes 41 power measurements. MLCommons congratulates Bosch, cTuning, fpgaConvNet, Kai Jiang, Krai, Nuvoton, and Skymizer on their first MLPerf Tiny submissions.
“The adoption of the benchmark suite by so many new organizations has really expanded the range of hardware solutions and innovative software frameworks covered. Submissions range from small devices up to large FPGAs, showing a wide variety of design options,” said Dr. Csaba Kiraly, co-chair of the MLPerf Tiny Working Group. “The combined effect of software and hardware performance improvements over the initial reference benchmark results is as much as 1000x in some areas, which shows the pace at which innovation is happening in this field.”
To view the MLPerf Training v3.0 and MLPerf Tiny v1.1 results and find additional information about the benchmarks, visit the Training v3.0 and Tiny v1.1 pages.
