cTuning foundation

New MLPerf Training and HPC Benchmark Results Showcase 49X Performance Gains in 5 Years

Retrieved on: Wednesday, November 8, 2023

Today, MLCommons® announced new results from two industry-standard MLPerf™ benchmark suites:

Key Points: 
  • The MLPerf Training v3.1 suite, which measures the performance of training machine learning models.
  • The MLPerf HPC (High Performance Computing) v3.0 benchmark suite, which is targeted at supercomputers and measures the performance of training machine learning models for scientific applications and data.
  • The MLPerf Training benchmark suite comprises full system tests that stress machine learning models, software, and hardware for a broad range of applications.
  • To view the results for MLPerf Training v3.1 and MLPerf HPC v3.0 and find additional information about the benchmarks, please visit the Training and HPC benchmark pages.

MLCommons Releases New MLPerf Results that Highlight Growing Importance of Generative AI and Storage

Retrieved on: Monday, September 11, 2023

This publication marks the first ever release of results from the MLPerf Storage benchmark, which measures the performance of storage systems in the context of ML training workloads.

Key Points: 
  • In particular, MLCommons® would like to congratulate first-time MLPerf Inference submitters Connect Tech, Nutanix, Oracle, and TTA.
  • The MLPerf Storage Benchmark Suite is the first open-source AI/ML benchmark suite that measures the performance of storage for ML training workloads.
  • To view the results for MLPerf Inference v3.1 and MLPerf Storage v0.5, and to find additional information about the benchmarks, please visit:

MLPerf Results Show Rapid AI Performance Gains

Retrieved on: Tuesday, June 27, 2023

The MLPerf Training benchmark suite comprises full system tests that stress machine learning models, software, and hardware for a broad range of applications.

Key Points: 
  • The open-source and peer-reviewed benchmark suite provides a level playing field for competition that drives innovation, performance, and energy-efficiency for the entire industry.
  • The first is a large language model (LLM) using the GPT-3 reference model that reflects the rapid adoption of generative AI.
  • “And the combined effect of software and hardware performance improvements are 1000-fold in some areas compared to our initial reference benchmark results, which shows the pace that innovation is happening in the field.”
  • To view the results for MLPerf Training v3.0 and MLPerf Tiny v1.1, and to find additional information about the benchmarks, please visit:

MLPerf Inference Delivers Power Efficiency and Performance Gains

Retrieved on: Wednesday, April 5, 2023

The latest benchmark results illustrate the industry’s emphasis on power efficiency, with 50% more power-efficiency results submitted and performance gains of over 60% in some benchmark tests.

Key Points: 
  • Improving performance and power efficiency will lead the way for deploying more capable AI systems that benefit society.
  • The MLPerf benchmark suites are comprehensive system tests that stress machine learning models, including the underlying software and hardware, and in some cases optionally measure power efficiency.
  • This round featured even greater participation across the community with a record-breaking 25 submitting organizations, over 6,700 performance results, and more than 2,400 performance and power efficiency measurements.