AMD Surpasses 30x25 Goal, Sets Ambitious New 20x Efficiency Target

3BL | Fri, Jun 13 2025 08:00 AM AEST


At a Glance:

  • AMD has exceeded its 30x25 goal, achieving a 38x increase in node-level energy efficiency for AI training and HPC, which equates to a 97% reduction in energy for the same performance compared to systems from just five years ago.
  • AMD has set a new 2030 goal to deliver a 20x increase in rack-scale energy efficiency from a 2024 base year, enabling a typical AI model that today requires more than 275 racks to be trained in under one rack by 2030, using 95% less electricity.
  • Combined with software and algorithmic advances, the new goal could enable up to a 100x improvement in overall energy efficiency.

At AMD, energy efficiency has long been a core design principle aligned with our roadmap and product strategy. For more than a decade, we’ve set public, time-bound goals to dramatically increase the energy efficiency of our products and have consistently met and exceeded those targets. Today, I’m proud to share that we’ve done it again, and we’re setting the next five-year vision for energy-efficient design.

Today at Advancing AI, we announced that AMD has surpassed our 30x25 goal, which we set in 2021 to improve the energy efficiency of AI-training and high-performance computing (HPC) nodes by 30x from 2020 to 2025.1 This was an ambitious goal, and we’re proud to have exceeded it, but we’re not stopping here.

As AI continues to scale, and as we move toward true end-to-end design of full AI systems, it’s more important than ever to continue our leadership in energy-efficient design. That’s why today, we’re also setting our sights on a bold new target: a 20x improvement in rack-scale energy efficiency for AI training and inference by 2030, from a 2024 base year.2

Building on a Decade of Leadership 

This marks the third major milestone in a multi-decade effort to advance efficiency across our computing platforms. In 2020, we exceeded our 25x20 goal by improving the energy efficiency of AMD mobile processors 25-fold in just six years.3 The 30x25 goal built on that momentum, targeting AI and HPC workloads in accelerated nodes. And now, the 20x-by-2030 rack-scale goal reflects the next frontier: not just chips, but smarter, more efficient systems, spanning silicon to full-rack integration, to address data center-level power requirements.

Surpassing 30x25

Our 30x25 goal was rooted in a clear benchmark: to improve the energy efficiency of our accelerated compute nodes by 30x compared to a 2020 base year. This goal represented more than a 2.5x acceleration over industry trends from the previous five years (2015-2020). As of mid-2025, we’ve gone beyond that, achieving a 38x gain over the base system using a current configuration of four AMD Instinct™ MI355X GPUs and one 5th Gen AMD EPYC™ CPU.4 That equates to a 97% reduction in energy for the same performance compared to systems from just five years ago.
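
For readers who want to see how a 38x efficiency gain translates into the roughly 97% energy reduction quoted above, the short Python sketch below works through the arithmetic. It assumes only that energy for equal work scales as the inverse of the efficiency multiplier; it is a back-of-the-envelope illustration, not AMD’s measurement methodology.

    # Back-of-the-envelope arithmetic only: relates an energy-efficiency multiplier
    # to the percentage reduction in energy needed for the same work. This is not
    # AMD's measurement methodology, just the simple relationship behind the figures.

    def energy_reduction_pct(efficiency_multiplier: float) -> float:
        # Energy for equal work falls to 1/multiplier of the baseline.
        return (1.0 - 1.0 / efficiency_multiplier) * 100.0

    print(f"30x goal:     {energy_reduction_pct(30):.1f}% less energy")  # ~96.7%
    print(f"38x achieved: {energy_reduction_pct(38):.1f}% less energy")  # ~97.4%, reported as ~97%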

We achieved this through deep architectural innovations, aggressive optimization of performance-per-watt, and relentless engineering across our CPU and GPU product lines.

A New Goal for the AI Era

As workloads scale and demand continues to rise, node-level efficiency gains won't keep pace. The most significant efficiency impact can be realized at the system level, where our 2030 goal is focused.

We believe we can achieve a 20x increase in rack-scale energy efficiency for AI training and inference by 2030, from a 2024 base year, which AMD estimates exceeds the industry improvement trend from 2018 to 2025 by almost 3x. This reflects performance-per-watt improvements across the entire rack, including CPUs, GPUs, memory, networking, storage, and hardware-software co-design, based on our latest designs and roadmap projections. This shift from node to rack is made possible by our rapidly evolving end-to-end AI strategy and is key to scaling data center AI in a more sustainable way.

What This Means in Practice

A 20x rack-scale efficiency improvement at nearly 3x the prior industry rate has major implications. Using training for a typical AI model in 2025 as a benchmark, the gains could enable the following (a rough arithmetic sketch follows the list):5

  • Rack consolidation from more than 275 racks to <1 fully utilized rack
  • More than a 95% reduction in operational electricity use
  • A reduction in carbon emissions from approximately 3,000 to approximately 100 metric tons of CO2 (tCO2) for model training
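
To make the electricity figure concrete, the short Python sketch below applies the same inverse-multiplier arithmetic to the 20x rack-scale goal and also derives the average yearly efficiency gain that goal would imply over 2024-2030. The annualized rate is an illustrative derivation only, not a figure AMD has published.

    # Illustrative arithmetic for the 2030 rack-scale goal. The inverse-multiplier
    # relationship and the annualized rate below are derivations for illustration,
    # not figures from AMD's published methodology.
    goal_multiplier = 20.0   # targeted rack-scale efficiency gain, 2024 base year
    years = 2030 - 2024      # six-year goal period

    electricity_reduction = 1.0 - 1.0 / goal_multiplier           # 0.95 -> roughly the "more than 95% less" figure
    implied_annual_gain = goal_multiplier ** (1.0 / years) - 1.0   # ~0.65, i.e. ~1.65x per year on average

    print(f"Electricity reduction at 20x: {electricity_reduction:.0%}")
    print(f"Implied average yearly efficiency gain: {implied_annual_gain:.0%}")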

These projections are based on AMD’s silicon and system design roadmap and a measurement methodology validated by energy-efficiency expert Dr. Jonathan Koomey.

“By grounding the 2030 target in system-level metrics and transparent methodology, AMD is raising the bar for the industry,” Dr. Koomey said. “The target gains in rack-scale efficiency will enable others across the ecosystem, from model developers to cloud providers, to scale AI compute more sustainably and cost-effectively.”

Looking Beyond Hardware

Our 20x goal reflects what we control directly: hardware and system-level design. But we know that even greater gains in delivered AI model efficiency are possible, up to an additional 5x over the goal period, as software developers discover smarter algorithms and continue innovating with lower-precision approaches at current rates. When those factors are included, overall energy efficiency for training a typical AI model could improve by as much as 100x by 2030.6
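
As a quick illustration of how these figures combine, the brief sketch below multiplies the 20x hardware goal by the estimated up-to-5x software and algorithmic gains to reach the up-to-100x overall figure. Treating the two as independent multipliers is a simplifying assumption, not AMD’s stated methodology.

    # Minimal sketch of how hardware and software gains compose multiplicatively.
    # Treating the two as independent multipliers is a simplifying assumption.
    hardware_gain = 20.0   # 2030 rack-scale hardware/system goal
    software_gain = 5.0    # estimated additional gains from smarter algorithms and lower precision

    combined = hardware_gain * software_gain   # up to ~100x overall
    remaining_energy = 1.0 / combined          # ~1% of the 2024 energy for the same training run
    print(f"Combined gain: {combined:.0f}x ({1.0 - remaining_energy:.0%} less energy)")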

While AMD is not claiming that full multiplier in our own goal, we’re proud to provide the hardware foundation that enables it — and to support the open ecosystem and developer community working to unlock those gains. Whether through open standards, our open software approach with AMD ROCm™, or our close collaboration with our partners, AMD remains committed to helping innovators everywhere scale AI more efficiently.

What Comes Next

As we close one chapter with 30x25 and open the next with this new rack-scale goal, we remain committed to transparency, accountability, and measurable progress. This approach sets AMD apart, and it is necessary as we advance how the industry approaches efficiency while demand for and deployment of AI continue to expand.

We’re excited to keep pushing the limits, not just of performance, but of what’s possible when efficiency leads the way. As work toward the goal advances, we will continue to share updates on our progress and on the effects these gains are enabling across the ecosystem.

Footnotes
