AMD EPYC Processors and AMD Instinct MI100 Accelerator Launched

PCQ Bureau

AMD has launched the new AMD Instinct MI100 accelerator with ROCm 4.0 open ecosystem support, showcased a growing list of AMD EPYC CPU and AMD Instinct accelerator-based deployments, and highlighted its collaboration with Microsoft Azure for HPC in the cloud. AMD also remains on track to begin volume shipments of 3rd Gen EPYC processors with the “Zen 3” core to select HPC and cloud customers this quarter, ahead of the expected public launch in Q1 2021, aligned with OEM availability.

The new AMD Instinct MI100 accelerator is the world’s fastest HPC GPU accelerator for scientific workloads and the first to surpass the 10 teraflops (FP64) performance barrier. Built on the new AMD CDNA architecture, the AMD Instinct MI100 GPU enables a new class of accelerated systems for HPC and AI when paired with 2nd Gen AMD EPYC processors. Supported by new accelerated compute platforms from Dell, HPE, Gigabyte, and Supermicro, the MI100, combined with AMD EPYC CPUs and ROCm 4.0 software, is designed to propel new discoveries ahead of the exascale era.
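On the software side, ROCm exposes the MI100 through the HIP programming model, which mirrors CUDA-style kernels. The snippet below is a minimal, hypothetical sketch of a double-precision (FP64) kernel of the kind ROCm compiles for CDNA GPUs with hipcc; the kernel, problem size, and names are illustrative assumptions, not details from AMD's announcement.

```cpp
// Illustrative HIP sketch: an FP64 vector update (y = a*x + y), built with
// hipcc against a ROCm install; sizes and launch geometry are assumptions.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void daxpy(double a, const double* x, double* y, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];          // one FP64 multiply-add per element
}

int main() {
    const size_t n = 1 << 20;
    std::vector<double> hx(n, 1.0), hy(n, 2.0);

    double *dx = nullptr, *dy = nullptr;
    hipMalloc(&dx, n * sizeof(double));
    hipMalloc(&dy, n * sizeof(double));
    hipMemcpy(dx, hx.data(), n * sizeof(double), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(double), hipMemcpyHostToDevice);

    const int block = 256;
    const int grid  = static_cast<int>((n + block - 1) / block);
    hipLaunchKernelGGL(daxpy, dim3(grid), dim3(block), 0, 0, 3.0, dx, dy, n);
    hipDeviceSynchronize();

    hipMemcpy(hy.data(), dy, n * sizeof(double), hipMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);               // expect 5.0
    hipFree(dx);
    hipFree(dy);
    return 0;
}
```

Because HIP mirrors the CUDA API, the same source can also be built for other GPU targets, which is part of the open-ecosystem pitch behind ROCm.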

“No two customers are the same in HPC, and AMD is providing a path to today’s most advanced technologies and capabilities that are critical to support their HPC work, from small clusters on premise, to virtual machines in the cloud, all the way to exascale supercomputers,” said Forrest Norrod, senior vice president and general manager, Data Center and Embedded Solutions Business Group, AMD. “Combining AMD EPYC processors and Instinct accelerators with critical application software and development tools enables AMD to deliver leadership performance for HPC workloads.”

AMD and Microsoft Azure Power HPC In the Cloud

Azure is using 2nd Gen AMD EPYC processors to power its HBv2 virtual machines (VMs) for HPC workloads. These VMs offer up to 2x the performance of first-generation HB-series virtual machines, can support up to 80,000 cores for MPI jobs, and benefit from the 2nd Gen AMD EPYC processors’ up to 45 percent higher memory bandwidth compared with comparable x86 alternatives.
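Those MPI jobs are ordinary message-passing programs distributed across many HBv2 VMs. Below is a minimal, hypothetical sketch of such a program; it only reports where each rank runs, and it assumes a standard MPI installation on the VMs, which is an assumption for illustration rather than a detail from Azure's announcement.

```cpp
// Minimal MPI sketch: each rank reports its index and the host VM it runs on.
// Launched with mpirun/mpiexec across the nodes; the rank count is set at launch.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     // this process's index in the job
    MPI_Comm_size(MPI_COMM_WORLD, &size);     // total ranks across all VMs

    char host[MPI_MAX_PROCESSOR_NAME];
    int len = 0;
    MPI_Get_processor_name(host, &len);       // which VM this rank landed on

    printf("rank %d of %d on %s\n", rank, size, host);
    MPI_Finalize();
    return 0;
}
```

Scaling such a job to tens of thousands of cores is then a matter of how many VMs the scheduler hands to mpirun, not a change to the program itself.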

HBv2 VMs are used by numerous customers, including the University of Illinois at Urbana-Champaign’s Beckman Institute for Advanced Science & Technology, which used 86,400 cores to model a plant virus that previously required a leadership-class supercomputer, and the U.S. Navy, which rapidly deploys and scales enhanced weather and ocean pattern predictions on demand. HBv2 VMs powered by 2nd Gen AMD EPYC processors also provide the bulk of the CPU compute power for the OpenAI environment Microsoft announced earlier this year.

AMD EPYC processors have also helped HBv2 reach new cloud HPC milestones, such as a new record for cloud MPI scaling with NAMD, Top 20 results on the Graph500, and the first 1 terabyte-per-second cloud HPC parallel filesystem. Across these and other application benchmarks, HBv2 delivers 12x higher scaling than is available elsewhere on the public cloud.

Building on its existing HBv2 HPC virtual machines powered by 2nd Gen AMD EPYC processors, Azure announced it will use next-generation AMD EPYC processors, codenamed “Milan”, in future HB-series VM products for HPC.
