Intel has used its 2021 International Supercomputing Conference (ISC) appearance to showcase its high-performance computing (HPC) portfolio, which has received a few tweaks.
“The Holy Grail of HPC is to have a balanced system where you don’t run into roadblocks,” Trish Damkroger, vice president and general manager of high-performance computing at Intel, told media.
“That’s why at Intel we’ve really looked at building a wide portfolio of HPC ingredients so you can pick and choose and have more of a balanced system approach.”
The first announcement is that the next generation of Intel Xeon Scalable processors, code-named Sapphire Rapids, will offer integrated high bandwidth memory (HBM), providing what Damkroger said was a dramatic boost in memory bandwidth and a significant performance improvement for HPC applications with memory bandwidth-sensitive workloads.
Users can power through workloads using just high bandwidth memory or in combination with DDR5, she said.
The Sapphire Rapids-based platform is touted as accelerating HPC through increased I/O bandwidth with PCI Express 5.0, up from PCI Express 4.0, and Compute Express Link (CXL) 1.1 support, which Intel said enables advanced use cases across compute, networking, and storage. Sapphire Rapids is optimised for HPC and AI workloads, with a new built-in AI acceleration engine called Intel Advanced Matrix Extensions (AMX).
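AMX is built around two-dimensional tile registers and tile matrix-multiply instructions, for example multiplying int8 tiles and accumulating the products into int32. As a rough illustration only (the hardware operates on tile registers, and its int8 instruction sums groups of four products per element), here is a minimal NumPy sketch of the multiply-accumulate pattern those instructions speed up:

```python
import numpy as np

def tile_matmul_acc(c_tile, a_tile, b_tile):
    """Multiply two int8 tiles and accumulate into an int32 tile.

    Illustrative NumPy stand-in for the AMX-style pattern:
    low-precision inputs, wider accumulator.
    """
    return c_tile + a_tile.astype(np.int32) @ b_tile.astype(np.int32)

a = np.array([[1, 2], [3, 4]], dtype=np.int8)
b = np.array([[5, 6], [7, 8]], dtype=np.int8)
c = np.zeros((2, 2), dtype=np.int32)
print(tile_matmul_acc(c, a, b))
```

Accumulating into a wider type is the key idea: it lets AI inference run on compact int8 data without overflowing during the dot products.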
“We’ll be coming out with more specific details for Sapphire Rapids,” she said.
At ISC 2021, Intel also announced its new high-performance networking with Ethernet solution, which extends Ethernet technology capabilities to smaller clusters in the HPC segment by using standard Intel Ethernet 800 series network adaptors and controllers, switches based on Intel Tofino P4-programmable Ethernet switch ASICs, and the Intel Ethernet Fabric Suite software.
Intel is also introducing commercial support for Distributed Asynchronous Object Storage (DAOS), an open-source software-defined object store built to optimise data exchange across Intel HPC architectures.
“To maximise HPC performance we must leverage all the compute resources and technology advancements available to us,” Damkroger said. “Intel is the driving force behind the industry’s move toward exascale computing, and the advancements we’re delivering with our CPUs, XPUs, oneAPI Toolkits, exascale-class DAOS storage, and high-speed networking are pushing us closer toward that realisation.”
Dell Technologies on Monday also announced new solutions to help customers better manage the convergence of HPC, AI, and data analytics through Omnia, along with expanded accelerator support for Dell EMC PowerEdge servers.
Omnia was developed at the Dell Technologies HPC & AI Innovation Lab, in collaboration with Intel and with support from the HPC community, it said.
The open-source software is designed to automate the provisioning and management of HPC, AI, and data analytics workloads to create a “single pool of flexible resources to meet growing and diverse demands”.
“The Omnia software stack is an open source set of Ansible playbooks that speed the deployment of converged workloads with Kubernetes and Slurm, along with library frameworks, services, and applications,” the company explained.
“Omnia automatically imprints a software solution onto each server based on the use case — for example, HPC simulations, neural networks for AI, or in-memory graph processing for data analytics — to reduce deployment time from weeks to minutes.”
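Omnia's actual playbooks are not quoted here, but an Ansible playbook for this kind of use-case imprinting would follow a familiar shape: one play per server group, applying roles that install and configure the workload stack. The group, role, and variable names below are hypothetical, not Omnia's real code:

```yaml
# Hypothetical sketch only -- not Omnia's actual playbook.
# One play imprints a software stack onto servers grouped
# by use case (here, Slurm compute nodes for HPC jobs).
- name: Provision HPC compute nodes
  hosts: slurm_compute        # inventory group name is assumed
  become: true
  vars:
    slurm_version: "20.11"    # illustrative value
  roles:
    - common                  # base OS packages, NTP, users
    - slurm_node              # hypothetical role installing slurmd
```

Because each role is idempotent, re-running the playbook converges a server to the desired state rather than repeating installs, which is what makes minutes-scale redeployment plausible.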
Dell Technologies now offers Nvidia A30 and A10 Tensor Core GPUs as options for Dell EMC PowerEdge R750, R750xa, and R7525 servers.