HPE adds Blackwell, Rubin systems to Nvidia-backed sovereign AI push

HPE has expanded its Nvidia-based AI portfolio with new systems built on Blackwell and upcoming Rubin GPUs, alongside updates to its Alletra Storage MP X10000, which it claims is the first object storage platform to achieve Nvidia-Certified Storage validation.

The company is also announcing new Nvidia-powered AI Factory and Supercomputing ranges, which include AI grids and enable so-called sovereign AI in Europe and the US.

HPE president and CEO Antonio Neri said: "The AI race is fundamentally about speed, scale, and trust. Our industry leadership across cloud, networking, and AI enables organizations to operationalize AI securely, efficiently, and at an unprecedented scale. Together with Nvidia, HPE delivers turnkey AI factories and networks that transform AI ambitions into real enterprise value."

HPE noted that it is building and installing the supercomputer for the European Union AI Factory, HammerHAI (Hybrid and Advanced Machine learning platform for Manufacturing, Engineering, and Research). A consortium of leading academic HPC centers in Germany will lead this effort. It said its work would allow orgs to "scale AI initiatives" while "adhering to regional data sovereignty and compliance requirements."

Dr Bastian Koller, managing director of the High-Performance Computing Center Stuttgart (HLRS) at the University of Stuttgart and lead coordinator of HammerHAI, said of the partnership: "HammerHAI will offer a highly performant AI platform, alongside services like AI skills training, as an alternative for future users that have historically relied on commercial cloud AI services in which data sovereignty was difficult to ensure."

Dr Koller added: "This integrated approach will help researchers, startups, and enterprises access AI resources while operating in alignment with European Union data security requirements."

HPE says it's the first vendor to achieve Nvidia-Certified Storage validation for object-based systems at the Foundation level, with the Alletra Storage MP X10000. Nvidia has validated and benchmarked the array's performance for workloads of up to 128 GPUs, conducted functional tests for enterprise-grade availability and reliability, and confirmed that the storage layer efficiently feeds data to accelerated computing resources, delivering faster model training, lower-latency inference, and better overall utilization.

An HPE blog discusses how the Alletra Storage MP X10000 and Nvidia RDMA for S3 accelerate AI pipelines, RAG, and real-time inference with low-latency, high-throughput, GPU-direct data paths. It contains this diagram illustrating the X10000's support for AI pipeline stages:

[Diagram: HPE AI pipeline showing the Alletra Storage MP X10000's role in each pipeline stage]
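Nvidia's RDMA for S3 does not yet have a widely documented Python surface, but the bottleneck it targets is easy to show. The sketch below, with hypothetical bucket and object names, is the conventional host-mediated path: object bytes arrive in host RAM over the network and are then copied to the GPU over PCIe. A GPU-direct data path of the kind HPE describes is designed to eliminate that intermediate hop.

```python
# Baseline S3-to-GPU data path (no GPU-direct acceleration): bytes land
# in host memory first, then take a second hop over PCIe to the GPU.
# Bucket and key names are hypothetical.
import boto3
import cupy as cp
import numpy as np

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="training-shards", Key="shard-0001.bin")
host_bytes = obj["Body"].read()              # network -> host RAM

# Interpret the shard as packed float32 embeddings (layout assumed here)
embeddings = np.frombuffer(host_bytes, dtype=np.float32).reshape(-1, 96)
device_embeddings = cp.asarray(embeddings)   # host RAM -> GPU over PCIe
```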

The X10000 is central to HPE's storage activities in AI. Using the array together with Nvidia cuVS CAGRA GPU-accelerated vector indexing and cuObject for accelerated storage I/O, HPE has indexed terabyte-scale vector data in under an hour. Another blog explains how it did this, showing a 17x improvement in index build time and an 8x improvement in total end-to-end pipeline transport using a single Nvidia H100 and accelerated remote direct memory access (RDMA).
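For context, cuVS exposes CAGRA through a small Python API. The following is a minimal sketch of a GPU-resident index build and batched search using the open source cuVS library rather than anything HPE-specific; the dataset shape and tuning parameters are illustrative.

```python
# Minimal cuVS CAGRA sketch: build a graph index and run batched
# approximate nearest-neighbor search entirely on the GPU.
# Sizes and parameters are illustrative, not HPE's benchmark setup.
import cupy as cp
from cuvs.neighbors import cagra

# One million 96-dimensional float32 vectors, already GPU-resident
dataset = cp.random.random((1_000_000, 96), dtype=cp.float32)

# Build the CAGRA graph index on the GPU
index = cagra.build(cagra.IndexParams(graph_degree=64), dataset)

# Batched search: top 10 neighbors for 10,000 query vectors
queries = cp.random.random((10_000, 96), dtype=cp.float32)
distances, neighbors = cagra.search(
    cagra.SearchParams(itopk_size=128), index, queries, 10
)
```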

HPE says it's evolving the X10000 to centralize intelligent data handling and optimize how AI workloads ingest, process, and deliver data. The company will support the new Nvidia STX rack-scale reference architecture to develop new AI storage offerings powered by Vera Rubin accelerators, BlueField-4 DPUs, Spectrum-X networking, ConnectX NICs, and Nvidia's AI software.

At the supercomputing event, HPE also announced a range of additional enterprise and edge-focused products.

These include updated AI factory offerings, built with Nvidia, to deliver greater performance, scalability, and flexibility for enterprise inference.

The multi-workload offerings combine ProLiant Compute servers with Nvidia accelerated computing, Spectrum-X Ethernet networking, BlueField DPUs, and ConnectX NICs. They also incorporate Nvidia's software, CUDA-X libraries, blueprints, confidential computing, Multi-Instance GPU (MIG), and virtual GPU (vGPU) technologies, alongside HPE chip-to-cloud security and AI-driven automation through HPE Compute Ops Management.
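MIG partitioning, mentioned above, can be inspected programmatically through NVML. Here is a small sketch, assuming the pynvml bindings and an NVML-capable driver, that reports each GPU's MIG mode:

```python
# Report MIG mode per GPU via NVML (pynvml bindings assumed installed).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        try:
            current, _pending = pynvml.nvmlDeviceGetMigMode(handle)
            mig = "enabled" if current == pynvml.NVML_DEVICE_MIG_ENABLE else "disabled"
        except pynvml.NVMLError_NotSupported:
            mig = "not supported"   # older GPUs lack MIG entirely
        print(f"GPU {i} ({name}): MIG {mig}")
finally:
    pynvml.nvmlShutdown()
```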

HPE is also introducing an AI Grid, an end-to-end offering built on an Nvidia reference architecture to connect AI factories and distributed inference clusters across regional and far‑edge sites. It enables service providers to deploy and operate thousands of distributed inference sites, turning AI installations into a single intelligent system.

HPE says the AI Grid lets service providers convert existing sites that already have power and connectivity into RAN-ready AI grids, enabling distributed inference and new services at scale.
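Neither HPE nor Nvidia has published an API for the AI Grid, so the following is only a conceptual sketch of the failover-routing idea behind treating many inference sites as one logical system: a client tries distributed sites in order, skipping unhealthy ones. The site URLs and the /healthz and /v1/infer endpoints are hypothetical.

```python
# Conceptual sketch of failover routing across distributed inference
# sites. All URLs and endpoints are hypothetical, not an HPE/Nvidia API.
import json
import urllib.request

SITES = [
    "http://edge-site-a.example:8000",
    "http://edge-site-b.example:8000",
    "http://region-core.example:8000",
]

def healthy(site: str) -> bool:
    """Probe a site's (hypothetical) health endpoint."""
    try:
        with urllib.request.urlopen(f"{site}/healthz", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def route(prompt: str) -> dict:
    """Send the request to the first healthy site, failing over in order."""
    for site in SITES:
        if not healthy(site):
            continue
        req = urllib.request.Request(
            f"{site}/v1/infer",
            data=json.dumps({"prompt": prompt}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            return json.load(resp)
    raise RuntimeError("no healthy inference site available")
```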

HPE support for RTX PRO 4500 Blackwell Server Edition GPUs across the ProLiant Compute server portfolio will roll out in Q1 and Q2 2026.

HPE Private Cloud AI with air-gapped deployment, support for RTX PRO 6000 Blackwell Server Edition GPUs across each configuration, and AI-Q and Omniverse blueprints is available now. 

The new network expansion racks for HPE Private Cloud AI for scaling up to 128 GPUs will be available in July. 

The HPE and Protopia secure blueprint for trustworthy AI factories is planned for Q2 2026.

Fortanix support with ProLiant DL380a Gen12 systems is planned for Q3 2026. ®

Bootnote

Nvidia's cuVS is a library for vector search and clustering on the GPU. CAGRA is a graph-based nearest-neighbor algorithm built from the ground up for GPU acceleration, and it demonstrates state-of-the-art index build and query performance for both small- and large-batch searches.

Source: The Register
