To accelerate AI adoption across industries, HPE and NVIDIA today launched new AI factory offerings at HPE Discover in Las Vegas.
The new lineup spans everything from modular AI factory infrastructure and HPE's AI-ready RTX PRO Servers (HPE ProLiant Compute DL380a Gen12) to the next generation of HPE's turnkey AI platform, HPE Private Cloud AI. The goal: give enterprises a framework to build and scale generative, agentic and industrial AI.
The NVIDIA AI Computing by HPE portfolio is now among the broadest available.
The portfolio combines NVIDIA Blackwell accelerated computing, NVIDIA Spectrum-X Ethernet and NVIDIA BlueField-3 networking technologies, NVIDIA AI Enterprise software, and HPE's full portfolio of servers, storage, services and software. It now includes HPE OpsRamp Software, a validated observability solution for the NVIDIA Enterprise AI Factory, and HPE Morpheus Enterprise Software for orchestration. The result is a pre-integrated, modular infrastructure stack that helps teams get AI into production faster.
This includes the next-generation HPE Private Cloud AI, co-engineered with NVIDIA and validated as part of the NVIDIA Enterprise AI Factory framework. This full-stack, turnkey AI factory solution will offer HPE ProLiant Compute DL380a Gen12 servers with the new NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.
These new NVIDIA RTX PRO Servers from HPE provide a universal data center platform for a wide range of enterprise AI and industrial AI use cases, and are now available to order from HPE. HPE Private Cloud AI includes the latest NVIDIA AI Blueprints, including the NVIDIA AI-Q Blueprint for AI agent creation and workflows.
HPE also announced a new NVIDIA HGX B300 system, the HPE Compute XD690, built with NVIDIA Blackwell Ultra GPUs. It is the latest entry in the NVIDIA AI Computing by HPE lineup and is expected to ship in October.
In Japan, KDDI is working with HPE to build NVIDIA AI infrastructure to accelerate global adoption.
The HPE-built KDDI system will be based on the NVIDIA GB200 NVL72 platform, built on the NVIDIA Grace Blackwell architecture, at the KDDI Osaka Sakai Data Center.
To accelerate AI for financial services, HPE will co-test agentic AI workflows built on Accenture's AI Refinery with NVIDIA, running on HPE Private Cloud AI. Initial use cases include sourcing, procurement and risk assessment.
HPE said it is adding 26 new partners to its "Unleash AI" ecosystem to support more NVIDIA AI use cases. The company now offers more than 70 packaged AI workloads, from fraud detection and video analytics to sovereign AI and cybersecurity.
Security and governance were a focus, too. HPE Private Cloud AI supports air-gapped management, multi-tenancy and post-quantum cryptography. HPE's try-before-you-buy program lets customers test the system in Equinix data centers before purchase. HPE also introduced new programs, including AI Acceleration Workshops with NVIDIA, to help scale AI deployments.
- Watch the keynote: HPE CEO Antonio Neri announced the news from the Las Vegas Sphere on Tuesday at 9 a.m. PT. Register for the livestream and replay.
- Explore more: Learn how NVIDIA and HPE build AI factories for every industry. Visit the partner page.