
Hewlett Packard Enterprise and NVIDIA Announce ‘NVIDIA AI Computing by HPE’ to Accelerate Generative AI Industrial Revolution



New Portfolio Features First-of-Its-Kind Turnkey, Private-Cloud AI Solution Including Sustainable Accelerated Computing with Full Lifecycle Services to Streamline Time to Value with AI

HPE Discover 2024—Hewlett Packard Enterprise (NYSE: HPE) and NVIDIA today announced NVIDIA AI Computing by HPE, a portfolio of co-developed AI solutions and joint go-to-market integrations that enable enterprises to accelerate adoption of generative AI.

Among the portfolio’s key offerings is HPE Private Cloud AI, a first-of-its-kind solution that provides the deepest integration to date of NVIDIA AI computing, networking and software with HPE’s AI storage, compute and the HPE GreenLake cloud. The offering gives enterprises of every size an energy-efficient, fast and flexible path for sustainably developing and deploying generative AI applications. Powered by the new OpsRamp AI copilot that helps IT operations improve workload and IT efficiency, HPE Private Cloud AI includes a self-service cloud experience with full lifecycle management and is available in four right-sized configurations to support a broad range of AI workloads and use cases.

All NVIDIA AI Computing by HPE offerings and services will be available through a joint go-to-market strategy that spans sales teams and channel partners, training and a global network of system integrators — including Deloitte, HCLTech, Infosys, TCS and Wipro — that can help enterprises across a variety of industries run complex AI workloads.

Announced during the HPE Discover keynote by HPE President and CEO Antonio Neri, who was joined by NVIDIA founder and CEO Jensen Huang, NVIDIA AI Computing by HPE marks the expansion of a decades-long partnership and reflects the substantial commitment of time and resources from each company.

“Generative AI holds immense potential for enterprise transformation, but the complexities of fragmented AI technology contain too many risks and barriers that hamper large-scale enterprise adoption and can jeopardize a company’s most valuable asset — its proprietary data,” said Neri. “To unleash the immense potential of generative AI in the enterprise, HPE and NVIDIA co-developed a turnkey private cloud for AI that will enable enterprises to focus their resources on developing new AI use cases that can improve productivity and unlock new revenue streams.”

“Generative AI and accelerated computing are fueling a fundamental transformation as every industry races to join the industrial revolution,” said Huang. “Never before have NVIDIA and HPE integrated our technologies so deeply — combining the entire NVIDIA AI computing stack along with HPE’s private cloud technology — to equip enterprise clients and AI professionals with the most advanced computing infrastructure and services to expand the frontier of AI.”

HPE and NVIDIA co-developed Private Cloud AI portfolio

HPE Private Cloud AI delivers a unique, cloud-based experience to accelerate innovation and return on investment while managing enterprise risk from AI. The solution offers:

  • Support for inference, fine-tuning and RAG AI workloads that utilize proprietary data.
  • Enterprise control for data privacy, security, transparency and governance requirements.
  • Cloud experience with ITOps and AIOps capabilities to increase productivity.
  • Fast path to consume flexibly to meet future AI opportunities and growth.

Curated AI and data software stack in HPE Private Cloud AI

The foundation of the AI and data software stack starts with the NVIDIA AI Enterprise software platform, which includes NVIDIA NIM™ inference microservices.

NVIDIA AI Enterprise accelerates data science pipelines and streamlines development and deployment of production-grade copilots and other GenAI applications. Included with NVIDIA AI Enterprise, NVIDIA NIM delivers easy-to-use microservices for optimized AI model inferencing, offering a smooth transition from prototype to secure deployment of AI models across a variety of use cases.
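For illustration, below is a minimal sketch of how an application might call a deployed NIM microservice through its OpenAI-compatible inference API. The endpoint URL and model name are placeholders; actual values depend on which models and endpoints are provisioned in a given private cloud deployment.

    # Minimal sketch (Python): querying a deployed NVIDIA NIM inference microservice
    # through its OpenAI-compatible API. The base_url and model name are placeholders;
    # substitute the values for the models provisioned in your environment.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://nim-host:8000/v1",  # hypothetical NIM endpoint
        api_key="not-needed-locally",        # local deployments may not require a real key
    )

    response = client.chat.completions.create(
        model="meta/llama3-8b-instruct",     # example model; use the NIM model you deployed
        messages=[{"role": "user", "content": "Summarize the key points of this support ticket."}],
        max_tokens=200,
    )
    print(response.choices[0].message.content)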

Complementing NVIDIA AI Enterprise and NVIDIA NIM, HPE AI Essentials software delivers a ready-to-run set of curated AI and data foundation tools with a unified control plane that provides adaptable solutions, ongoing enterprise support and trusted AI services, such as data and model compliance, along with extensible features that keep AI pipelines compliant, explainable and reproducible throughout the AI lifecycle.

To deliver optimal performance for the AI and data software stack, HPE Private Cloud AI provides a fully integrated AI infrastructure stack that includes NVIDIA Spectrum-X™ Ethernet networking, HPE GreenLake for File Storage, and HPE ProLiant servers with support for NVIDIA L40S, NVIDIA H100 NVL Tensor Core GPUs and the NVIDIA GH200 NVL2 platform.

Cloud experience enabled by HPE GreenLake cloud

HPE Private Cloud AI offers a self-service cloud experience enabled by HPE GreenLake cloud. Through a single, platform-based control plane, HPE GreenLake cloud services provide manageability and observability to automate, orchestrate and manage endpoints, workloads and data across hybrid environments. This includes sustainability metrics for workloads and endpoints.

HPE GreenLake cloud and OpsRamp AI infrastructure observability and copilot assistant

OpsRamp’s IT operations are integrated with HPE GreenLake cloud to deliver observability and AIOps to all HPE products and services. OpsRamp now provides observability for the end-to-end NVIDIA accelerated computing stack, including NVIDIA NIM and AI software, NVIDIA Tensor Core GPUs and AI clusters, as well as NVIDIA Quantum InfiniBand and NVIDIA Spectrum Ethernet switches. IT administrators can gain insights to identify anomalies and monitor their AI infrastructure and workloads across hybrid, multi-cloud environments.

The new OpsRamp operations copilot uses NVIDIA’s accelerated computing platform to analyze large datasets for insights with a conversational assistant, boosting productivity for operations management. OpsRamp will also integrate with CrowdStrike APIs so customers can see a unified service map view of endpoint security across their entire infrastructure and applications.

Accelerate time to value with AI — expanded collaboration with global system integrators

To advance the time to value for enterprises developing industry-focused AI solutions and use cases with clear business benefits, Deloitte, HCLTech, Infosys, TCS and Wipro announced their support of the NVIDIA AI Computing by HPE portfolio and HPE Private Cloud AI as part of their strategic AI solutions and services.

HPE adds support for NVIDIA’s latest GPUs, CPUs and Superchips

  • HPE Cray XD670 supports eight NVIDIA H200 NVL Tensor Core GPUs and is ideal for LLM builders.
  • HPE ProLiant DL384 Gen12 server with NVIDIA GH200 NVL2 is ideal for LLM consumers using larger models or RAG.
  • HPE ProLiant DL380a Gen12 server supports up to eight NVIDIA H200 NVL Tensor Core GPUs and is ideal for LLM users looking for flexibility to scale their GenAI workloads.
  • HPE will also be time-to-market to support the NVIDIA GB200 NVL72 / NVL2, as well as the new NVIDIA Blackwell, NVIDIA Rubin and NVIDIA Vera architectures.

High-density file storage certified for NVIDIA DGX BasePOD and NVIDIA OVX systems

HPE GreenLake for File Storage has achieved NVIDIA DGX BasePOD certification and NVIDIA OVX™ storage validation, providing customers with a proven enterprise file storage solution for accelerating AI, GenAI and GPU-intensive workloads at scale. HPE will also be a time-to-market partner on upcoming NVIDIA reference architecture storage certification programs.

Availability

  • HPE Private Cloud AI is expected to be generally available in the fall.
  • HPE ProLiant DL380a Gen12 server with NVIDIA H200 NVL Tensor Core GPUs is expected to be generally available in the fall.
  • HPE ProLiant DL384 Gen12 server with dual NVIDIA GH200 NVL2 is expected to be generally available in the fall.
  • HPE Cray XD670 server with NVIDIA H200 NVL is expected to be generally available in the summer.
