Hewlett Packard Enterprise has always taken an infrastructure-centric approach to delivering flexible, consumption-based compute, storage and networking to enterprise IT. HPE offers that flexibility through its popular HPE GreenLake product.
HPE shook things up at its annual HPE Discover event in Las Vegas last week, when the company not only extended the definition of GreenLake but also brought a supercomputer to the party. It is the first of what HPE says will be many domain-specific AI applications offered as a service. The move puts HPE directly ahead of OEM competitors that offer only traditional infrastructure solutions, and places it squarely in the digital transformation path of enterprises.
Generative AI in the Enterprise
It remains to be seen where the flurry of activity around generative AI will ultimately lead. Yet the technology is already changing how companies across industries think about customer engagement, decision-making, marketing, workflow automation, and many other tasks. Innovation around large language models (LLMs), such as the ones behind ChatGPT, is still in its infancy, and the list of use cases grows daily.
IT organizations adopting LLMs face two challenges. First, the IT infrastructure required to support LLM training is complex and expensive. Second, existing LLMs are trained on a broad, general-purpose corpus of data, which can make it difficult to customize the technology for specific applications.
Solving the second problem, customizing the model with an organization's own data, often reintroduces the first: the challenge of building and managing a complex AI infrastructure. Both challenges must be addressed, because an LLM must be trained against organization-specific data to be fully relevant to that organization.
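Parameter-efficient fine-tuning techniques exist precisely to blunt that training cost. As a back-of-envelope illustration (LoRA is one widely used approach; neither HPE nor Aleph Alpha has said which methods they use, and the layer dimensions below are hypothetical), a low-rank adapter trains only a tiny fraction of a layer's weights:

```python
# Sketch: LoRA-style adapters freeze the pretrained d x k weight matrix
# and train only two small factors, B (d x r) and A (r x k), with r << d, k.

def adapter_param_counts(d: int, k: int, r: int) -> tuple[int, int]:
    """Return (full, adapter) trainable-parameter counts for a d x k layer
    adapted with rank-r low-rank factors."""
    full = d * k          # parameters touched by a full fine-tune
    adapter = d * r + r * k  # parameters in the B and A factors only
    return full, adapter

full, adapter = adapter_param_counts(4096, 4096, 8)
print(f"full fine-tune: {full:,} params; rank-8 adapter: {adapter:,} "
      f"({100 * adapter / full:.2f}% of the layer)")
```

With these illustrative numbers, the adapter trains well under one percent of the layer's parameters, which is why customization is far cheaper than pre-training even though it still benefits from serious hardware.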
A growing number of cloud services offer ChatGPT-as-a-service, including Microsoft's Azure OpenAI Service and offerings from ChatGPT's creator, OpenAI. Many cloud providers are also happy to rent out the infrastructure needed to train and operate language models at scale; just pick your favorite CSP. What none of these offerings include is an enterprise-class, consumption-based service for LLM training and operations. That changed last week, when HPE announced HPE GreenLake for Large Language Models.
HPE GreenLake for Large Language Models
HPE GreenLake for LLMs gives users direct access to a pre-configured LLM stack running on multi-tenant Cray XD supercomputers. This lets enterprises privately train, tune, and deploy AI at scale without worrying about the underlying platform. The infrastructure can scale to thousands of CPUs and GPUs to meet customer needs. It is, in effect, supercomputing as a service.
[Image: HPE GreenLake for LLMs. Credit: Hewlett Packard Enterprise]
Aleph Alpha Luminous Model
HPE partnered with German AI company Aleph Alpha to provide users with pre-trained LLMs. Aleph Alpha's Luminous models let organizations harness their own data, training and fine-tuning customized models so the LLM can leverage corporate information. Luminous models are available in multiple languages, including English, French, German, Italian, and Spanish.
Based on the Cray XD supercomputer
For as long as most IT practitioners can remember, the world's highest-performing computers have been made by Cray, which became part of HPE in 2019. The latest generation of Cray XD series supercomputers not only offers an unprecedented amount of raw computing power, it is also designed with scalability in mind. The Cray XD I/O subsystem keeps data flowing between nodes and storage fast enough that the GPUs running LLM training never stall. Delivering this level of performance in a traditional enterprise data center is nearly impossible.
HPE GreenLake for LLMs is based on unspecified Cray XD models with a pair of latest-generation AMD EPYC processors. The solution also includes eight NVIDIA H100 Tensor Core GPUs and uses Cray's high-performance, low-latency node-to-node interconnect.
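That interconnect matters because large models no longer fit in a single GPU's memory. A rough, illustrative calculation (the 70-billion-parameter model size is a hypothetical example, not a detail of HPE's or Aleph Alpha's configurations) shows why training must be sharded across GPUs and nodes:

```python
H100_MEMORY_GIB = 80  # HBM capacity of an 80 GB-class NVIDIA H100

def fp16_weight_gib(n_params: float) -> float:
    """GiB needed just to hold model weights at 2 bytes per parameter (fp16)."""
    return n_params * 2 / 2**30

weights = fp16_weight_gib(70e9)  # hypothetical 70B-parameter model
print(f"fp16 weights alone: {weights:.0f} GiB vs {H100_MEMORY_GIB} GiB per H100")
# Training needs several times more memory again for gradients and optimizer
# state, so the model must be sharded across many GPUs and nodes, which is
# exactly where a low-latency interconnect earns its keep.
```

Even before counting gradients and optimizer state, the weights of such a model exceed a single H100's memory, so every training step moves data between GPUs at interconnect speed.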
The NVIDIA H100 is currently the best solution for LLM training. In the MLPerf benchmark results released yesterday, the NVIDIA H100 set a per-accelerator performance record on the new MLPerf v3.0 large language model benchmark. You don't need to know what the underlying hardware is when you buy LLMs as a service, but it's good to know that HPE GreenLake builds its services on best-in-class technology.
A fully tuned AI software stack
The LLM software story is as complex as the hardware story, requiring expert composition of multiple layers of special-purpose frameworks, tools, and libraries. Fortunately, HPE does the heavy lifting for us. HPE GreenLake for LLMs includes today's most popular AI tools and frameworks, plus the necessary NVIDIA software stack, all integrated with HPE's own Machine Learning Development Environment, its Machine Learning Data Management Software, and HPE Ezmeral Data Fabric.
[Image: HPE GreenLake for LLMs. Credit: Hewlett Packard Enterprise]
Analyst view
HPE's story with GreenLake is that it offers both traditional private cloud and hybrid cloud capabilities, complemented by what HPE calls "flex solutions" for horizontal and vertical workloads. The new GreenLake for LLMs is the first of these flex solutions.
The company promises that GreenLake for LLMs won't be its last workload-as-a-service effort, hinting at future AI-as-a-service offerings for domains such as climate modeling, drug discovery, and financial services. This expands HPE GreenLake from its infrastructure-centric beginnings into a set of very cloud-like services.
[Image: HPE GreenLake portfolio. Credit: Hewlett Packard Enterprise]
I love how HPE has embraced GreenLake. The company's strategy is sound, and the solution is desperately needed. Leveraging Cray assets for demanding AI workloads is very smart. But HPE forces us to think differently about the company: is it a traditional infrastructure vendor offering consumption-based services, or is HPE a new breed of cloud provider? The lines are blurring rapidly.
No other technology company offers anything comparable to what HPE delivers with its expanded GreenLake solution. HPE provides the flexible, consumption-based infrastructure model promised by the cloud, whether on-premises or colocated. If you have a workload that requires more complex infrastructure than you can practically buy and maintain, HPE can offer it as a GreenLake flex solution. The new GreenLake service for LLMs is just the first. I can't wait to see what comes next.
Disclosure: Steve McDowell is an industry analyst, and NAND Research is an industry analyst firm that provides, or has provided, research, analysis, and advisory services to many technology companies, which may include companies mentioned in this article. Mr. McDowell holds no equity positions in any of the companies mentioned in this article.
