HPE Unveils AI Cloud Designed for Large Language Models

As announced at HPE Discover 2023, Hewlett Packard Enterprise is expanding its HPE GreenLake portfolio to enter the AI cloud market.

The new offering provides on-demand, multi-tenant supercomputing cloud services for large language models, available to any enterprise, from start-ups to Fortune 500 companies.

HPE GreenLake for Large Language Models (LLMs) enables enterprises to privately train, tune, and deploy AI at scale on a sustainable supercomputing platform that combines HPE's AI software and supercomputers.

HPE GreenLake for LLMs is delivered in partnership with HPE's first partner, German AI start-up Aleph Alpha, providing users with a field-proven, ready-to-use LLM for use cases that require text and image processing and analysis.

HPE GreenLake for LLMs is the first in a series of planned industry- and domain-specific AI applications from HPE, with future applications supporting climate modeling, healthcare and life sciences, financial services, manufacturing, and transportation.

Antonio Neri, President and CEO of HPE, said: “The AI market is reaching a generational shift that will be as transformative as the web, mobile, and cloud.”

“By providing large language models and a wide range of AI applications running on HPE's proven, sustainable supercomputers, we are making AI, once the domain of well-funded government labs and the global cloud giants, accessible to everyone.”

“Organizations can now leverage AI to drive innovation, disrupt markets, and achieve breakthroughs with an on-demand cloud service that trains, tunes, and deploys models responsibly and at scale,” Neri said.

Unlike general-purpose cloud offerings that run multiple workloads in parallel, HPE GreenLake for LLMs runs on an AI-native architecture designed to dedicate its full computing capacity to a single large-scale AI training or simulation workload. The offering supports AI and HPC jobs running on hundreds or thousands of CPUs or GPUs simultaneously.
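Dedicating thousands of accelerators to one training job typically means data parallelism: each worker processes a shard of the batch, per-worker gradients are averaged (an all-reduce), and every worker applies the same synchronized update. A minimal pure-Python sketch of that idea, using a toy one-parameter model and a hypothetical worker count (not HPE's implementation):

```python
# Toy data-parallel training step: shard a batch across workers,
# compute per-worker gradients, then average them before applying
# one synchronized update to the shared weight.

def local_gradient(w, shard):
    # Gradient of mean squared error for the model y = w * x on one shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, batch, n_workers, lr):
    shard_size = len(batch) // n_workers
    shards = [batch[i * shard_size:(i + 1) * shard_size] for i in range(n_workers)]
    grads = [local_gradient(w, s) for s in shards]  # run in parallel in practice
    avg_grad = sum(grads) / n_workers               # "all-reduce": average across workers
    return w - lr * avg_grad                        # every worker applies the same update

# Fit y = 3x from 64 samples sharded across 8 simulated workers.
batch = [(x, 3.0 * x) for x in range(1, 65)]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, batch, n_workers=8, lr=0.0005)
print(round(w, 2))  # w converges to roughly 3.0
```

Because the shards are equal-sized, averaging the per-shard gradients reproduces the full-batch gradient exactly, which is why the result matches a single-worker run.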

HPE GreenLake for LLMs includes access to Luminous, Aleph Alpha's pre-trained large language model, available in multiple languages including English, French, German, Italian, and Spanish.

With the LLM, customers can use their own data to train and fine-tune customized models and gain real-time insights based on their proprietary knowledge.
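Fine-tuning in this sense usually means keeping a pre-trained model's weights frozen and training a small number of extra parameters on private data. A toy illustration of that pattern, where the "pre-trained" feature function, the data, and the learning rate are all illustrative stand-ins (not Luminous or HPE's software):

```python
# Toy fine-tuning: a frozen "pre-trained" feature extractor plus a small
# trainable head, adapted to private data with gradient descent.

def pretrained_features(x):
    # Stand-in for a frozen pre-trained backbone: its parameters never change.
    return [x, x * x]

def predict(head, x):
    return sum(w * f for w, f in zip(head, pretrained_features(x)))

def fine_tune(head, data, lr=0.02, steps=2000):
    # Full-batch gradient descent on the small trainable head only.
    n = len(data)
    for _ in range(steps):
        grads = [0.0, 0.0]
        for x, y in data:
            err = predict(head, x) - y
            for j, f in enumerate(pretrained_features(x)):
                grads[j] += 2 * err * f / n
        head = [w - lr * g for w, g in zip(head, grads)]
    return head

# "Private" data generated by y = 2x + 0.5x^2; only the head is tuned.
private_data = [(x, 2.0 * x + 0.5 * x * x) for x in (1.0, 2.0, 3.0)]
head = fine_tune([0.0, 0.0], private_data)
print([round(w, 2) for w in head])  # head recovers approximately [2.0, 0.5]
```

The design point is that the expensive pre-training is done once, while each customer trains only the small head on data that never leaves their environment.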

Additionally, it will enable companies to build and sell a variety of AI applications, integrate them into their workflows, and unlock business and research-driven value.

Jonas Andrulis, Founder and CEO of Aleph Alpha, said: “Using HPE's supercomputers and AI software, we efficiently and quickly trained Luminous, a large language model that critical businesses such as banks, hospitals, and law firms can use as a digital assistant to speed up decision-making and save time and resources.”

“We are proud to be the launch partner for HPE GreenLake for Large Language Models. By expanding our collaboration with HPE, we will extend Luminous to the cloud and offer it as a service to our end customers, and we look forward to enabling new applications for business and research.”

HPE GreenLake for LLMs runs on HPE Cray XD supercomputers and is available on demand, eliminating the need for customers to purchase and manage supercomputers of their own, which are typically expensive, complex, and demand specialized expertise.

It leverages the HPE Cray Programming Environment, a fully integrated software suite for optimizing HPC and AI applications that provides a complete set of tools for developing, porting, debugging, and tuning code.

Additionally, the supercomputing platform supports HPE's AI/ML software, including the HPE Machine Learning Development Environment for rapidly training models at scale and HPE Machine Learning Data Management Software for integrating, tracking, and auditing data, enabling reproducible AI and accurate, trustworthy models.

HPE GreenLake for LLMs will run in colocation facilities, starting with QScale in North America, whose purpose-built design supports supercomputing scale and capacity powered by nearly 100% renewable energy.

HPE is now accepting orders for HPE GreenLake for LLMs and expects availability in North America by the end of calendar year 2023, with availability in Europe to follow early next year.

In addition to introducing HPE GreenLake for LLMs, HPE announced an expanded portfolio of AI inference computing solutions to accelerate time to value across industries including retail, hospitality, manufacturing, and media and entertainment.

These systems are tuned to target edge and data center workloads such as computer vision at the edge, generative visual AI, and natural language processing AI.

These AI solutions are based on the new HPE ProLiant Gen11 servers, purpose-built to integrate the advanced GPU acceleration essential for AI performance. HPE ProLiant DL380a and DL320 Gen11 servers improve AI inference performance by more than 5x over previous models.


