Arm-based superchip and BlueField-3 DPU power innovative architecture to enable generative AI-driven wireless communication
COMPUTEX—NVIDIA and SoftBank Corp. today announced that SoftBank plans to roll out the NVIDIA GH200 Grace Hopper™ Superchip in new distributed AI data centers across Japan.
To pave the way for the rapid global deployment of generative AI applications and services, SoftBank will work with NVIDIA to build data centers that can host generative AI and wireless applications on a multi-tenant common server platform, reducing costs and improving energy efficiency.
The platform uses the new NVIDIA MGX™ reference architecture powered by the Arm Neoverse-based GH200 Superchip, and is expected to improve performance, scalability and resource utilization for application workloads.
“As we enter an era in which AI and society coexist, demand for data processing and electricity will increase rapidly. SoftBank will provide next-generation social infrastructure to support a digital society,” said Junichi Miyagawa, president and CEO of SoftBank Corp. “By collaborating with NVIDIA, our infrastructure will be able to achieve significantly higher performance through the use of AI, including RAN optimization. We also expect to help reduce energy consumption and build a network of interconnected data centers that can share resources and host a range of generative AI applications.”
“The demand for accelerated computing and generative AI is fundamentally changing data center architectures,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA Grace Hopper is an innovation designed to process and scale out generative AI services. Like other visionary efforts to date, SoftBank is leading the world in building telecommunications networks to host generative AI services.”
The new data centers will handle both AI and 5G workloads and will be more evenly distributed across the network footprint than previous data centers. This results in lower latency, significantly lower overall energy costs and improved operation at peak capacity.
SoftBank is looking to create 5G applications for autonomous driving, AI factories, augmented and virtual reality, computer vision, and digital twins.
Virtual RAN for record-breaking throughput
NVIDIA Grace Hopper and the NVIDIA BlueField®-3 data processing unit will accelerate software-defined 5G vRAN and generative AI applications without requiring custom hardware accelerators or dedicated 5G CPUs. In addition, the NVIDIA Spectrum Ethernet switch with BlueField-3 provides highly precise timing protocol support for 5G.
Based on published data for 5G accelerators, the solution delivers breakthrough 5G speeds in an NVIDIA-accelerated 1U MGX-based server design, achieving industry-leading downlink throughput of 36 Gbps. Carriers have struggled to achieve such high downlink capacity using industry-standard servers.
New reference architecture
NVIDIA MGX enables system manufacturers and hyperscale customers to power a wide range of AI, HPC, and NVIDIA Omniverse™ applications.
By incorporating NVIDIA Aerial™ software for high-performance, software-defined, cloud-native 5G networks, these 5G base stations will allow operators to dynamically allocate computing resources and achieve 2.5x the power efficiency of competing products.
“The future of generative AI requires high-performance, energy-efficient computing like NVIDIA’s Arm Neoverse-based Grace Hopper Superchip,” said Rene Haas, CEO of Arm. “Combining Grace Hopper with the NVIDIA BlueField DPU will enable SoftBank’s new 5G data centers to run the most demanding compute- and memory-intensive applications, bringing dramatic efficiency improvements to software-defined 5G and AI on Arm.”
