Microsoft announced that it has connected its large data centers in Wisconsin and Atlanta with a dedicated high-speed fiber-optic network, creating what it calls the world's first "AI superfactory."

The new system links the two sites, located approximately 1,100 miles apart, across five states, operating them as a single integrated computing complex designed specifically for artificial intelligence. Unlike traditional cloud data centers that host millions of individual applications, the new facility is built to run a single large-scale AI workload spanning multiple locations, Microsoft said.

Each data center contains hundreds of thousands of Nvidia GPUs connected through a high-speed network architecture known as an AI Wide Area Network (AI-WAN), allowing them to share processing tasks in near real time. The company is also introducing a two-story data center design that packs GPUs more densely while reducing latency, supported by a closed-loop liquid cooling system that manages heat and energy usage.

By linking sites across regions, Microsoft says it can dynamically balance workloads, pool computing power, and spread power demand across the electrical grid, reducing reliance on energy availability at any one location. The combined system will be used to train and run next-generation AI models for key partners such as OpenAI, France's Mistral AI, and Elon Musk's xAI, as well as Microsoft's own internal models.

The initiative highlights the rapidly expanding investment in AI infrastructure among global technology giants. Microsoft reported more than $34 billion in capital expenditures last quarter, much of it on data centers and GPUs, as part of its long-term strategy to meet surging demand for AI.

Read Microsoft CEO Satya Nadella's message

Today, we announced our new Fairwater data center in Atlanta.
The data center will connect with the first Fairwater site in Wisconsin and the broader Azure footprint to create the world's first AI superfactory.

Fairwater embodies our vision for a fungible fleet: infrastructure that can deliver any workload anywhere, on the right accelerator and network path, with maximum performance and efficiency.

AI workloads are evolving beyond extensive pre-training to include fine-tuning, reinforcement learning (RL), synthetic data generation, evaluation pipelines, and more. Fairwater is built to support this entire lifecycle.

Maximum density: Fairwater's two-story design and liquid cooling system allow racks to be arranged in three dimensions and packed with GPUs as densely as possible, minimizing cabling to reduce latency and increase effective bandwidth.

Fleet: Each Fairwater data center can consolidate hundreds of thousands of the latest NVIDIA GPUs into a single, coherent cluster. This provides a flexible infrastructure that can support any workload and ensures that GPUs are not left idle unnecessarily. On top of this, over 100,000 GB300s will come online this quarter alone for inference across the rest of the fleet. For us, it's all about turning every gigawatt into the maximum number of useful tokens; not all gigawatts are created equal.

Global: All Fairwater data centers connect to previous-generation AI supercomputers through a continent-spanning AI WAN, forming a truly fungible computing pool.
This allows developers to scale beyond the capacity of a single site and dynamically place workloads on the infrastructure best suited to their needs.

These innovations bring disparate generations of silicon and AI systems, across data centers and geographies, together into a single resilient system that can scale seamlessly across training and inference workloads. And this flexible AI power is available alongside all the other cloud services (compute, storage, database, and app services) your AI agents and workloads require.

This is what we mean when we talk about building fungible fleets: a single unified platform that pushes the boundaries of performance per watt and per dollar.
Amazon is building “Project Rainier”
Rivals are racing to keep pace. Amazon is building Project Rainier, a 1,200-acre complex of seven data centers in Indiana, while Google, Meta, OpenAI, and Anthropic are also making multibillion-dollar bets on AI-centric infrastructure.

Some analysts have warned that the scale of these investments could resemble a tech bubble if business customers fail to derive meaningful value from AI in the short term. But Microsoft and its peers argue that demand is sustainable, pointing to long-term contracts and rapid corporate adoption as evidence that the AI boom is far from speculative.
