CoreWeave confirmed a multi-year contract with Anthropic to support development and deployment of the Claude family of AI models. The rollout will begin later this year, with computing power coming online in stages.
Anthropic uses CoreWeave’s cloud platform to run production-scale workloads. The company joins a growing group of AI model providers on CoreWeave infrastructure, with nine of the 10 largest providers now using the platform.
Michael Intrator, co-founder, CEO and chairman of CoreWeave, said: “AI is no longer just infrastructure, it’s turning models into platforms that have real-world impact. We’re excited to work with Anthropic at the center of where models work and perform in production. This is exactly the real-world deployment of AI that CoreWeave is built on.”
The agreement initially focuses on a phased rollout and may expand over time as demand increases.
Expanding the Meta partnership
CoreWeave also expanded its existing partnership with Meta through a long-term agreement valued at approximately $21 billion. The agreement runs until December 2032 and supports Meta’s AI development and inference workloads.
This infrastructure will be deployed in multiple locations, including initial use of NVIDIA’s Vera Rubin platform. The distributed setup is designed to support performance, resiliency, and scalability as workloads grow.
Intrator added: “This is another example of leading enterprises choosing CoreWeave’s AI Cloud to run their most demanding workloads.”
Growing demand for AI infrastructure
This announcement reflects the continued increase in demand for high-performance AI infrastructure, especially as organizations move from experimentation to production deployments.
CoreWeave positions its platform around performance and efficiency for modern AI workloads, including MLPerf benchmark performance and recognition in independent evaluations of AI cloud systems.
The scale of the Meta deal, along with new partnerships such as the one with Anthropic, signals increasing pressure on infrastructure providers to deliver the computing power needed to support large-scale AI models across developer, startup, and enterprise environments.
