Philadelphia and Stamford, Connecticut — In key examples of how carriers are using AI to improve operations and develop new services, Comcast and Charter made separate announcements about testing and deploying AI processing using NVIDIA technology.
The announcements show how carriers hope to test real-time AI applications that run just milliseconds from their customers, enabling faster, more responsive experiences for the next wave of AI and improving network performance for low-latency, compute-intensive applications such as animated film production, gaming, and advertising.
Such applications could also provide a competitive advantage as cable operators and 5G fixed wireless compete for broadband customers.
In its announcement, Charter’s Spectrum also highlighted how implementing AI technology in its network could significantly improve and speed up the creation of animated films and special effects. Spectrum’s network serves Hollywood and Los Angeles, where many movies and special effects are produced.
Both announcements were made at NVIDIA’s GTC event in San Jose on March 17th.
In an announcement, Comcast called the introduction of NVIDIA AI “a groundbreaking initiative that brings AI processing with NVIDIA GPUs closer to customers than ever before, accelerating the development of next-generation AI applications across America. The first-of-its-kind collaboration will test the performance of AI workloads running directly at the edge of Comcast’s network, in regional facilities just milliseconds from where customers live and work.”
“The industry is moving to a more distributed AI infrastructure, and Comcast now operates the network to support it,” said Comcast Chief Network Officer Elad Nafshi. “The NVIDIA AI Grid vision requires intelligent infrastructure delivered right to customers’ doorsteps. By deploying NVIDIA GPUs directly to the edge cloud, we can explore what’s possible when AI inference takes just milliseconds from the end user.”
Ronnie Vasishta, senior vice president of AI and communications at NVIDIA, said: “Distributed AI grids are the next big opportunity for the telecommunications industry, and Comcast’s deep, national network is the perfect place to build them. By bringing intelligent AI inference to the network edge, Comcast can unlock essential cost efficiencies while delivering deterministic, low-latency experiences to customers at massive concurrency. This collaboration will power the next generation of hyper-personalized experiences that run just milliseconds from users.”
The demonstration leverages Comcast’s nationally distributed architecture, which serves 65 million homes and businesses, to show how AI at the network edge can deliver faster, smarter, and more responsive experiences. Comcast says this will result in faster apps, more relevant recommendations, smoother games, and instantly responsive AI-powered tools.
Comcast also noted that by deploying advanced DOCSIS 4.0 FDX nodes, smart amplifiers, and intelligent gateways across its footprint, it can support real-time AI inference at scale that is not possible with traditional centralized, fiber-only, or wireless networks.
As more AI workloads move from distant data centers to local edge locations, Comcast said its architecture positions it as a key contributor to the emerging AI grid for next-generation AI-driven services.
Comcast will initially focus on three use cases aimed at demonstrating the benefits of running AI workloads at the network edge.
- Personalized advertising agent. This is an advanced ad-serving engine powered by Decart’s real-time AI video model. Decart’s technology can customize video ads down to the household level using attributes such as language, content preferences, household size, and other non-sensitive demographic categories, creating highly relevant experiences for viewers while increasing efficiency for advertisers.
- Small business concierge agent. Leveraging Personal AI’s small language model (SLM) and memory platform deployed on HPE ProLiant servers, it provides an AI-powered “front desk” service that can greet customers, manage appointments, answer questions, and support small and medium-sized businesses in their daily operations.
- Game lag reduction. Delivering ultra-low-latency streaming for online games, the AI grid brings GPU resources physically closer to players. This significantly improves responsiveness and overall gameplay quality, building on the low-latency technology Comcast rolled out last year for NVIDIA GeForce NOW and other applications.
In a separate announcement from Charter’s Spectrum, the operator said it is using the NVIDIA AI Grid reference design on Spectrum’s fiber broadband network to deploy remote graphics processing units (GPUs) at the network edge to support latency-sensitive applications and compute-intensive use cases.
Specifically, Spectrum is demonstrating a use case for an enterprise-grade, low-latency, remote NVIDIA AI infrastructure built on NVIDIA RTX PRO 6000 Blackwell Server Edition technology and a distributed AI grid.
Spectrum reports that the solution will enable animation artists to render blockbuster-level CGI using GPU computing resources located near the edge of Spectrum’s fiber-powered broadband network. The proximity of Spectrum’s edge compute infrastructure (ECI) to the studio, combined with its 100 Gbps low-latency fiber network, extends the power of the NVIDIA AI Grid to remote workstations.
Creating a movie means rendering hundreds of thousands of images, or “frames,” and stitching them together to tell an entire story. Rendering each frame requires a huge amount of processing power. Coupled with the fact that centralized cloud environments can introduce delays that impact time-sensitive graphics processing and AI workloads, this makes for a difficult technical problem.
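A rough back-of-envelope sketch makes the scale concrete. The frame rate, runtime, and per-frame render cost below are illustrative assumptions, not figures from the announcements:

```python
# Back-of-envelope: why feature-film rendering needs so much compute.
# Assumptions (illustrative, not from the article): 24 fps theatrical
# frame rate, a 100-minute runtime, and 2 GPU-hours per final frame.

FPS = 24                  # standard theatrical frame rate
RUNTIME_MIN = 100         # assumed feature length in minutes
GPU_HOURS_PER_FRAME = 2   # illustrative per-frame render cost

frames = FPS * RUNTIME_MIN * 60          # total frames in the film
gpu_hours = frames * GPU_HOURS_PER_FRAME # total render compute

print(f"{frames:,} frames")       # → 144,000 frames
print(f"{gpu_hours:,} GPU-hours") # → 288,000 GPU-hours
```

Even under these modest assumptions, a single feature runs into the hundreds of thousands of frames, which is why studios farm rendering out to large GPU pools rather than local workstations.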
Spectrum said that by bringing NVIDIA RTX PRO 6000 Blackwell GPUs to the edge of the Spectrum network, customers will gain faster and more reliable access to NVIDIA AI infrastructure, allowing CGI artists to create large-scale visual stories more efficiently.
“Spectrum is supporting the next wave of enterprise workloads by providing connectivity and infrastructure for real-time applications,” said Rich DiGeronimo, Spectrum’s president of products and technology. “Our advantage is a footprint of more than 1 million miles of infrastructure delivering gig-plus speeds to tens of millions of residential, business, and enterprise customers. We have the scale to deliver the speed, low latency, and reliability that higher-performance GPU and AI applications require. Our work with NVIDIA will expand how connectivity companies can deliver real-time graphics rendering. It shows how we can bring performance closer to the level required not just in the entertainment industry, but in any industry.”
More specifically, Spectrum said the deployment leveraged NVIDIA’s AI Grid reference design. This provides operators with an integrated hardware and software platform to build, deploy, and manage GPUs and AI across distributed sites.
The collaboration leverages Spectrum’s fiber broadband network and ECI, with the ability to scale to hundreds of megawatts of power from more than 1,000 edge data centers and hubs located within 10 milliseconds, and in some cases within 5 milliseconds, of the 500 million devices in homes and businesses connected to Spectrum’s network.
This is important because new GPU- and AI-native apps must run with predictable latency, support high concurrency, and deliver the best cost per token at scale. This initial deployment pairs an extremely low-latency application with a unique distributed edge GPU solution.
“The shift to real-time, AI-native applications is driving demand for distributed infrastructure that can deliver predictable, low latency at scale,” said Chris Penrose, global VP of communications at NVIDIA. “Spectrum’s fiber network and edge computing infrastructure expands the power of the NVIDIA AI Grid to deliver performance where it’s needed most, right where movies are made.”
