Cisco claims that its AI chips G200 and G202 will be the “most powerful” networking chips to power AI/ML workloads. (Source – Cisco)
- Cisco says the chips have been tested by five of the six major cloud providers.
Two months after Broadcom released a new chip to connect supercomputers for artificial intelligence (AI) work, Cisco Systems announced a similar one. The networking giant has launched a line of networking chips tailored for AI supercomputers, a product that competes directly with offerings from Broadcom and Marvell Technology.
The G200 and G202 networking chips were announced on June 20, three and a half years after the launch of Cisco Silicon One. In 2019, Cisco made headlines with the announcement of Cisco Silicon One, its foray into the merchant networking silicon business.
To explain the purpose Cisco Silicon One serves: today's internet infrastructure is under growing strain from VR/AR, AI, 5G, 10G, 16K streaming, adaptive cybersecurity, quantum computing, and more. Cisco Silicon One is the networking giant's answer to this problem.
It is universally adaptable and programmable, and is intended to support the service provider and web-scale market segments as well as the needs of fixed and modular platforms, Rakesh Chopra, Cisco Fellow and former Principal Engineer, wrote in a blog post.
He added that the expansion strengthens the Cisco Silicon One lineup, which now spans 3.2 Tbps to 51.2 Tbps while offering a unified architecture and software development kit to ensure seamless convergence without compromise.

New generations are typically released every 18 to 24 months, a pace of innovation roughly twice as fast as normal silicon development. Cisco did not name them, but shared that its Silicon One series of chips is currently being tested by five of the six major cloud providers.
According to data collected by Synergy Research Group, the world's four largest cloud providers are Amazon Web Services, Microsoft Azure, Google Cloud, and Alibaba Cloud. Like Broadcom, Cisco is also a major supplier of networking equipment such as Ethernet switches, which connect devices such as computers, laptops, routers, servers, and printers to local area networks.
The rise of AI applications such as OpenAI's ChatGPT and Alphabet's Bard presents new challenges for networks within data centers. To come up with human-like answers to questions, such systems need to be trained on vast amounts of data, and the job is far too big for any one computer chip.
Instead, the job must be split across thousands of chips called graphics processing units (GPUs), which act together like one giant computer and can process a job for weeks or even months. How fast the individual chips can communicate with one another therefore matters a great deal.
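A back-of-the-envelope sketch shows why that communication speed matters. In a standard ring all-reduce (a common way GPUs synchronize gradients, not something the article attributes to Cisco), each GPU must move roughly 2·(k−1)/k times the gradient size per step, so link bandwidth directly bounds training throughput. All figures below are hypothetical assumptions for illustration:

```python
# Illustrative estimate of gradient-synchronization time per training step.
# Model size, GPU count, and link speeds are hypothetical assumptions.

def ring_allreduce_seconds(gradient_bytes: float, num_gpus: int,
                           link_gbps: float) -> float:
    """Approximate ring all-reduce time: each GPU sends and receives
    about 2*(k-1)/k of the gradient over its network link."""
    traffic = 2 * (num_gpus - 1) / num_gpus * gradient_bytes
    return traffic / (link_gbps * 1e9 / 8)  # convert Gb/s to bytes/s

# A hypothetical 10-billion-parameter model with 2-byte (FP16) gradients,
# synchronized across 1,024 GPUs.
grad_bytes = 10e9 * 2
for gbps in (100, 400):  # per-GPU link speed in Gb/s
    t = ring_allreduce_seconds(grad_bytes, 1024, gbps)
    print(f"{gbps} Gb/s link: ~{t:.2f} s per synchronization")
```

Quadrupling link speed cuts synchronization time by the same factor, which is why switch-chip bandwidth is the headline number in these announcements.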
Cisco Silicon One Product Family
When Broadcom announced its new chip in April, it said the Jericho3-AI could connect up to 32,000 GPU chips. Cisco's announcement this week made the same claim, with the company's latest generation of Ethernet switches capable of connecting up to 32,000 GPUs. Cisco even emphasized that the G200 and G202 double the performance of the previous generation.
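The 32,000-GPU figure is consistent with simple fat-tree port arithmetic. As a hedged sketch (the port carving and GPU link speed below are assumptions for illustration, not Cisco's published reference design): a 51.2 Tbps chip can expose 512 ports of 100 Gb/s each, and a non-blocking two-tier leaf/spine network built from radix-r switches supports r²/2 endpoint ports:

```python
# Back-of-the-envelope port arithmetic for a two-tier (leaf/spine) fat-tree.
# Port counts and speeds are illustrative assumptions, not vendor specs.

def fat_tree_endpoint_ports(radix: int) -> int:
    """Max endpoint ports in a non-blocking two-tier fat-tree: each leaf
    uses half its ports down and half up, giving radix**2 / 2 capacity."""
    return radix * radix // 2

SWITCH_TBPS = 51.2   # switch-chip capacity
PORT_GBPS = 100      # assume the chip is carved into 100G ports

radix = int(SWITCH_TBPS * 1000) // PORT_GBPS   # 512 ports per chip
ports = fat_tree_endpoint_ports(radix)          # 131,072 x 100G ports
gpus = ports // 4                               # one 400G GPU = 4 x 100G
print(f"radix={radix}, endpoint ports={ports}, 400G GPUs={gpus}")
```

Under these assumptions the two-tier network tops out at 131,072 endpoint ports, or 32,768 GPUs attached at 400 Gb/s each, matching the "up to 32,000 GPUs" claim.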
“The G200 and G202 will be the most powerful networking chips in the market fueling AI/ML workloads, enabling the most power-efficient networks,” said Chopra. He pointed out that the chips allow customers to perform AI and machine learning tasks with 40% fewer switches and lower latency, while being more power efficient.
Cisco's G200 and G202 also dramatically reduce network cost, power consumption, and latency. Broadcom's Jericho3-AI chip, meanwhile, is intended to compete with another supercomputer networking technology called InfiniBand.
Besides Broadcom, Marvell Technology, which makes data center networking chips, is also seeing a surge in demand for its AI products. Marvell was the industry's first data infrastructure silicon supplier to sample and commercially release products with 112G SerDes built on TSMC's 5nm process.
“AI is emerging as a key growth driver for Marvell,” CEO Matt Murphy said last month. He added that while Marvell is still in the early stages of ramping up its AI production, it expects AI revenue in fiscal 2024 to at least double from the previous year and to continue growing rapidly in the coming years.
