SAN FRANCISCO, Feb 10 - Cisco Systems has introduced a new switch chip and router that the company says are intended to improve data throughput inside extremely large AI data centers. The Silicon One G300, which Cisco expects to begin selling in the second half of the year, is designed to move traffic among the processors that train and serve AI models across hundreds of thousands of links.
The chip will be manufactured using Taiwan Semiconductor Manufacturing Co's 3-nanometer process. It includes several new features Cisco calls "shock absorbers," which are meant to keep networks of AI accelerators from becoming congested during sudden, high-volume traffic spikes, Martin Lund, executive vice president of Cisco's Common Hardware Group, told Reuters.
According to Cisco, the G300 can complete some AI computing tasks 28 percent faster. Part of that gain comes from the chip's ability to automatically re-route traffic around points of congestion or failure within microseconds, cutting end-to-end inefficiencies when tens of thousands or hundreds of thousands of connections are active.
"This happens when you have tens of thousands, hundreds of thousands of connections - it happens quite regularly," Lund said. "We focus on the total end-to-end efficiency of the network."
The announcement positions Cisco directly against other suppliers of data center networking for AI workloads. Nvidia recently made a networking chip one of the key components of its newest systems, while Broadcom markets its Tomahawk family of chips to the same customers.
Market context
Cisco framed the G300 as a component intended to help data center operators manage the high interconnect demands of large-scale AI systems. The company linked the product to the broader AI infrastructure market, where sizeable capital spending is focused on speeding communication between compute elements.
Product timing and manufacturing
- The Silicon One G300 is expected to be commercially available in the second half of the year.
- Production will use TSMC's 3-nanometer chipmaking technology.
Performance claims
Cisco states some AI tasks may run up to 28 percent faster due to the chip's traffic management capabilities, which include microsecond-scale automatic rerouting to avoid congestion.
Note on limitations: Performance figures and availability dates reflect Cisco's public statements and product positioning as described by company executives.