Stock Markets February 10, 2026

Cisco Introduces Silicon One G300 to Accelerate AI Data Center Networking

New 3nm switch chip and router aim to boost interconnect efficiency and compete with Broadcom and Nvidia in AI infrastructure

By Avery Klein

Cisco unveiled the Silicon One G300 switch chip and an accompanying router designed to improve data movement across very large AI deployments. Built on TSMC's 3-nanometer process and slated to reach the market in the second half of the year, the G300 includes latency- and congestion-mitigation features Cisco calls "shock absorbers"; the company says microsecond-scale rerouting can speed certain AI workloads by up to 28 percent.

Key Points

  • Cisco launched the Silicon One G300 switch chip and a router aimed at improving data movement inside large AI data centers.
  • The G300 will be fabricated on TSMC's 3-nanometer process and is expected to ship in the second half of the year.
  • Cisco says the chip's "shock absorber" features and microsecond-scale automatic rerouting can make some AI workloads up to 28% faster.
  • The launch directly targets the AI networking market, where Broadcom and Nvidia also compete.

SAN FRANCISCO, Feb 10 - Cisco Systems has introduced a new switch chip and router that the company says are intended to improve data throughput inside extremely large AI data centers. The Silicon One G300, which Cisco expects to begin selling in the second half of the year, is designed to move traffic among the processors that train and serve AI models across hundreds of thousands of links.

The chip will be manufactured using Taiwan Semiconductor Manufacturing Co's 3-nanometer process. It includes several new features Cisco calls "shock absorbers," meant to keep networks of AI accelerators from becoming congested during sudden, high-volume traffic spikes, Martin Lund, executive vice president of Cisco's Common Hardware Group, told Reuters.

According to Cisco, the G300 can make some AI computing tasks complete 28 percent faster. Part of that gain comes from the chip's ability to re-route traffic automatically around points of congestion or failure within microseconds, reducing end-to-end inefficiencies when tens of thousands or hundreds of thousands of connections are active.

"This happens when you have tens of thousands, hundreds of thousands of connections - it happens quite regularly," Lund said. "We focus on the total end-to-end efficiency of the network."
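Cisco has not published implementation details, but the behavior Lund describes, detecting a congested or failed link and shifting traffic to an alternate path within microseconds, resembles standard congestion-aware path selection. The sketch below is purely illustrative; every name, data shape, and threshold in it is invented for exposition and does not reflect the G300's actual design.

```python
# Illustrative sketch of congestion-aware rerouting (not Cisco's design).
# Each destination has several candidate paths; when the active path's
# load crosses a threshold or the path goes down, traffic shifts to the
# least-loaded healthy alternative -- the "shock absorber" idea in miniature.

CONGESTION_THRESHOLD = 0.8  # hypothetical fraction of link capacity

def pick_path(paths):
    """Return the active path's name if healthy, else the least-loaded alternative.

    `paths` is a list of dicts: {"name": str, "load": float, "up": bool},
    ordered with the currently active path first.
    """
    active = paths[0]
    if active["up"] and active["load"] < CONGESTION_THRESHOLD:
        return active["name"]  # no congestion: stay on the active path
    alternatives = [p for p in paths[1:] if p["up"]]
    if not alternatives:
        return active["name"]  # nowhere better to go; keep the active path
    return min(alternatives, key=lambda p: p["load"])["name"]
```

In a real switch ASIC this decision runs in hardware per packet or per flow, which is what makes microsecond-scale reaction possible; the Python above only conveys the selection logic.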

The product announcement positions Cisco directly against other suppliers targeting data center networking for AI workloads. Nvidia has recently included a networking chip among the key components of its newest systems, while Broadcom markets its Tomahawk family of chips to the same addressable market.


Market context

Cisco framed the G300 as a component intended to help data center operators manage the high interconnect demands of large-scale AI systems. The company linked the product to the broader AI infrastructure market, where sizeable capital spending is focused on speeding communication between compute elements.

Product timing and manufacturing

  • The Silicon One G300 is expected to be commercially available in the second half of the year.
  • Production will use TSMC's 3-nanometer chipmaking technology.

Performance claims

Cisco states some AI tasks may run up to 28 percent faster due to the chip's traffic management capabilities, which include microsecond-scale automatic rerouting to avoid congestion.


Note on limitations: The article reports Cisco's stated expectations and product plans as described by company executives. Numbers and performance claims reflect Cisco's public statements and product positioning.

Risks

  • Performance and timing claims are Cisco's own; actual customer outcomes and shipping schedules may vary, which matters for data center operators and networking vendors weighing the product's competitive impact.
  • The G300 enters a market with established alternatives from Broadcom and the networking components built into Nvidia's systems, making customer adoption and market share uncertain for semiconductor suppliers and AI infrastructure spending.
  • The cited benefits depend on network-scale behaviors Cisco describes, such as absorbing spikes across tens of thousands to hundreds of thousands of connections; performance in diverse customer deployments may differ, influencing procurement decisions at hyperscalers and cloud providers.
