Image credit: Microsoft

Ethernet Switch Silicon Doubles Bandwidth to 51.2 Tb/s

Aug. 19, 2022
Broadcom's Tomahawk 5 switch ASIC packs 512 high-performance 100G PAM4 SerDes transceivers.

Broadcom rolled out a 51.2-Tb/s switch ASIC suited for the colossal data centers used by cloud and other technology giants, touting twice the bandwidth of any other switch silicon on the market.

The Santa Clara, California-based company said the Tomahawk 5 delivers double the bandwidth of its predecessor, the Tomahawk 4 unveiled in 2020, while reducing the power and latency of moving data around data centers. The BCM78900 is the first in the Tomahawk 5 family, bringing L2 and L3 switching and routing features designed to stay a step ahead of even bandwidth-hungry hyperscalers such as Amazon, Google, and Microsoft.

The amount of data feeding into data centers from smartphones and the Internet of Things—so-called “north-south” data traffic—continues to grow. But information moving “east-west” inside data centers is also increasing, partly because companies are feeding it into artificial-intelligence (AI) software for training and inference.

Broadcom is the reigning champ in the market for the networking chips used in Ethernet switches that call the shots in the sprawling networks inside data centers. But it faces increasing competition from rivals such as Intel, Marvell, and NVIDIA, which are investing in the segment, as well as Cisco with its in-house switch silicon.

Since it introduced the first Tomahawk ASIC in 2014, Broadcom has doubled the bandwidth every 18 to 24 months. The new Tomahawk 5 switch puts Broadcom on a collision course with NVIDIA. Earlier this year, NVIDIA introduced its 51.2-Tb/s Spectrum-4 Ethernet switch, with the 4-nm chip due to begin shipping in 2023.

Broadcom expects customers to use the Tomahawk 5 to build out Ethernet switches containing 64 ports running at 800 Gb/s, 128 ports at 400 Gb/s, or 256 ports at 200 Gb/s, starting early next year. 

The switch comes packed with high-density I/O surrounding the main packet-processing engines, featuring up to 512 of its high-performance 100G PAM4 SerDes lanes. That's the same lane count as the Tomahawk 4, but with double the per-lane speed, which doubles total bandwidth to 51.2 Tb/s.
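As a quick sanity check on those numbers, here's a minimal sketch of the arithmetic relating SerDes lanes to aggregate bandwidth and to the port configurations listed above. It assumes standard Ethernet lane groupings (8, 4, or 2 lanes per port) and is not drawn from Broadcom documentation.

```python
# Rough arithmetic relating SerDes lanes to switch bandwidth and port counts.
# Figures come from this article; lane-to-port groupings are standard Ethernet
# breakouts (8x100G = 800GbE, 4x100G = 400GbE, 2x100G = 200GbE).

SERDES_LANES = 512      # 100G PAM4 SerDes lanes on the Tomahawk 5
LANE_SPEED_GBPS = 100   # Gb/s per lane

total_bandwidth_tbps = SERDES_LANES * LANE_SPEED_GBPS / 1000
print(f"Aggregate bandwidth: {total_bandwidth_tbps:.1f} Tb/s")   # 51.2 Tb/s

for port_speed in (800, 400, 200):
    lanes_per_port = port_speed // LANE_SPEED_GBPS
    ports = SERDES_LANES // lanes_per_port
    print(f"{ports} ports at {port_speed} Gb/s ({lanes_per_port} lanes each)")
```

Running the loop reproduces the configurations Broadcom cites: 64 ports at 800 Gb/s, 128 at 400 Gb/s, or 256 at 200 Gb/s.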

Pumping Up the Power Efficiency

The company pins the power-efficiency gains in the Tomahawk 5 on various improvements to the architecture at the heart of the new chip, coupled with a move to a more advanced 5-nm process node.

Power efficiency is a top priority in the world of cloud computing. It helps keep the sector’s share of global electricity demand in check and cuts the carbon footprint of the vast operations of Amazon, Google, Microsoft, and other cloud giants. Data centers could consume as much as 15% of the world’s electricity by 2025, according to estimates by Applied Materials. AI—specifically, training it—also burns through lots of power.

Ram Velaga, senior vice president of the core switching business at Broadcom, said that a single Tomahawk 5 replaces 48 of the Tomahawk 1 switches released in 2014, resulting in a 95% drop in power requirements.

Broadcom said the Tomahawk 5 comes with a host of new features. Many are designed to speed up AI workloads underserved by networking hardware built for general compute and storage.

Because training AI software is computationally heavy, companies split the work up over many servers. The servers, in turn, must connect to one another using high-bandwidth, low-latency networks that rarely fail.

Dealing with the Data Deluge

According to Broadcom, the Tomahawk 5 switch includes a new feature called “cognitive routing” that automatically and dynamically redirects data traversing the switch to the most lightly loaded parts of the network. This helps improve performance, as each network link in a data center can only handle a limited amount of information at once.

The real-time dynamic “load balancing” feature in the Tomahawk 5 switch can also determine when network links are experiencing issues and then reroute data to other links to reduce the risk of outages. By routing around potential traffic jams in the network, the switch can reduce latency and improve performance for AI workloads, said Broadcom. It also monitors the health of links in hardware and automatically steers traffic away from failed links.
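Broadcom hasn't detailed how these features work internally, but the core idea of dynamic load balancing can be sketched as steering each new flow to the least-loaded link that is still healthy. The sketch below is a hypothetical software illustration; the names, data structure, and selection rule are assumptions, and the switch itself implements this logic in hardware at line rate.

```python
# Minimal sketch of dynamic load balancing across uplinks, assuming each link
# exposes a utilization estimate and a health flag. Names and structure are
# hypothetical illustrations, not Broadcom's implementation.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    utilization: float   # 0.0 (idle) to 1.0 (saturated)
    healthy: bool        # monitored in hardware on the real switch

def pick_link(links: list[Link]) -> Link:
    """Steer new traffic to the least-loaded link that is still healthy."""
    candidates = [l for l in links if l.healthy]
    if not candidates:
        raise RuntimeError("no healthy uplinks available")
    return min(candidates, key=lambda l: l.utilization)

links = [Link("uplink0", 0.82, True), Link("uplink1", 0.35, True),
         Link("uplink2", 0.10, False)]   # failed link gets skipped
print(pick_link(links).name)             # -> uplink1
```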

Advanced shared packet buffering and hardware-based link redundancy round out the Tomahawk 5.

The Tomahawk 5 architecture is also specifically designed to help get congestion under control in the network. To keep the network links between servers from getting clogged, special-purpose software, which typically runs on the network interface cards (NICs) in servers, collects data about traffic traveling on the network and identifies instances where data is delayed. Like a traffic cop, the software then steps in to automatically fix the problem.
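A toy model of that feedback loop is shown below, under the assumption that the NIC-side software sees a delay estimate for its traffic and throttles the sender when delay climbs. The thresholds and the rate-adjustment rule are invented for illustration and are not Broadcom's mechanism.

```python
# Toy model of the congestion feedback loop described above: telemetry from
# the network flags delayed traffic, and the sender slows down in response.
# The thresholds and rate-adjustment rule are hypothetical illustrations.

def adjust_send_rate(current_rate_gbps: float, measured_delay_us: float,
                     target_delay_us: float = 10.0) -> float:
    """Back off when delay exceeds the target, otherwise probe back upward."""
    if measured_delay_us > target_delay_us:
        return current_rate_gbps * 0.8              # congestion detected: slow down
    return min(current_rate_gbps + 1.0, 100.0)      # otherwise ramp toward line rate

rate = 100.0
for delay in (5, 8, 25, 40, 12, 6):                 # observed queuing delay, in microseconds
    rate = adjust_send_rate(rate, delay)
    print(f"delay={delay:>3} us -> send rate {rate:.1f} Gb/s")
```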

According to Broadcom, the Tomahawk 5 contains a cluster of six Arm CPU cores that makes it easier to collect the data companies use to keep networks running smoothly. The cores also handle tasks such as high-bandwidth programmable streaming telemetry.

To help rein in connectivity costs, the Tomahawk 5 supports a 100G PAM4 interface to direct attach copper (DAC), pluggable optics, and co-packaged optics. The flexible, long-reach SerDes at the heart of the chip connects to all devices within a rack—and even between racks—without the use of retimers or other active components, said Broadcom. It can interface directly with a broad ecosystem of standard pluggable optical modules.

Broadcom also plans to co-package a Tomahawk 5 chiplet with optics using its Silicon Photonics Chiplets in Package (SCIP) platform at some point in the future to reduce the power required for optical connectivity by 50%.

Software compatibility is maintained across the StrataXGS portfolio to simplify customer designs.
