AMD has reached a major milestone in networking for AI-scale infrastructure with the Pensando Pollara 400GbE, the world's first network interface card (NIC) built to the Ultra Ethernet Consortium (UEC) 1.0 specification.
The launch signals AMD's growing push into high-performance networking aimed at the escalating demands of AI workloads and hyperscale computing environments.
A Quiet Breakthrough Amid Bigger Headlines
While most headlines at AMD’s recent Advancing AI event focused on the Instinct MI350 GPU line and the upcoming MI400X, an equally significant reveal flew under the radar: the official release of Pollara, a NIC built specifically for AI cluster environments. Originally announced in 2024, the card is now officially shipping — and it’s already set to be followed by a more powerful successor, the Vulcano 800GbE NIC.
What Sets Pollara Apart
The Pollara 400GbE is purpose-built for AI-scale networks requiring low latency and high throughput. It features:
Programmable RDMA transport
Hardware-based congestion control
GPU-to-GPU communication with intelligent routing
Compatibility with RoCEv2
Interoperability with NICs and switches from other vendors
In essence, it's AMD's answer to Nvidia's proprietary AI networking stack of InfiniBand and ConnectX NICs, but built on standard, open Ethernet.
The Ultra Ethernet Consortium and AMD’s Role
Pollara's launch aligns with the UEC's official release of its version 1.0 specification. The Ultra Ethernet Consortium, founded in 2023 under the Linux Foundation by AMD, Broadcom, Cisco, Intel, Meta, Microsoft, and others, is a collective of leading tech companies working to redefine networking protocols for AI and HPC; Nvidia and Google have since joined as well.
The UEC spec supports high-bandwidth data movement across CPUs, GPUs, and accelerators, while offering a more scalable and open alternative to proprietary fabrics.
Vulcano and Helios: The Next Generation
Alongside Pollara, AMD also previewed Vulcano, its next-generation 800Gb/s NIC, which offers dual interfaces to connect directly to both CPUs and GPUs. Designed for extreme scale, it complements AMD's newly announced Helios rack, a custom AI system architecture in which every GPU is linked via UALink over Ethernet, effectively turning a massive GPU cluster into a unified compute fabric, similar in concept to Nvidia's GB200 NVL72.
Oracle Backs AMD’s AI Networking Strategy
Oracle is the first major cloud provider to adopt Pollara and Helios, signaling early momentum. Though Oracle holds only around 3% of the cloud infrastructure market, far behind giants like AWS and Microsoft Azure, its early support suggests AMD's Ethernet-based AI networking approach could gain broader traction in the near future.