New interoperability allows non-Nvidia processors to tap into previously exclusive high-speed interconnect
At this year’s Computex hardware event, Nvidia (Nasdaq: NVDA) made waves by announcing a significant expansion of its NVLink interconnect technology. The newly introduced NVLink Fusion will make this ultra-fast communication protocol available beyond Nvidia’s own chips, extending compatibility to third-party hardware, including processors from AMD and Intel.
Historically, NVLink has been a closed, Nvidia-only interconnect, designed specifically for linking the company’s own GPUs and CPUs. It enables multiple GPUs within a server or data center rack to function collectively, pooling computational power and memory resources, so that the resulting multi-GPU architecture appears to the system as a single, unified processing unit.
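NVLink itself sits below the application layer, so most software never touches it directly; what developers see is the pooled memory it enables. As a rough illustration, the minimal CUDA sketch below uses standard peer-to-peer access between two GPUs, which the driver carries over NVLink when the devices are linked by it and over PCIe otherwise. It is a generic example of how applications exercise that pooling, not an NVLink Fusion-specific interface.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch: copy a buffer directly from GPU 0 to GPU 1 using
// CUDA peer-to-peer access. On NVLink-connected GPUs the transfer
// travels over NVLink; otherwise it falls back to PCIe.
int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) {
        std::printf("At least two GPUs are required for this example.\n");
        return 0;
    }

    // Confirm that GPU 0 can directly address GPU 1's memory.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) {
        std::printf("Peer access between GPU 0 and GPU 1 is unavailable.\n");
        return 0;
    }

    // Enable direct access in both directions.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);

    // Allocate a 64 MiB buffer on each GPU.
    const size_t bytes = 64u << 20;
    void *buf0 = nullptr, *buf1 = nullptr;
    cudaSetDevice(0);
    cudaMalloc(&buf0, bytes);
    cudaSetDevice(1);
    cudaMalloc(&buf1, bytes);

    // Device-to-device copy with no staging through host memory.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();

    std::printf("Copied %zu bytes directly from GPU 0 to GPU 1.\n", bytes);

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}
```

Error handling is omitted for brevity; production code would check the return value of each CUDA call.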
Now in its fifth generation, NVLink delivers 1.8 terabytes per second (TB/s) of bidirectional bandwidth per GPU, a stark contrast to the roughly 128 GB/s ceiling of a PCIe Gen 5 x16 link. This performance leap is what makes configurations of up to 72 interconnected GPUs in a single rack practical.
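Back-of-the-envelope, and taking the per-GPU figure at face value, that is roughly fourteen times the PCIe Gen 5 ceiling (1.8 TB/s ÷ 128 GB/s ≈ 14), and a fully populated 72-GPU rack works out to about 130 TB/s of aggregate NVLink bandwidth (72 × 1.8 TB/s ≈ 129.6 TB/s).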
With NVLink Fusion, Nvidia is loosening the reins by allowing non-Nvidia accelerators to communicate using this powerful interconnect. According to the company, there will be two deployment modes: one enabling integration between custom CPUs and Nvidia GPUs, and another facilitating connections between Nvidia’s own Grace CPU line (and successors) and third-party accelerators.
“This move gives partners greater flexibility while strengthening our ecosystem,” said Dion Harris, Nvidia’s Senior Director of HPC, Cloud, and AI, during a pre-Computex media briefing. “By combining our extensive partner network with NVLink Fusion, developers can bring AI-scale computing systems to market faster.”
However, the open-door policy has its limits. Nvidia still requires that any NVLink Fusion implementation involve at least one Nvidia component—meaning, for instance, AMD can’t use it to directly connect its Epyc CPUs to Instinct GPUs.
That hasn’t deterred early adopters. Companies like MediaTek, Marvell, Alchip, Astera Labs, Synopsys, and Cadence have already signed on as NVLink Fusion licensees.
Meanwhile, competition is brewing. A rival initiative, the Ultra Accelerator Link (UALink) consortium, recently introduced the UALink 200G 1.0 Specification—a collaborative, multi-vendor alternative offering a high-bandwidth, low-latency connection standard tailored for AI and high-performance computing environments.
As Nvidia repositions NVLink to be more inclusive—albeit still Nvidia-centered—it underscores the company’s strategy to stay indispensable in an increasingly modular and collaborative AI hardware ecosystem.