As artificial intelligence workloads continue to push the thermal limits of traditional data center cooling methods, major industry players are pivoting toward liquid-based solutions. Amazon Web Services (AWS) and Equinix are among the latest to implement advanced cooling technologies to keep up with the heat generated by powerful AI chips.
Equinix Integrates Two-Phase Cooling for Real-World Testing
Equinix, a global leader in digital infrastructure and colocation services, has taken a major step by integrating Accelsius’s NeuCool IR80 system. This direct-to-chip, two-phase liquid cooling solution is scheduled for deployment in Q3 2025 at Equinix’s Co-Innovation Facility (CIF), located within the DC15 International Business Exchange (IBX) data center in Ashburn, Virginia.
The CIF was designed to provide a collaborative environment where Equinix and its partners can test and showcase emerging technologies. According to Accelsius CEO Josh Claman, the real benefit of this collaboration lies in giving customers a firsthand experience with next-gen cooling systems. “Bringing the tech to life helps potential users understand its practical value,” Claman noted.
A notable feature of Accelsius’s technology is its ability to operate effectively with slightly elevated water temperatures. This approach exploits the thermal headroom of modern processors, which can tolerate increases of 6–8°C without performance degradation. Running the coolant warmer reduces the need for energy-intensive cooling compressors, improving overall energy efficiency.
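To see why warmer coolant saves energy, consider a rough back-of-the-envelope model: the higher the supply-water temperature the IT equipment can tolerate, the more hours per year dry coolers can reject heat to outdoor air without running compressors. The sketch below uses a toy sinusoidal climate and an assumed 5°C heat-exchanger approach; none of the numbers come from Accelsius or Equinix.

```python
import math

def free_cooling_hours(supply_water_c, approach_c=5.0):
    """Count hours per year when dry coolers alone can hold the loop at
    supply_water_c, i.e. when outdoor air plus the heat-exchanger approach
    stays at or below that temperature. The climate model is a toy sinusoid
    (mean 12 °C, ±12 °C seasonal swing, ±6 °C daily swing), purely illustrative."""
    hours = 0
    for h in range(8760):  # one year of hourly samples
        seasonal = 12 + 12 * math.sin(2 * math.pi * h / 8760 - math.pi / 2)
        daily = 6 * math.sin(2 * math.pi * (h % 24) / 24 - math.pi / 2)
        if seasonal + daily + approach_c <= supply_water_c:
            hours += 1
    return hours

# Each few degrees of extra supply-water tolerance buys many more
# compressor-free hours per year in this toy model.
for t_c in (17, 25, 32, 40):
    print(f"{t_c} °C supply water -> {free_cooling_hours(t_c)} free-cooling hours/year")
```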
Equinix’s interest in this system also stems from its involvement in the COOLERCHIPS initiative, an ARPA-E program launched by the U.S. Department of Energy in 2023. The program aims to reduce the energy consumed by cooling to less than 5% of a data center’s total IT load.
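In concrete terms, that target means a facility with a hypothetical 10 MW IT load would need to keep total cooling power under 0.5 MW. The quick check below uses made-up numbers to illustrate the arithmetic; it is not drawn from the COOLERCHIPS program documents.

```python
def cooling_overhead(it_power_mw, cooling_power_mw, target=0.05):
    """Report cooling power as a fraction of IT power against a target fraction."""
    fraction = cooling_power_mw / it_power_mw
    verdict = "meets" if fraction < target else "misses"
    return f"cooling = {fraction:.1%} of IT load ({verdict} the <{target:.0%} goal)"

# Hypothetical 10 MW IT load: 0.4 MW of cooling clears the bar,
# while 1.5 MW (closer to a conventional chilled-water plant) does not.
print(cooling_overhead(10.0, 0.4))
print(cooling_overhead(10.0, 1.5))
```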
AWS Optimizes GPU Performance with Modular Liquid Cooling
Meanwhile, AWS has tailored its own cooling approach to support its most demanding AI hardware. The company introduced In-Row Heat Exchangers (IRHX), a custom-engineered system built to accommodate Nvidia’s Blackwell GPUs—currently some of the most thermally demanding chips used for AI model training and inference.
The IRHX setup includes three core parts: a centralized water distribution cabinet, an integrated pump system, and modular fan-coil units arranged within server rows. Like Equinix’s implementation, AWS uses direct liquid contact through cold plates mounted on chips to draw away heat. The warmed liquid circulates through heat exchanger coils, where fans cool it, similar to how automotive radiators operate.
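The heat-transfer arithmetic behind cold-plate cooling is straightforward: the more the coolant is allowed to warm as it crosses the plate, the less of it must be pumped. The sketch below applies the sensible-heat relation Q = ṁ·c·ΔT for a single-phase water loop; the chip wattage, rack size, and 7°C temperature rise are illustrative assumptions rather than AWS figures.

```python
WATER_CP_J_PER_KG_K = 4186.0  # approximate specific heat of water

def coolant_flow_lpm(heat_load_w, delta_t_c):
    """Liters per minute of water needed to carry heat_load_w away while the
    coolant warms by delta_t_c across the cold plate (Q = m_dot * c_p * dT)."""
    kg_per_s = heat_load_w / (WATER_CP_J_PER_KG_K * delta_t_c)
    return kg_per_s * 60.0  # roughly 1 kg of water per liter

# Hypothetical numbers: a 1,000 W accelerator and a 72-module rack, with the
# loop allowed to warm by 7 °C across the cold plates.
print(f"per chip: {coolant_flow_lpm(1_000, 7):.2f} L/min")
print(f"per rack: {coolant_flow_lpm(72_000, 7):.1f} L/min")
```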
While direct-to-chip cooling isn’t new—vendors like CoolIT, Delta Electronics, Motivair, and Vertiv have long offered similar solutions—AWS introduces greater modularity. By decoupling pumps from fan units, AWS enables one pumping station to handle multiple fan modules. This modularity means cooling capacity can scale alongside server density and thermal load, providing AWS with flexible, row-level cooling control across data center sites.
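One way to picture that scaling is as a simple sizing exercise: choose enough fan-coil modules to match a row’s heat load, then confirm that a shared pumping station can feed them all. The per-module and per-station capacities below are hypothetical placeholders, not published IRHX specifications.

```python
import math

def size_row_cooling(row_heat_kw, fan_coil_kw=60.0, pump_station_kw=600.0):
    """Pick how many fan-coil modules a server row needs and how many shared
    pumping stations can feed them. All capacities here are hypothetical."""
    fan_coils = math.ceil(row_heat_kw / fan_coil_kw)
    pump_stations = math.ceil(row_heat_kw / pump_station_kw)
    return fan_coils, pump_stations

# As rack density climbs, only the fan-coil count grows at first;
# the pumping station is shared by every module in the row.
for row_kw in (240, 480, 900):
    coils, pumps = size_row_cooling(row_kw)
    print(f"{row_kw} kW row -> {coils} fan-coil modules, {pumps} pumping station(s)")
```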