Designing fluid connections for AI-driven data centers

Here’s a look at liquid cooling components built for today’s increasing power loads.

TTI Inc. has sponsored this post.

With the advent of the GB200 and other AI chip platforms, power consumption in data centers has grown exponentially. The heat these chips generate can no longer be managed by fans alone. Microchips are now enveloped in cold plates and connected through hoses and liquid cooling connectors to an external system, typically a coolant distribution unit (CDU), which circulates the liquid before routing it back through the server.
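As a rough illustration of what that heat transfer implies, the sketch below estimates the coolant flow a single cold plate would need for a given chip power and allowable temperature rise. The fluid properties and the 1,200 W figure are assumptions chosen for illustration, not specifications from Amphenol or any chip vendor.

```python
# Back-of-the-envelope sizing for a single cold-plate loop.
# All numbers below are illustrative assumptions, not vendor specs.

def required_flow_lpm(heat_load_w: float, delta_t_c: float,
                      cp_j_per_kg_k: float = 3900.0,   # assumed glycol/water mix
                      density_kg_per_l: float = 1.02) -> float:
    """Coolant flow (L/min) needed to absorb heat_load_w with a delta_t_c rise.

    Uses Q = m_dot * c_p * dT, then converts mass flow to volumetric flow.
    """
    mass_flow_kg_s = heat_load_w / (cp_j_per_kg_k * delta_t_c)
    return mass_flow_kg_s / density_kg_per_l * 60.0

if __name__ == "__main__":
    # Hypothetical 1,200 W accelerator with a 10 degC allowable coolant rise.
    print(f"{required_flow_lpm(1200, 10):.2f} L/min per cold plate")
```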

As these liquid cooling components take on greater design importance, manufacturers like Amphenol Industrial are advancing fluid connection technologies to meet the needs of this rapidly evolving space.


One of the main considerations for engineers designing liquid cooling systems is intermateability. This is where the Open Compute Project (OCP) comes in. OCP is a collaboration of thought leaders and subject matter experts from the data center, AI, server, and microchip industries. They meet weekly in different workstreams to create universal specs that can be adopted globally.

For example, Open Rack Version 3 (ORV3) is a server rack specification that defines the rack's height, length, number of drawers, and how liquid cooling flows through it. It is a standard that manufacturers follow so that data center infrastructure around the world remains interoperable.

The OCP specs are constantly evolving. The ORV3 is going through another iteration in which it gets 6 inches deeper and about 200 pounds heavier because a bus bar has been added, and that bus bar is now liquid cooled. These changes are driven by the next generation of AI chips, which consume more power and generate more heat.

“Common challenges right now with quick disconnects (QDs) are figuring out how to create more flow with the same or smaller footprint, without having pressure drops,” says Albert Pinto, business development manager at Amphenol Industrial.
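To see why shrinking a QD's footprint works against pressure drop, the sketch below treats the coupling's internal valve as a simple orifice-like restriction, so the drop scales roughly with the square of flow and the fourth power of the effective bore. The discharge coefficient, coolant density, and bore sizes are assumed values for illustration only; real couplings are characterized by a published flow coefficient rather than this simplified model.

```python
import math

# Simplified model: treat the QD's internal valve as an orifice-like restriction,
# so dP = (rho / 2) * (Q / (Cd * A))^2. Cd and all numbers are assumptions.

def qd_pressure_drop_kpa(flow_lpm: float, bore_mm: float,
                         cd: float = 0.7, density: float = 1020.0) -> float:
    """Approximate pressure drop (kPa) across a coupling of the given effective bore."""
    q = flow_lpm / 1000.0 / 60.0                  # volumetric flow, m^3/s
    area = math.pi * (bore_mm / 1000.0) ** 2 / 4  # effective flow area, m^2
    velocity = q / (cd * area)                    # effective jet velocity, m/s
    return density / 2.0 * velocity ** 2 / 1000.0

if __name__ == "__main__":
    # Same 2 L/min flow pushed through progressively smaller effective bores.
    for bore in (5.0, 4.0, 3.0):
        print(f"{bore} mm bore -> {qd_pressure_drop_kpa(2.0, bore):.1f} kPa")
```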

Amphenol Industrial’s Universal Quick Disconnect (UQD) follows OCP standards and specs. The first version used a slide latch, which could be hard to disconnect in tight spaces. The second version introduced a push-button latch, which keeps the same diameter and spec but is easier to use. According to Pinto, the next version will add chamfers on the plug side of the socket to reduce friction. The series features a dry-break mechanism and a compact, low-profile latch design to minimize fluid loss and accidental activation.

UQD: The global standard for fast, secure liquid cooling connections in data centers. (Image: TTI/Amphenol Industrial.)

A second revision of the UQD/UQDB is also in development, aligned with the latest OCP specifications. This update introduces a new interoperability mode in which a UQD plug can be used with a UQDB socket.

In blind-mate systems, where trays slide in and connect without access to the rear of the cabinet, tolerance becomes a determining factor. “What’s interesting about the UQDB—Universal Quick Disconnect Blind Mate—is that it has 1 mm of radial flow misalignment tolerance,” says Pinto. “Even with slight misalignment, the tray will still engage and stay retained by the front of the server tray, whether by bolts or some kind of clamp.”

The next evolution of that design is the Blind Mate Quick Connect (BMQC). “The major difference is that it has 5 mm of radial and 2.7° angular misalignment tolerance,” says Pinto. “It has a much longer funnel and pin.”
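A simple way to see what those figures buy you is a go/no-go check of a tray's expected misalignment against each coupling's stated tolerance, as sketched below. The tolerance numbers come from the quotes above and should be treated as nominal rather than datasheet values; the 2 mm offset and 1° tilt are hypothetical stack-up figures used only to show the comparison.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BlindMateTolerance:
    name: str
    radial_mm: float                      # maximum radial offset the coupling absorbs
    angular_deg: Optional[float] = None   # None = no angular figure given in the article

# Tolerance figures as quoted in the article; nominal, not datasheet values.
UQDB = BlindMateTolerance("UQDB", radial_mm=1.0)
BMQC = BlindMateTolerance("BMQC", radial_mm=5.0, angular_deg=2.7)

def will_engage(tol: BlindMateTolerance, radial_offset_mm: float,
                angular_error_deg: float = 0.0) -> bool:
    """Crude go/no-go check of tray misalignment against a coupling's stated tolerance."""
    if radial_offset_mm > tol.radial_mm:
        return False
    if tol.angular_deg is not None and angular_error_deg > tol.angular_deg:
        return False
    return True

if __name__ == "__main__":
    # Hypothetical stack-up: tray sits 2 mm off-center with a 1 degree tilt.
    for tol in (UQDB, BMQC):
        print(f"{tol.name}: engages = {will_engage(tol, 2.0, 1.0)}")
```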

The Pivot Blind Mate Coupling (PBMC) is still in development. It maintains a radial tolerance on the pin side but adds tolerance on the socket side as well, splitting the compensation between both ends and enabling a return to a smaller form factor.

Amphenol also plans to launch a new Large Quick Connector (LQC), compliant with OCP standards, in Q4 of this year.

“Liquid cooling in the connector side is evolving at breakneck speed,” Pinto says. “In AI, the heat requirements and space constraints are so demanding that there’s constant evolution in how to keep all these servers functioning by cooling them while maintaining a small footprint.”

Where Amphenol differentiates itself is in termination options, offering many variations in thread type, barb fittings, and right-angle configurations. “One competitor has around 18 part numbers around UQDs and we have about 130,” says Pinto. “Also, our lead time is very competitive: six weeks at volume.”

The company also offers PTFE and EPDM hose solutions, along with custom manifolds for distributing coolant. “We provide complete solutions, from connectors and hoses to fully customized manifold designs for every coolant distribution point in a data center (CDUs, server racks, immersion cooling tanks, rear-door heat exchangers and in-row cooling systems), as well as our standard Blind Mate Manifold (ORV3), an OCP-compliant design,” says Pinto.

Pinto encourages engineers designing liquid cooling components to immerse themselves in OCP, especially if they work in the AI space.

“Stay in the know, because this industry is moving faster than anything I’ve ever seen,” says Pinto. “Things are becoming obsolete within months because there’s such high demand for cooling.”

To learn more about Amphenol Industrial, visit TTI.com.