AI chip density has fundamentally reshaped the thermal characteristics of the data hall. LLM training factories create long plateaus of high heat, while inference factories can produce dramatic swings in thermal output. U.S. data center electricity demand is on track to rise roughly 50% from 2025 to 2027, a trajectory that signals severe grid pressure in hyperscale markets.

Against this backdrop, cooling is emerging as one of the most consequential variables operators can influence. Cooling systems account for up to 40% of a data center's total energy, energy that could otherwise power AI compute, making thermal performance inseparable from capacity, efficiency, and growth planning.
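To make the stakes concrete, here is a hypothetical back-of-envelope calculation. The 100 MW facility size and the 1.2 PUE target are illustrative assumptions, not figures measured at any specific site; the 40% overhead share comes from the paragraph above.

```python
# Hypothetical back-of-envelope: power freed for compute as cooling overhead
# shrinks. The 100 MW facility size is an assumption for illustration only.
TOTAL_MW = 100.0

# If cooling and other overhead consume 40% of facility power, only 60 MW
# reaches IT equipment, implying a PUE of total/IT of roughly 1.67.
it_high_overhead = TOTAL_MW * (1 - 0.40)       # 60.0 MW for compute
pue_high = TOTAL_MW / it_high_overhead         # ~1.67

# At a PUE of 1.2, the IT share rises to 1/1.2, about 83% of facility power.
it_low_overhead = TOTAL_MW / 1.2               # ~83.3 MW for compute

freed_mw = it_low_overhead - it_high_overhead  # ~23.3 MW returned to IT load
print(f"{freed_mw:.1f} MW freed for IT load")  # prints "23.3 MW freed for IT load"
```

Under these assumptions, lowering overhead from 40% of facility power to a 1.2 PUE returns roughly a quarter of the site's capacity to compute.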

Across AI and high-density environments, systems will continue to change, but the underlying requirement remains constant: significantly greater thermal and energy management supported by more sophisticated controls. Johnson Controls is building the infrastructure required to scale AI reliably, efficiently, and at pace, starting where performance constraints are felt first: inside the data hall.

From chip to ambient: designing thermal performance for the AI era

As AI rack densities surge, operators are shifting from traditional air cooling toward liquid-based architectures. Modern thermal management begins at the chip, where heat is first captured, often through cold plates that bring liquid directly to the GPU surface. From there, the thermal chain moves through transfer elements such as Johnson Controls Silent-Aire Coolant Distribution Units (CDUs), which circulate liquid between the chip-level capture devices and the facility’s higher-level cooling loops, and ultimately to high-efficiency chillers.

This chip-to-ambient approach is now essential as conventional air systems can no longer manage the heat intensity or rapid thermal fluctuations created by high-density AI workloads.

Coordinating thermal performance for scale

Operators face a systemic thermal challenge, not a component-level one, so they must implement end-to-end management of the thermal chain, monitoring each stage and managing overall power usage effectiveness (PUE), water usage effectiveness (WUE), and resiliency.

High-density AI racks can create intense, rapidly changing heat loads that must be captured, transferred, and rejected as part of a coordinated chain. That chain begins at the chip, where the flow of coolant to the cold-plate manifolds is controlled by Johnson Controls Silent-Aire CDUs. When combined with a YORK YVAM chiller, the transfer and rejection phases can also be effectively managed, and operators can achieve PUE performance near, and in some climates below, 1.2.
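The two efficiency metrics above are simple ratios, and sketching them makes the ~1.2 target easy to interpret. This is a minimal illustration; the function names and sample figures are assumptions for the example, not part of any Johnson Controls tooling.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power (>= 1.0)."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

def wue(site_water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: site water use (liters) per IT energy (kWh)."""
    if it_energy_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return site_water_liters / it_energy_kwh

# Illustrative figures: 11,800 kW of total facility draw against a
# 10,000 kW IT load lands near the ~1.2 PUE cited above.
print(round(pue(11_800, 10_000), 2))  # prints 1.18
```

Note the differing denominators: PUE is dimensionless (power over power), while WUE carries units of liters per kilowatt-hour, so the two cannot be combined into a single score.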

A CDU platform that scales from ~500 kW to over 10 MW, supports in-row, perimeter, and hybrid topologies, and integrates with YORK chiller solutions and optimized controls reflects how operators are translating architectural flexibility into measurable improvements across the thermal chain.

Engineering data centers with confidence

Industry analysts have ranked Johnson Controls among the leading thermal management providers worldwide, a recognition of our commitment to innovating ahead of the curve and delivering solutions that meaningfully free power for computing.

Johnson Controls' experience is established within hyperscale facilities, where our teams work directly with operators to solve the real thermal challenges created by rising rack densities, tighter power limits, and rapidly shifting workloads. That proximity shapes how we design: anticipating the next generation of heat loads, building for higher-temperature liquid loops, and supporting the resiliency strategies operators need for mission-critical AI infrastructure.

As AI workloads intensify, the operators who can demonstrate more tokens per watt will define the next era of digital infrastructure. We are committed to helping you build that advantage, starting in your data hall, where thermal performance matters most.