Prime Minister Anthony Albanese recently welcomed a $20 billion private sector investment into Australia’s data centre infrastructure, reflecting the sector’s critical role in supporting the nation’s digital ambitions. However, as the digital economy accelerates, so does the energy footprint of the data centres that power it. Operators now face a sharp design tension: how to deliver more compute per square metre without exceeding their carbon budgets. 

According to CBRE’s Australia’s Data Centres 2024 report, the country is emerging as one of the fastest-growing data centre markets in the Asia-Pacific region, fuelled by demand for cloud services, artificial intelligence (AI) and data storage. Meeting this demand requires rapidly scaling the nation’s infrastructure through denser data centre designs that support high-performance computing workloads within smaller physical footprints. IDC projects that installed data centre power capacity in Asia-Pacific will grow at a compound annual rate of 14.2%, reaching 94.4 gigawatts by 2028. Within Australia, data centres already account for 5% of national electricity use, with that figure expected to rise to 8% by 2030.

This rapid growth presents significant sustainability challenges, from reducing emissions to ensuring energy efficiency. The result is a growing tension at the heart of modern data centre design: how to balance densification (the need for greater computing capacity) with decarbonisation (the urgent imperative to reduce environmental impact). These priorities can no longer be treated as optional or solved in silos; the solution needs to be hybrid.

Why densification demands a rethink 

To support the rapid adoption of AI, cloud services, and other high-performance computing applications, data centres are increasingly densifying computing power within each rack. Rack densities in the 30 to 40kW range are becoming standard in new builds, with future deployments expected to exceed 130kW. This intensification is pushing legacy infrastructure to its limits, creating substantial thermal loads and exposing the limitations of traditional air-based cooling systems.

Although widely deployed across the industry, air-based cooling systems become inefficient beyond 30-50kW per rack. As density increases, these limitations escalate operating costs and hinder scalability, particularly in Australian regions with high ambient temperatures and grid constraints. To maintain performance and efficiency, facilities must adopt more advanced cooling technologies capable of managing higher heat loads within limited spatial and energy budgets.
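The intuition behind that practical ceiling can be sketched with the standard sensible-heat relation, airflow = power / (density × specific heat × temperature rise). The property values and the 10°C allowable air temperature rise below are textbook approximations chosen for illustration, not figures from this article:

```python
# Rough airflow needed to remove a rack's heat load with air cooling.
# Q = P / (rho * cp * dT). Property values are textbook approximations
# for air at ~25 degrees C (illustrative assumptions, not article data).
AIR_DENSITY = 1.184   # kg/m^3
AIR_CP = 1005.0       # J/(kg*K)
DELTA_T = 10.0        # K, assumed supply-to-return temperature rise

def required_airflow_m3s(rack_power_w: float) -> float:
    """Volumetric airflow (m^3/s) needed to carry away rack_power_w watts."""
    return rack_power_w / (AIR_DENSITY * AIR_CP * DELTA_T)

for kw in (10, 40, 130):
    flow = required_airflow_m3s(kw * 1000)
    cfm = flow * 2118.88  # convert m^3/s to cubic feet per minute
    print(f"{kw:>4} kW rack -> {flow:5.2f} m^3/s ({cfm:,.0f} CFM)")
```

Under these assumptions, a 130kW rack implies several times the airflow of a 40kW rack, an order of magnitude beyond what a conventional rack-level air path comfortably delivers, which is why air cooling alone stops scaling at high densities.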

Concurrently, water efficiency is becoming a critical factor in sustainable data centre design. Many of the region’s fastest-growing data centre markets, including Australia, are already experiencing water scarcity, adding another layer of pressure on operators to adopt cooling systems that are both energy and water-efficient.

Decarbonisation pressures are escalating

Similarly, environmental regulations and investor expectations are intensifying. The Australian Government’s whole-of-government ICT energy performance targets are reshaping procurement standards in the public sector, with similar expectations increasingly mirrored in private tenders. Decarbonisation efforts now extend beyond Scope 1 and 2 emissions to include Scope 3, encompassing supply chain sustainability, embodied carbon, and infrastructure-level resource use. In short, climate accountability is expanding – and fast.

Where densification meets decarbonisation

The convergence of environmental and performance demands is driving rapid interest in liquid cooling systems. These technologies are up to 3,000 times more thermally efficient than air and offer a scalable path forward for higher-density compute. According to Omdia, the global data centre cooling market is projected to reach US$16.87 billion in 2028, with increased adoption of liquid cooling and hybrid air/liquid systems. Verified Market Research valued the Australian liquid cooling market at US$312.42 million in 2024, projecting growth to US$732.41 million by 2032 at a CAGR of 11.2%, fuelled by AI, HPC, and advanced cloud services.
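The thermal-efficiency comparison comes down to volumetric heat capacity: a litre of water absorbs far more heat per degree than a litre of air. A minimal back-of-envelope check, using standard textbook property values (assumptions for illustration, not figures from the reports cited above):

```python
# Volumetric heat capacity (J per m^3 per K) of water vs air.
# Property values are textbook figures at ~25 degrees C (assumptions).
WATER_DENSITY = 997.0   # kg/m^3
WATER_CP = 4186.0       # J/(kg*K)
AIR_DENSITY = 1.184     # kg/m^3
AIR_CP = 1005.0         # J/(kg*K)

water_vhc = WATER_DENSITY * WATER_CP   # ~4.2e6 J/(m^3*K)
air_vhc = AIR_DENSITY * AIR_CP         # ~1.2e3 J/(m^3*K)

ratio = water_vhc / air_vhc
print(f"Water carries roughly {ratio:,.0f}x more heat per unit volume than air")
```

The ratio lands in the mid-thousands, consistent with the "up to 3,000 times" figure quoted in the industry literature.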

Industry attention has turned toward direct-to-chip liquid cooling as a practical and high-performance solution. These systems enable targeted heat removal at the source, making them ideally suited to AI and HPC workloads that generate intense thermal loads. Direct-to-chip cooling offers an efficient way to support increasing rack densities, while optimising floor space and enhancing overall energy performance. It complements traditional air-based methods by addressing the needs of higher-performance computing environments. As these systems can often be retrofitted into existing data centre environments, they offer a realistic path forward for operators looking to modernise without a full infrastructure overhaul. Vertiv is working closely with global partners, including NVIDIA, to scale this approach and deliver the power and thermal architectures needed for next-generation AI deployments.

Designing for dual priorities

Crucially, success lies not in prioritising performance or sustainability in isolation, but in designing for both. The trade-off mindset is obsolete. Operators that fail to embed environmental efficiency into high-density builds face rising operating costs, reputational scrutiny, and compliance challenges. Conversely, those who embrace advanced thermal strategies, such as direct-to-chip cooling, modular builds, and intelligent power systems, will be well-positioned for long-term resilience and market leadership.

Beyond direct-to-chip cooling, a suite of complementary innovations is reshaping how the sector approaches thermal and energy optimisation. These include chilled water systems that use low-GWP refrigerants to reduce direct carbon emissions while enhancing efficiency. For instance, systems such as Vertiv CoolLoop Chillers can significantly reduce direct and indirect CO2 emissions and lower annual energy consumption by 20% through their low-GWP refrigerants and inverter technology. Prefabricated modular infrastructure has also been shown to enable up to 30% faster deployment timelines than traditional builds, while improving energy performance through tighter system integration. Combined with AI-driven energy optimisation, these technologies are enabling operators to reduce emissions per unit of compute without compromising operational efficiency or uptime.

Powering Australia’s digital future

The decisions made today will define Australia’s data economy for decades. Trade-offs between performance and sustainability are no longer acceptable. The next generation of data centres must be designed to deliver high-density compute while meeting increasingly stringent environmental expectations. Achieving this requires proactive collaboration across infrastructure design, energy strategy, and operational execution.

The technology and expertise to address the dual imperatives of densification and decarbonisation are already available. The challenge now is one of strategic implementation. Operators that integrate advanced cooling, modular infrastructure, and emissions-conscious design from the outset will be better positioned to meet regulatory requirements, customer expectations, and investor scrutiny.

As AI transforms every sector, Australia’s infrastructure must do more than keep up – it must lead. The facilities that solve both density and decarbonisation will not just survive – they will set the standard.