
Thirsty servers, hungry investors: how sustainable is AI?

As AI reshapes the world, its hidden thirst for water and soaring infrastructure costs raise urgent questions about the sustainability of the digital revolution.

Author
Sustainable Investment Manager

Reading time: 5 minutes

Date: 4 November 2025

The rapid rise of artificial intelligence (AI) is reshaping industries, economies, and investment strategies. But beneath the surface of this technological revolution lies a complex web of environmental and financial risks – particularly around water and energy consumption. For investors, understanding these dynamics is critical to navigating both the opportunities and the vulnerabilities emerging from AI’s infrastructure demands and business models.

The overlooked thirst of AI

While the energy intensity of AI has received widespread attention, its water footprint remains underappreciated. Data centres – the backbone of AI – consume vast amounts of water, both directly and indirectly. Direct use stems from cooling systems, particularly evaporative cooling, which loses up to 80% of the water used. Indirect use arises from power generation and Graphics Processing Unit (GPU), or chip, manufacturing – both of which are water-intensive processes.

In 2024, data centres directly consumed 95 billion litres of water worldwide. While this is dwarfed by agricultural irrigation, the projected compound annual growth rate of 80% means data centre water use could exceed one trillion litres by 2028 – enough to fill roughly 400,000 Olympic-sized swimming pools. Critically, many data centres are located in regions of medium-to-high water stress, which amplifies localised environmental and operational risks. Water Usage Effectiveness (WUE) is a metric that measures the water efficiency of data centres, and it is particularly useful for comparing efficiency across different locations and cooling technologies.
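As a rough illustration, the projection and the WUE metric can be sketched in a few lines. The 95 billion litres and 80% growth rate come from the text above; the facility figures in the WUE example are hypothetical, chosen only to show how the ratio works:

```python
# Projected data-centre water use: 95 billion litres in 2024,
# compounding at 80% per year through 2028.
base_litres = 95e9
cagr = 0.80
projected_2028 = base_litres * (1 + cagr) ** 4  # four years of growth
print(f"2028 projection: {projected_2028 / 1e12:.2f} trillion litres")

olympic_pool = 2.5e6  # litres in an Olympic-sized swimming pool
print(f"Equivalent pools: {projected_2028 / olympic_pool:,.0f}")  # ~400,000

# Water Usage Effectiveness (WUE): litres of water consumed per
# kilowatt-hour of IT equipment energy. Lower is more water-efficient.
def wue(annual_water_litres: float, annual_it_energy_kwh: float) -> float:
    return annual_water_litres / annual_it_energy_kwh

# Hypothetical facility: 200 million litres/year against a 110 GWh IT load.
print(f"WUE: {wue(200e6, 110e6):.2f} L/kWh")
```

Comparing WUE figures across sites with different cooling technologies is exactly the kind of like-for-like check the metric enables: an evaporatively cooled site will typically show a far higher litres-per-kWh figure than an air-cooled one.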

Energy-water trade-offs and cooling constraints

Cooling technologies present a trade-off between energy and water efficiency. Evaporative cooling is energy-efficient but water-intensive, while air-cooled systems consume more energy but less water. Innovations such as dry coolers, seawater cooling, and reusing waste heat are emerging, but they are highly location-dependent and often come with higher capital expenditure (capex) and operational complexity.

Data centres are essentially racks of servers. Server-level cooling is evolving as AI workloads (specifically the GPU chips required) push the energy consumed within these racks (rack-power densities) beyond 100 kilowatts. As the racks consume more power, they create more heat, which means that traditional air cooling becomes insufficient. Liquid cooling – starting with direct-to-chip systems and then immersion cooling for the most advanced GPUs coming to market in the next couple of years – is likely to become essential. However, these systems introduce new risks, including higher capex costs, complex fluid maintenance, possible cooling failures, regulatory scrutiny [1], and execution challenges.

The ‘freemium’ model: monetisation versus infrastructure costs

AI’s dominant consumer business model is known as ‘freemium’. It’s a business strategy where a company offers a basic version of a product or service for free, and charges for premium features, usage, or access. It poses a unique financial challenge for companies, though. While platforms like ChatGPT boast hundreds of millions of users, only a fraction are paying customers. This creates a disconnect between user growth and monetised demand. And it raises questions about the sustainability of the massive infrastructure investments that are required for AI, particularly as newer and more expensive cooling technologies will be required to keep advancing AI systems.

AI’s capex commitments are booming, with new announcements coming every day from the likes of OpenAI, NVIDIA, Oracle, SoftBank and others. Yet, the monetisation of these platforms remains uncertain, prompting some investors to draw comparisons with the dot-com bubble of the early 2000s. A further complication is the emergence of a ‘shadow AI economy’, where employees opt to use free consumer AI tools they find effective, rather than the enterprise-grade solutions their companies pay for. This behaviour undermines enterprise adoption and revenue growth, potentially slowing the capital spending cycle and making it harder for providers to justify continued infrastructure investment.

Capex intensity and financial strain

The scale of AI infrastructure investment is staggering. Bain & Co estimates that meeting global compute demand will require $500 billion annually in capex. Even if firms shift all technology spending to the cloud and cut sales, marketing, and research and development (R&D) budgets by 20%, there would still be an $800 billion shortfall by 2030 in the revenue needed to underpin AI infrastructure investments.

This financial strain is potentially surfacing in accounting practices. Hyperscalers (large cloud service providers) have been extending the assumed useful life of server assets in their financial filings – an approach that can make profitability appear stronger by spreading costs over a longer period. Amazon has recently bucked this trend: in its latest financial statement, it reversed its previous decision to extend server lifespans and instead shortened the depreciation timeline, explicitly citing AI investments as the reason. The shift potentially signals that AI infrastructure is more capital-intensive than previously assumed, and that earlier lifespan extensions may have understated the true cost. Notably, other hyperscalers followed Amazon’s earlier lead in extending server lifespans, raising questions about whether current assumptions accurately reflect the pace of hardware turnover in the AI era.
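The profitability effect of stretching server lifespans can be shown with simple straight-line depreciation. The dollar amounts and lifespans below are hypothetical, chosen for illustration rather than drawn from any company's filings:

```python
# Straight-line depreciation: annual expense = asset cost / useful life.
# Hypothetical server fleet: $12bn at cost, no salvage value assumed.
cost = 12e9

dep_5yr = cost / 5  # original five-year useful-life assumption
dep_6yr = cost / 6  # extended six-year assumption

# Extending the assumed life cuts the annual expense,
# which lifts reported profit by the same amount each year.
uplift = dep_5yr - dep_6yr
print(f"Annual depreciation at 5 years: ${dep_5yr / 1e9:.1f}bn")
print(f"Annual depreciation at 6 years: ${dep_6yr / 1e9:.1f}bn")
print(f"Reported profit uplift per year: ${uplift / 1e9:.1f}bn")

# Shortening the life, as Amazon did, reverses the effect:
# the annual expense rises and reported profit falls.
```

The mechanics cut both ways: a longer assumed life flatters earnings today at the risk of overstating asset values if AI workloads wear hardware out faster, while a shorter life front-loads the cost and gives a more conservative picture.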

Strategic implications and opportunities

Despite the risks, AI infrastructure growth presents opportunities for investors in adjacent sectors – not just in large technology companies but also in supply chains. Clean technology firms, energy providers, and component suppliers stand to benefit from rising electricity demand and hardware requirements. 

However, the competitive landscape is shifting rapidly. Amazon Web Services’ decision to develop its own in-house cooling solution for NVIDIA’s Blackwell GPUs – rather than relying on traditional external providers – underscores the fast pace of innovation in the sector. This move highlights how major players are increasingly prioritising bespoke infrastructure to optimise performance, which may disrupt established supply chains and challenge conventional providers to adapt quickly. Governments may also play a role, treating AI as strategic infrastructure and offering support that overrides short-term economics. This could create tailwinds for firms aligned with national priorities.

Final thoughts…

Investors should remain vigilant for signs of stress in the AI ecosystem. AI’s transformative potential is undeniable, but its infrastructure demands – particularly around water and energy – require a holistic risk lens. Investors must integrate environmental, technological, and financial indicators into their due diligence process and portfolio construction. The convergence of water stress, energy intensity, and monetisation challenges creates a complex landscape. But with careful analysis and proactive engagement, investors can identify resilient players, and capture long-term value in the AI-driven future.

  1. PFAS (per- and poly-fluoroalkyl substances), for example. Also known as ‘forever chemicals’, these are a group of synthetic chemical compounds that don’t break down in the environment and are known to cause environmental and health issues.
