Data Center Water Usage: Cooling Basics, Real-World Comparisons, and AI Training vs Inference

Data centers use water, but at national scale the direct footprint is small and controllable. Lawrence Berkeley National Laboratory estimates U.S. data centers directly used about 17.5 billion gallons of water in 2023. The United States withdraws roughly 322 billion gallons per day across agriculture, power generation, industry, and cities. Cooling choices drive most on-site water. Evaporative systems use water to save electricity. Chillers save on-site water but use more electricity. A single ChatGPT query uses about 0.000085 gallons of water.

U.S. water use in context

America withdraws about 322 billion gallons of water per day, with irrigation and thermoelectric power dominating that flow. Direct data center water use depends on climate and cooling design, and it is a small fraction of the total: LBNL's 17.5 billion gallons per year works out to about 48 million gallons per day, or roughly 0.015% of total U.S. daily withdrawals.

How much water does a ChatGPT query use?

OpenAI reports that the average ChatGPT query uses about 0.000085 gallons of water and about 0.34 Wh of electricity. That is roughly 11,765 queries per gallon. Readers often want to see how this compares to familiar activities. The table below converts common items to gallons and then to equivalent ChatGPT queries.

Table 1. Water used to run a model (inference), with everyday comparisons

Inference means running a trained model to answer a user request, such as returning a ChatGPT response. Assumption for query conversions: 1 ChatGPT query ≈ 0.000085 gallons of water.

Item                                    Water use (gallons)   Equivalent ChatGPT queries
1 ChatGPT query                         0.000085              1
Videoconference or streaming, 1 hour    0.53 – 3.17           6,212 – 37,294
Brushing teeth, tap left running        4.00                  47,059
Brushing teeth, tap mostly off          0.33                  3,882
1 lb beef hamburger                     660.00                7,764,706
Automobile manufacturing, per vehicle   660.43 – 747.61       7,769,765 – 8,795,412
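
For readers who want to check or extend these conversions, here is a minimal sketch of the arithmetic. The per-query constant and the item figures come from Table 1; the variable names are just illustrative.

```python
# Convert everyday water uses into equivalent ChatGPT queries,
# assuming ~0.000085 gallons of water per query (OpenAI's figure).
GALLONS_PER_QUERY = 0.000085

items = {
    "Brushing teeth, tap left running": 4.00,
    "Brushing teeth, tap mostly off": 0.33,
    "1 lb beef hamburger": 660.00,
}

for name, gallons in items.items():
    queries = gallons / GALLONS_PER_QUERY
    print(f"{name}: {gallons} gal ≈ {queries:,.0f} queries")
# Brushing teeth, tap left running: 4.0 gal ≈ 47,059 queries
# Brushing teeth, tap mostly off: 0.33 gal ≈ 3,882 queries
# 1 lb beef hamburger: 660.0 gal ≈ 7,764,706 queries
```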

Training AI models versus familiar water benchmarks

Training a state-of-the-art AI model means running huge clusters of specialized chips for weeks to create a system that can generate text, images, or code on demand. The process is both energy and water intensive. For a single training run of Grok 4, the independent research group Epoch AI estimates that data center cooling alone used about 750 million liters of water, roughly 198 million gallons, on top of tens of millions of dollars of electricity. And because companies often retrain or fine-tune models, these are not one-time costs. To anchor the scale of that water use, the comparison below sets Grok 4's training run against filling about 300 Olympic-size pools and against a full year of irrigation for a square mile of farmland.

Visual comparison aligned with the gallon conversions in Table 1. Grok 4 water is a third-party estimate from Epoch AI.
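
The unit conversions behind that comparison, as a back-of-envelope sketch: the 750-million-liter figure is Epoch AI's estimate, while the Olympic pool volume (2,500 m³) and the roughly one acre-foot per acre per year of irrigation are standard reference values assumed here, not figures from the source.

```python
# Rough unit conversions for Grok 4's estimated training water use.
liters = 750e6                     # Epoch AI's cooling-water estimate
gallons = liters * 0.264172        # ≈ 198 million gallons

pools = liters / 2.5e6             # Olympic pool: 50 x 25 x 2 m = 2.5 ML
                                   # ≈ 300 pools

acre_ft_gal = 325_851              # gallons in one acre-foot
sq_mile_irrigation = 640 * 1 * acre_ft_gal   # 640 acres x ~1 acre-ft/yr
                                             # ≈ 209 million gallons

print(f"{gallons/1e6:.0f}M gal ≈ {pools:.0f} Olympic pools ≈ one year of "
      f"irrigation for a square mile ({sq_mile_irrigation/1e6:.0f}M gal)")
```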

Why data centers use water

As servers operate, they generate heat that must be dissipated. Air and water can both remove heat, but water transfers heat far more efficiently, which is why many new AI and GPU facilities adopt liquid cooling. Evaporating a small amount of water can reject a large amount of heat from the facility loop, as the quick calculation below shows.
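
A back-of-envelope sketch of that claim, using the textbook latent heat of vaporization of water (about 2,260 kJ/kg); the 100 W server is just an illustrative reference point.

```python
# Heat rejected by evaporating a single gallon of water.
latent_heat = 2260          # kJ/kg, latent heat of vaporization of water
kg_per_gallon = 3.785

heat_kj = latent_heat * kg_per_gallon   # ≈ 8,550 kJ
heat_kwh = heat_kj / 3600               # ≈ 2.4 kWh

print(f"Evaporating 1 gallon rejects ≈ {heat_kwh:.1f} kWh of heat")
# ≈ the heat a 100 W server produces over a full 24-hour day.
```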

Cooling 101: where water is used and whether it is lost

As servers get more powerful they create more heat. Moving that heat out of the room efficiently is the job of the cooling system. Two common approaches are rear-door heat exchangers and direct-to-chip liquid cooling. Here is the plain-English version.

Rear-door heat exchangers

Think of a metal radiator mounted on the back of each rack. Hot air coming out of the servers passes through this “rear door.” Inside the door are small tubes filled with cool water from the facility. The hot air warms the water, so the air going back into the room is much cooler. The warmed water returns to the building plant, where it is cooled again and sent back to the doors. The servers still use air internally, but the heat transfer at the rack is to water, which carries heat much better than air.

Why this helps: it keeps the room comfortable without blasting the whole hall with air conditioning. It also works in existing rooms because you can add doors to standard racks.

Rear-door heat exchanger path: Air-to-liquid rear-door heat exchanger tied to a coolant distribution unit and a facility loop. Source: Vertiv educational article.
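
The physics behind "water carries heat much better than air," sketched with textbook fluid properties (illustrative values, not figures from the Vertiv article):

```python
# Volumetric heat capacity: heat carried by 1 m^3 of fluid per degree C.
# density (kg/m^3) * specific heat (kJ/kg·K)
water = 997 * 4.186     # ≈ 4,170 kJ/(m^3·K)
air = 1.2 * 1.005       # ≈ 1.2 kJ/(m^3·K)

print(f"Water moves ≈ {water / air:,.0f}x more heat per unit volume than air")
# ≈ 3,460x: a thin water-filled rear door can absorb heat that would
# otherwise take enormous volumes of chilled air to carry away.
```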

Direct-to-chip cold plates

Now imagine a car engine with a thin metal plate that carries coolant right on top of the hottest parts. That is the idea here. Small, sealed plates sit on top of CPUs and GPUs. Coolant flows through channels inside each plate and picks up heat directly from the chips. The warmed coolant goes to a nearby box called a coolant distribution unit. There it gives up its heat to the building’s water through a heat exchanger. The server fans do less work because the chips are cooled at the source.

Why this helps: it handles very high-power processors that are hard to cool with air alone. It also lets the building run warmer water, which saves energy.

Direct-to-chip cold plates and CDU: Direct-to-chip cold plates move heat to a coolant distribution unit and then to facility water via a liquid-to-liquid heat exchanger. Source: Vertiv technical explainer.
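
To get a feel for the flows involved, here is a back-of-envelope sketch; the 100 kW rack and the 10 °C coolant temperature rise are assumed example values, not figures from the Vertiv explainer.

```python
# Water flow needed to carry away a rack's heat: Q = m_dot * cp * dT
rack_kw = 100       # assumed rack power
cp = 4.186          # kJ/(kg·K), specific heat of water
dt = 10             # assumed °C rise across the cold plates

m_dot = rack_kw / (cp * dt)      # ≈ 2.4 kg/s, i.e. ≈ 2.4 L/s of water
gpm = m_dot * 60 / 3.785         # ≈ 38 gallons per minute

print(f"A {rack_kw} kW rack needs ≈ {m_dot:.1f} L/s (≈ {gpm:.0f} GPM) of coolant")
```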

Putting it together

  • Rear-door heat exchangers keep the rack exhaust cool by handing heat to facility water at the back of the rack.
  • Direct-to-chip cold plates pull heat off the processors themselves and hand it to facility water through a nearby exchanger.
  • Inside the building the water is recycled. Outside the building a tower may evaporate a small fraction to dump heat to the sky. Make-up water replaces only that small fraction.

How data center facilities reject heat

Servers move heat into a building water loop. The plant’s job is to push that heat outside. There are two main ways to do it, and many sites use a mix of both.

Evaporative towers with waterside economizers

Picture a big outdoor box that acts like a swamp cooler for the whole building. Warm water from inside is sprayed over plastic fill while fans pull air through. A small portion of the water evaporates. That phase change removes heat very efficiently, so the returning water is cooler.

  • Rule of thumb. About 1.8 gallons per ton-hour of cooling is lost to evaporation (worked through in the sketch after this list).
  • Small extra losses. A tiny mist, called drift, can escape with the exhaust air. Modern gear keeps drift around 0.05 to 0.2 percent of circulating flow, and advanced eliminators can push it below 0.005 percent.
  • Waterside economizer. In cool weather you can skip the compressors and use a plate heat exchanger so tower-cooled water chills the building loop directly. This is often called free cooling because it uses very little electricity.
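
Here is that rule of thumb worked through for a hypothetical facility; the 5 MW IT load is an assumed example, and the rest follows from the figure above.

```python
# Daily evaporation for a tower-cooled facility, using the
# ~1.8 gal per ton-hour rule of thumb above.
it_load_mw = 5                 # assumed facility IT load
kw_per_ton = 3.517             # 1 refrigeration ton = 3.517 kW
gal_per_ton_hour = 1.8

tons = it_load_mw * 1000 / kw_per_ton          # ≈ 1,422 tons of cooling
gal_per_day = tons * 24 * gal_per_ton_hour     # ≈ 61,000 gallons/day

print(f"A {it_load_mw} MW load evaporates ≈ {gal_per_day:,.0f} gal/day")
```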

Why people like this option: it is very energy efficient, which can lower costs and often total environmental impact. The tradeoff is the need for make-up water to replace evaporation, drift, and controlled blowdown for water quality.

Chillers

A chiller is a giant refrigerator. It uses compressors to move heat from the building water into the outside air or a separate condenser loop.

  • Water impact. Chillers can run with little ongoing on-site water use.
  • Energy impact. Compressors draw more electricity, especially in hot weather. Whether this is the better choice depends on the local climate and how water intensive the grid is.

Why people choose this option: it reduces on-site water use and offers tight temperature control, at the expense of higher power.
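
One way to reason about that grid dependence is to put the tower's direct water and the chiller's indirect water in the same units. In the sketch below, the extra compressor energy and the grid's water intensity are assumed round numbers chosen to illustrate the tradeoff, not sourced figures.

```python
# Direct vs indirect water per ton-hour of cooling. Both inputs are
# assumptions for illustration, not measured values.
tower_direct = 1.8        # gal/ton-hour evaporated on site (rule of thumb)
chiller_extra_kwh = 0.4   # assumed extra compressor energy, kWh/ton-hour
grid_water = 1.0          # assumed grid water intensity, gal/kWh

chiller_indirect = chiller_extra_kwh * grid_water
print(f"Tower: {tower_direct} gal on site vs chiller: "
      f"{chiller_indirect} gal upstream, per ton-hour")
# On a water-hungry grid, some of the chiller's 'saved' water simply
# moves upstream to the power plant; on a dry-cooled or renewable-heavy
# grid, the chiller genuinely reduces total water use.
```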

What is the best heat rejection technique for your data center?

There is no single best answer. In cool or dry climates, towers with economizers often win on energy. In very hot or water-stressed locations, chillers or hybrid sequences can make more sense.

  • Many campuses blend both and switch modes with the seasons.
  • Towers use a little water to save a lot of electricity.
  • Chillers save on-site water but spend more electricity.

Case study: Frontier at Oak Ridge

Oak Ridge National Laboratory’s Frontier (the second-fastest supercomputer in the world) uses warm-water liquid cooling for most of the compute load. Cooling towers and plate heat exchangers provide economizer hours across the seasons, and chillers cover the rest. The image below shows the mechanical layout and heat-rejection paths inside the building and on the roof.

ORNL diagram highlighting chilled-water plants, cooling plants, and compute halls.

Managing and minimizing water use

At facility scale, data centers do use meaningful amounts of water, far more than any individual runs through their taps in a day. In the national picture, however, their direct water use is small compared with agriculture and several heavy industries. The bigger drivers of personal water footprints are the foods we eat and the goods we buy: skipping meat just one day a week saves far more water than avoiding ChatGPT or other AI tools altogether. Choosing efficient cooling designs and operating them with care keeps data center water use low while delivering real computing value. The practical takeaway is to keep perspective, improve operations where it matters, and focus personal choices where they move the needle most.
