Efficient cooling is a fundamental need for data centers and HPC containers because of the immense heat generated by the GPUs and servers.
**To improve the efficiency** of cooling systems and reduce energy consumption in data centers, several technologies are now under development.
**_Immersion cooling._** New approaches include **full-immersion and direct-to-chip/cold-plate cooling**. In the former, IT equipment (such as a server) is immersed entirely in a **nonconductive, nonflammable dielectric liquid** that acts as a coolant and dissipates the heat the equipment generates. In the latter (and more targeted) approach, a metal plate (or heat sink) is mounted on high-thermal-emission components (such as chips) in the servers; the plate transfers their heat to a liquid coolant, which carries it away. By maintaining consistent, uniform temperatures, full-immersion cooling can cope with higher power densities (upward of 100 kilowatts) and raise the average performance of central processing units by as much as 40 percent. Riot Platforms uses immersion technology at a Bitcoin-mining farm in its Whinstone facility in Texas, and hyperscalers are developing and testing it.
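The heat-removal capacity of a liquid loop like the ones above follows from the basic relation Q = ṁ·c·ΔT. A minimal sketch, assuming water as the coolant and an illustrative 10 °C temperature rise (both assumptions, not vendor figures), of the flow rate needed to carry a given heat load:

```python
# Back-of-the-envelope coolant flow estimate for liquid cooling.
# All numbers here are illustrative assumptions, not vendor specs.

def required_flow_rate_lpm(heat_load_kw: float,
                           delta_t_c: float = 10.0,
                           specific_heat_j_per_kg_c: float = 4186.0,  # water
                           density_kg_per_l: float = 1.0) -> float:
    """Coolant flow (liters per minute) needed to remove heat_load_kw
    of heat with a coolant temperature rise of delta_t_c."""
    # Q = m_dot * c * dT  ->  m_dot = Q / (c * dT)
    mass_flow_kg_s = heat_load_kw * 1000.0 / (specific_heat_j_per_kg_c * delta_t_c)
    return mass_flow_kg_s / density_kg_per_l * 60.0

# A 100 kW heat load with a 10 degC coolant rise needs roughly 143 L/min:
print(round(required_flow_rate_lpm(100), 1))
```

This is why high-density racks push operators toward liquid cooling: moving the same 100 kW with air would require far larger volumetric flows, since air's volumetric heat capacity is orders of magnitude lower than water's.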
**_Artificial intelligence and machine learning._** Hyperscalers such as Google have used artificial intelligence/machine learning (AI/ML) algorithms to focus cooling where it is most needed, depending on factors such as workload intensity and changing power loads across racks. Early adopters have reported 20 to 30 percent reductions in [[Power Usage Effectiveness - PUE]]. AI/ML applications have also balanced the load on uninterruptible-power-supply units by changing power routes to servers throughout the day to optimize cooling and save energy.
**_Waste-heat applications._** To reduce a data center’s carbon footprint, these applications use heat from data centers for other purposes, such as district heating. Amazon uses recycled heat from a data center in Ireland to supply district heat in Dublin, for example, and Facebook says that the heat from its Danish data center is warming 6,900 homes.
- **Dry coolers** for mining solutions, which cost roughly $100K to $150K per 1 MW.
- **More complex HPC solutions**, which use non-redundant or redundant cooling systems depending on server density and usage. Non-redundant cooling is cheaper but riskier, while redundant cooling ensures uptime at a higher cost.
- A budget of around **$250K for 4 racks** in a 40 ft container with a cooling system that can handle **50 kW per rack** (non-redundant cooling).
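The figures in the list above imply a rough cost per kilowatt of cooled IT load. A minimal arithmetic sketch using only the numbers quoted there (the variable names are illustrative):

```python
# Rough cost check for the container example above.
# Inputs are the figures quoted in the list; nothing else is assumed.

racks = 4
kw_per_rack = 50                      # non-redundant cooling capacity
container_budget_usd = 250_000        # 40 ft container, 4 racks

total_it_load_kw = racks * kw_per_rack          # 200 kW total
cost_per_kw = container_budget_usd / total_it_load_kw

# For comparison, the dry-cooler range quoted for mining:
dry_cooler_per_kw = (100_000 / 1000, 150_000 / 1000)  # $100-$150 per kW

print(total_it_load_kw, cost_per_kw, dry_cooler_per_kw)
```

At $1,250 per kW, the containerized HPC setup is an order of magnitude more expensive per kilowatt than the mining-grade dry coolers, which is consistent with the tighter temperature and uptime requirements of HPC workloads.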