Contrary to popular belief, data centers aren’t maxed out around the clock. As energy researcher Tyler H. Norris explains on the *Power & Policy* blog, load factor (the ratio of average to peak load) often gets conflated with capacity utilization (average load relative to rated capacity), making data center demand look flatter and closer to 100% than it really is. In practice, most facilities operate well below their rated maximum thanks to redundancy, maintenance, and fluctuating workloads. This matters: overestimating utilization leads to overbuilt infrastructure and missed opportunities for smarter demand management.
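To make the distinction concrete, here is a minimal sketch with purely illustrative numbers (not measurements from any real facility) showing how a site can post a high load factor while its capacity utilization stays much lower:

```python
# Illustrative numbers only -- chosen to show the load factor vs. utilization gap.
rated_capacity_mw = 100.0   # provisioned / contracted capacity
peak_load_mw = 85.0         # highest observed draw
average_load_mw = 60.0      # mean draw over the period

load_factor = average_load_mw / peak_load_mw                 # average vs. the site's own peak
capacity_utilization = average_load_mw / rated_capacity_mw   # average vs. what was built

print(f"Load factor:          {load_factor:.0%}")           # ~71% -- looks "flat"
print(f"Capacity utilization: {capacity_utilization:.0%}")  # 60% -- the headroom Norris points to
```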
## 1. Turning Idle Capacity into Value
Navon is designing high-density, modular facilities for advanced AI and HPC workloads. But unlike traditional operators, we don’t accept idle capacity as inevitable. By colocating flexible compute loads (like Bitcoin mining) alongside mission-critical AI servers, Navon fills spare megawatts dynamically. This drives **Infrastructure Usage Effectiveness (IUE)** closer to 100%, keeping expensive electrical and cooling systems productive instead of sitting underutilized.
![[Pasted image 20250820095016.png]]
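IUE is not a standardized industry metric, so the sketch below simply treats it as delivered compute load over provisioned capacity (an assumed definition, with made-up numbers) to show how interruptible load lifts it:

```python
def iue(ai_load_mw: float, flex_load_mw: float, provisioned_mw: float) -> float:
    """Assumed definition: fraction of provisioned capacity doing useful work."""
    return (ai_load_mw + flex_load_mw) / provisioned_mw

provisioned_mw = 50.0   # illustrative modular facility
ai_load_mw = 32.0       # mission-critical AI/HPC draw at this moment

print(f"AI only:       {iue(ai_load_mw, 0.0, provisioned_mw):.0%}")   # 64%

# Fill the spare megawatts with interruptible compute, keeping a safety margin.
flex_load_mw = provisioned_mw * 0.95 - ai_load_mw
print(f"AI + flexible: {iue(ai_load_mw, flex_load_mw, provisioned_mw):.0%}")  # 95%
```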
## 2. Balancing Spiky AI/HPC Workloads
AI and HPC workloads are inherently lumpy. Training jobs spike, then pause for checkpoints; inference demand swings with user traffic. A recent NVIDIA-backed trial by *Emerald AI* showed GPU clusters could flex power draw by **25%** during grid stress events without degrading service — proof that AI factories already hold hidden flexibility. Navon builds on this insight: flexible compute absorbs slack when AI demand dips, and instantly throttles back when clusters surge. The outcome is a steadier profile for the grid and higher utilization for the facility.
![[Pasted image 20250820095114.png]]
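A minimal control loop for that behavior could look like the following sketch; the function name, the 95% headroom cap, and the example loads are assumptions for illustration, not Navon’s actual control logic:

```python
def flex_setpoint_mw(ai_load_mw: float, site_cap_mw: float,
                     flex_max_mw: float, headroom: float = 0.95) -> float:
    """Power budget for interruptible compute: whatever the AI clusters leave free,
    capped by the flexible fleet's own size and a safety headroom on the site."""
    spare = headroom * site_cap_mw - ai_load_mw
    return max(0.0, min(flex_max_mw, spare))

# A training job checkpoints (AI draw dips), then a new job launches (draw surges).
for ai_load in [40.0, 22.0, 47.0]:
    budget = flex_setpoint_mw(ai_load, site_cap_mw=50.0, flex_max_mw=20.0)
    print(f"AI load {ai_load:4.1f} MW -> flexible compute budget {budget:4.1f} MW")
```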
## 3. Monetizing Flexibility and Supporting the Grid
Because interruptible compute can be powered up or down in sync with market signals, Navon can hash more when renewables are abundant or power is cheap, and curtail instantly when scarcity drives prices higher or grid operators call for relief. This not only extracts more value per megawatt but also positions Navon as a **grid-supportive partner** — a role conventional colocation providers struggle to play.
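As a rough sketch of that dispatch rule (hypothetical thresholds and signals, not an actual market integration):

```python
def dispatch(price_usd_mwh: float, grid_event: bool,
             price_ceiling: float = 60.0) -> str:
    """Decide what the interruptible fleet should do in this interval."""
    if grid_event or price_usd_mwh > price_ceiling:
        return "curtail"           # free the megawatts for the grid / avoid expensive power
    return "run_flexible_compute"  # cheap or abundant power: hash / run batch jobs

print(dispatch(price_usd_mwh=25.0, grid_event=False))   # run_flexible_compute
print(dispatch(price_usd_mwh=180.0, grid_event=False))  # curtail (scarcity pricing)
print(dispatch(price_usd_mwh=30.0, grid_event=True))    # curtail (operator calls for relief)
```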
## So What?
The “flat 90% load factor” assumption is a myth. As Norris argues, real capacity utilization sits well below the headline load factors, and that gap is exactly where opportunity lies. Navon’s hybrid model turns underuse into an asset: pairing Tier III–like reliability for AI clients with flexible compute that monetizes slack capacity and supports grid stability. The future isn’t about running data centers at max all the time. It’s about running them smart, with the right mix of steady and flexible loads.
---
## Sources
- Tyler H. Norris, *Power & Policy*: [[Puzzle of low data center utilisation]]
- *Emerald AI* (NVIDIA-backed): ["AI Factories Provide Demand Flexibility to the Grid"](https://blogs.nvidia.com/blog/ai-factories-flexible-power-use/)