# Modular Data Centers MOC
## Cooling Economics
- [[DX vs Chilled Water Crossover]]
- [[Condenser Cable Economics]]
- [[PUE at Altitude — ASHRAE Derate]]
## High-Density Compute
- [[Density Tiering Principle]]
- [[Delta AI CDC Container]]
- [[Rear Door Heat Exchangers]]
## GPU Vendor Dynamics
- [[Nvidia Gatekeeper Model]]
- [[Tenstorrent — Alternative Silicon Path]]
## Power Architecture
- [[48V DC and OCP Bus Bar]]
- [[PV as IT Load Headroom]]
- [[UPS Autonomy Philosophy — Generator Bridge]]
## Factory & Deployment
- [[Factory Pre-Testing and CFD Validation]]
- [[Transport Constraints as Design Inputs]]
- [[FK-5-1-12 Fire Suppression]]
## Monitoring
- [[Delta DCIM — U-Level Granularity]]
---
# Directional Arrows of Progress
**Cooling Architecture Selection**
`Air → DX (dedicated condensers) → Chilled Water → Hybrid`
DX wins below ~1MW on cost. The crossover isn't about performance — it's about economics. Long condenser cable runs ($2K each × 24 cables) still cost far less than the €200-300K chilled water upgrade.
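A back-of-envelope version of that comparison, using the figures above; the cable counts beyond 24 and the face-value currency comparison are assumptions for illustration, not quotes.

```python
# Rough crossover sketch for the DX vs chilled-water decision.
# Condenser cable cost (~$2K/run) and the €200-300K chilled water upgrade
# come from this note; cable counts and currency handling are illustrative.

CABLE_COST = 2_000                    # per long condenser cable run
CW_UPGRADE = (200_000, 300_000)       # chilled water plant upgrade range

def dx_cabling_cost(n_cables: int) -> int:
    """Incremental cost DX adds: one long condenser cable run per condenser."""
    return n_cables * CABLE_COST

if __name__ == "__main__":
    for n_cables in (24, 48, 96):
        dx = dx_cabling_cost(n_cables)
        verdict = "DX wins" if dx < CW_UPGRADE[0] else "re-quote chilled water"
        print(f"{n_cables:>2} cables: DX ~{dx:>7,} vs CW {CW_UPGRADE[0]:,}-{CW_UPGRADE[1]:,} -> {verdict}")
```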
**Rack Density Separation**
`One-size-fits-all → Tiered envelopes (colo / inference / training)`
Don't try to cool 160kW racks in the same thermal envelope as 20kW colo racks. Separate them physically — different containers, different cooling, different operational models. The Delta approach: standard MDC for ≤20kW, AI CDC bolt-on for 30-40kW, and a separate conversation entirely for GB300 at 160kW.
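A minimal sketch of that tiering rule as a lookup; the thresholds follow the Delta split described above, while the function name and envelope labels are hypothetical.

```python
# Hypothetical envelope selector; thresholds follow the tiering above
# (≤20kW standard MDC, 30-40kW AI CDC bolt-on, 160kW-class handled separately).

def select_envelope(rack_kw: float) -> str:
    if rack_kw <= 20:
        return "standard MDC (colo envelope)"
    if rack_kw <= 40:
        return "AI CDC bolt-on (inference envelope)"
    return "dedicated high-density build (training envelope, separate design)"

if __name__ == "__main__":
    for kw in (8, 20, 35, 160):
        print(f"{kw:>3} kW/rack -> {select_envelope(kw)}")
```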
**GPU Access Model**
`Open hardware → Certified installers → Gatekeeper lock-in`
Nvidia's B200/B300/GB300 require authorized installation partners (Asus, Super Micro, Dell), 6-month installer training, and a blanket refusal otherwise. This is hardening, not softening. Alternative silicon (Tenstorrent at ~12.5kW/chip, no gatekeeper) becomes strategically important even at lower performance.
**Power Distribution**
`AC PSU per server → Rack-level 48V DC → OCP bus bar (eliminate CRPs)`
The OCP bus bar architecture converts 48V→12V at the rack, removing individual server power supplies. Fewer components, fewer failure points, better efficiency.
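A toy chain-efficiency comparison to make the "fewer conversion stages" point concrete; the per-stage efficiencies are assumed placeholder values, not vendor or OCP data.

```python
# Toy chain-efficiency comparison: per-server AC PSUs vs rack-level 48V bus bar.
# Stage efficiencies are illustrative assumptions, not measured values.

from math import prod

ac_psu_chain = {"distribution": 0.99, "server AC PSU": 0.92, "board VRMs": 0.90}
busbar_chain = {"rack rectifier to 48V": 0.97, "bus bar": 0.995,
                "48V->12V conversion": 0.96, "board VRMs": 0.90}

for name, chain in (("AC PSU per server", ac_psu_chain), ("48V OCP bus bar", busbar_chain)):
    print(f"{name:>18}: {prod(chain.values()):.1%} end-to-end")
```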
**Monitoring Resolution**
`Rack-level → Per-server IPMI → U-level DCIM`
Delta's DCIM platform resolves down to the individual U position, pulling IPMI data from the AST2600 BMC on ASRock boards. It replaces Zabbix-style polling with purpose-built DC infrastructure management.
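A minimal out-of-band polling sketch, assuming ipmitool is installed and the BMCs are reachable over IPMI-over-LAN; the hosts and credentials are placeholders, and this is generic sensor polling, not Delta's DCIM API.

```python
# Generic IPMI-over-LAN sensor poll (placeholder hosts/credentials; not the
# Delta DCIM API, just the underlying data the BMC exposes).
import subprocess

BMC_HOSTS = ["10.0.10.11", "10.0.10.12"]   # one BMC per server

def read_sensors(host: str, user: str = "admin", password: str = "changeme") -> str:
    cmd = ["ipmitool", "-I", "lanplus", "-H", host,
           "-U", user, "-P", password, "sdr", "elist"]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    for host in BMC_HOSTS:
        print(f"--- {host} ---")
        print(read_sensors(host))
```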
---
# Key Principles
1. **Cooling is the cost lever below 1MW** — not compute, not power. The DX vs chilled water decision cascades into cable routing, condenser placement, redundancy topology, and total cost.
2. **Transport constraints are design inputs, not afterthoughts** — 3.5m width, 20m length max without escort, ~1 month sea freight. These shape the physical architecture from day one (a quick dimension check is sketched after this list).
3. **Factory beats field** — pre-fabrication with full thermal simulation (CFD) and on-site condition replication before shipping. 6 months build + 1 month ship vs 12-18 months traditional.
4. **Solar is engineering headroom, not a sustainability checkbox** — PV offsets 2kW/rack, creating 22kW usable IT load from a 20kW grid allocation. It's a power budget input.
5. **Minimize stored energy, maximize mechanical generation** — 5-minute UPS autonomy bridges to generators. Lead-acid, 10-year cycle, 2N redundant. Extended battery autonomy is expensive and unnecessary when generator start is reliable (a sizing sketch follows this list).
6. **Vendor lock-in is structural** — Nvidia's gatekeeper model means your choice of GPU dictates your choice of installer, timeline, and cost. Plan around it or pick different silicon.
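A tiny pre-design check against the road limits in principle 2; the module dimensions in the example are hypothetical.

```python
# Hypothetical check of a module design against the unescorted road limits
# in principle 2 (3.5 m width, 20 m length); example dimensions are made up.

LIMITS = {"width_m": 3.5, "length_m": 20.0}

def fits_unescorted(width_m: float, length_m: float) -> bool:
    return width_m <= LIMITS["width_m"] and length_m <= LIMITS["length_m"]

print(fits_unescorted(width_m=3.4, length_m=12.2))   # True
print(fits_unescorted(width_m=3.6, length_m=12.2))   # False: escort or redesign
```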
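A back-of-envelope sizing of the 5-minute generator bridge in principle 5; the IT load figure is an assumption, and real sizing would add inverter losses and end-of-life derating.

```python
# Back-of-envelope energy for a 5-minute bridge at 2N (principle 5).
# The 500 kW critical load is an assumed example; derating/efficiency omitted.

it_load_kw = 500
autonomy_min = 5
energy_kwh_per_string = it_load_kw * autonomy_min / 60   # each 2N string carries full load
print(f"~{energy_kwh_per_string:.0f} kWh per string, ~{2 * energy_kwh_per_string:.0f} kWh installed (2N)")
```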
### Related
- [[Data Center MoC]]
- [[Navon MoC]]
- [[Modular Data Center Design Principles]]
- [[Open Source Hyperscaler MoC]]