Tech-enabled economic growth depends on compute, which in turn depends on energy. As an asset class, computing has one of the most energy-intensive demand profiles.
Compute happens in data centres. Data centres traditionally require high CAPEX, have long build timelines and are energy intensive: global data centres consume roughly 2% of the world's power today, a share that could reach 10% within a decade. A single hyper-scale data centre consumes as much power as 80,000 homes.
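The 80,000-homes comparison can be sanity-checked with back-of-envelope arithmetic. The figures below (a ~100 MW hyper-scale facility and a ~1.25 kW average household load) are illustrative assumptions, not numbers from this document:

```python
# Back-of-envelope check of the hyper-scale comparison.
# Both constants are assumptions for illustration only.
HYPERSCALE_MW = 100   # assumed draw of one hyper-scale data centre
HOUSEHOLD_KW = 1.25   # assumed average household load

homes_equivalent = HYPERSCALE_MW * 1_000 / HOUSEHOLD_KW
print(f"{homes_equivalent:,.0f} homes")  # → 80,000 homes
```

Under these assumptions the claim is consistent: 100 MW spread across 1.25 kW households is exactly 80,000 homes.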
The drain on the energy system to power technology infrastructure only increases as that technology is adopted as a transformation enabler, especially in growth economies and the Global South. About 40% of today's data centres are in the US, representing almost 17 GW of capacity, estimated to grow to 35 GW by 2030. Ref: [[Data Center Opportunities]]
Powering that infrastructure must be intentional and thoughtful, partnering with upstream energy producers to drive down both the energy impact and the cost of this future digital infrastructure. Compute costs govern how accessible technology is and how sustainable tech business models are.
Depending on the energy mix used, this makes traditional data centre growth carbon intensive. It also raises the overheads that drive up compute costs, through debt servicing and high cooling costs, the latter accounting for roughly 40% of data centre OPEX. This is why it matters how the data centre world architects the infrastructure needed to support its growth.
We need to bring down the compute costs and the carbon intensity of data centres, while making the technology within them accessible and meaningful for the people who buy the energy capacity / compute, known as Off-takers.
This "arbitrage" could be created through a tailored Service Level Objective (SLO) for an Off-taker that incorporates:
- **The Shell:** modular data centres that have a lower CAPEX, faster build timelines and are easily configurable.
- **The Power:** sustainable energy (solar, wind, geothermal, bio-energy) with energy storage capacity - also called 24/7 Power Purchase Agreements (PPAs) - in the right locations.
However, the challenges right now are:
- **Sustainable energy economics** depend on site selection and local negotiations (PPAs, incentives)
- **Pooling Energy Demand:** Right now energy capacity can only be purchased in large blocks (in the high megawatts (MW) to gigawatts (GW)), which gives hyper-scalers (AWS, MSFT, GCP, etc.) an inherent advantage in the market. Investors with smaller data centres could aggregate their purchasing power to optimise energy procurement and storage. Some might also consider investing in renewable-energy plants that could supply consortiums of smaller players.
- **Location & Incentives:** The best places to build renewable assets are not necessarily where the demand is. Today renewable energy producers underwrite their projects against two revenue streams: power sales and production tax credits. This creates an odd market dynamic: renewables are built and optimised for maximum output independent of demand. Such sites generate consistently, but the rate at which they sell the power can turn negative - they effectively pay someone to take the electricity in order to collect the tax credits. Co-locating data centres helps them use this stranded energy.
- **Other Energy Sources:** Other sources of energy could also be used in an off-grid manner.
- **Modular data centre players** struggle to secure off-takers & co-location partners - they lack knowledge across the stack and miss the sector-specific application value in their go-to-market needed to attract off-takers. This makes it challenging to construct deals with favourable terms.
- **The market is globally fragmented**, leaving large room for partnerships in projects with SPV structures. There is a real divide between the Global South and the Global North, with limited cross-pollination of technology and approaches.
- **To fund the projects**, hardware / data centre set-ups need to be financed and energy agreements need to be made; both depend on securing commitments from Off-takers, ideally long-term. These off-takers vary:
- **Cost-driven:** Need specific high-performance hardware, software, tooling and availability configurations in the SLAs
- **Criticality-driven:** Need reliability and high-uptime for foundational activities usually with governments and financial services.
- **Sustainability-driven**: Want to curtail emissions irrespective of price, or because of existing incentives.
- High-performance storage is needed at the edge, backed by stable power, rather than concentrated with Big Tech.
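The Pooling Energy Demand idea above can be sketched as a toy aggregator: smaller operators pool MW commitments until they clear a minimum block size a producer is willing to sell. The 50 MW threshold and the individual commitments below are hypothetical, not figures from this document:

```python
# Toy sketch of demand pooling across smaller data centre operators.
# The threshold and commitments are illustrative assumptions only.
PPA_MIN_BLOCK_MW = 50  # hypothetical minimum block a producer will sell

commitments = {"operator_a": 12, "operator_b": 20, "operator_c": 25}
pooled_mw = sum(commitments.values())

if pooled_mw >= PPA_MIN_BLOCK_MW:
    print(f"Pool of {pooled_mw} MW clears the {PPA_MIN_BLOCK_MW} MW block")
else:
    print(f"Pool of {pooled_mw} MW is short of {PPA_MIN_BLOCK_MW} MW")
```

No single operator here could buy a block alone, but the 57 MW pool can, which is the aggregation advantage the bullet describes.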
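The negative-pricing dynamic under Location & Incentives follows from simple per-MWh arithmetic: a producer's effective revenue is the market price plus the production tax credit, so selling at a negative price can still be profitable. The $27.50/MWh credit and the prices below are illustrative assumptions, not figures from this document:

```python
# Sketch of why a renewable site can sell at negative prices and still
# profit: every generated MWh earns the production tax credit (PTC).
# The $/MWh values are illustrative assumptions, not market data.
def effective_revenue(market_price_per_mwh: float,
                      ptc_per_mwh: float = 27.5) -> float:
    """Net revenue per MWh = market price + tax credit."""
    return market_price_per_mwh + ptc_per_mwh

# Even paying $10/MWh to offload power, the producer nets $17.50/MWh:
print(effective_revenue(-10.0))  # → 17.5
```

Under these assumptions, any market price above -$27.50/MWh still leaves the producer in the black, which is why generation stays decoupled from local demand.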
**The Waste Methane / Flaring Reduction Opportunity:** Mitigating methane emissions from flaring to power data centres. Flaring is a global problem in oil fields: roughly 14 billion cubic feet of gas per day is burned off and completely wasted - about 65 GW of power, roughly 6x the power footprint of the Bitcoin network. That is a meaningful amount of waste energy causing harm without creating a positive impact. We bring a revenue stream to a wasted asset and drive down both the emissions footprint and the cost of compute infrastructure that has energy as its largest operating expense. [[Natural Gas Flaring and Impact]]
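The 65 GW figure can be reproduced from the flared volume with two assumed conversion factors (neither is stated in this document): a heat content of ~1,000 BTU (~1.055 MJ) per standard cubic foot of natural gas, and ~38% gas-generator efficiency.

```python
# Rough conversion of the flaring figure into electric power.
# Heat content and generator efficiency are illustrative assumptions.
FLARED_SCF_PER_DAY = 14e9      # 14 billion cubic feet per day
ENERGY_J_PER_SCF = 1.055e6     # ~1,000 BTU per standard cubic foot
GENERATOR_EFFICIENCY = 0.38    # assumed gas-to-electric efficiency
SECONDS_PER_DAY = 86_400

thermal_w = FLARED_SCF_PER_DAY * ENERGY_J_PER_SCF / SECONDS_PER_DAY
electric_gw = thermal_w * GENERATOR_EFFICIENCY / 1e9
print(f"~{electric_gw:.0f} GW electric")  # → ~65 GW electric
```

With these assumptions, the flared gas corresponds to ~171 GW of thermal power, or ~65 GW electric, matching the figure above.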
**Who would fund these projects and why?**
Global spending on data centres is expected to reach $49 billion by 2030.
- Credit firms that offer debt / loans and are interested in compute / data - e.g. [Upper90](https://upper90.io/) gave Crusoe Cloud a loan for H100 GPUs. CoreWeave raised $2.3b in debt from Magnetar Capital, Blackstone & DigitalBridge.
- High-net-worth individuals looking for differentiation in an application space such as AI / crypto-mining, who want to capture the price arbitrage of sustainable energy and modular data centres.
- Large corporates that are sustainability conscious and keen on edge computing from a security standpoint; probably also those with waste methane to mitigate.
- Energy providers in regional areas that have credits to spend or want to create value from their energy capacity to attract investment.
- Universities looking to expand their footprint and secure dedicated access for upskilling.
**What are the applications?**
Different applications have different reliability constraints:
- Latency-agnostic applications: Since we will be building data centres where others haven't, applications that tolerate higher latency fit best, because the places we will be working in mostly need new networking.
- AI workloads, mostly inference-based compute workloads as Generative AI models are operationalised.
- Crypto-mining
**Our insights on the future:**
1. Training and Inference workloads will be real-time, with inference workloads being the majority.
2. Containerised high performance data centers will become ubiquitous.
3. Application-specific chips, such as Groq's, will be in high demand.
4. High-performance storage will be an essential component of criticality-driven demand.
5. Application-specific value will drive sector adoption of compute and the economic progress of growth economies.