# The OT Intelligence Layer
Every industrial facility runs on operational technology that generates massive volumes of data and uses almost none of it. SCADA systems, DCS controllers, historians, CMMS platforms: they collect terabytes per day across tens of thousands of sensors. The data sits in silos, formatted differently per vendor, disconnected from engineering context. This is the single biggest unlock in [[Maintenance CapEx]]: turning trapped operational data into predictive intelligence.
The structural opportunity: hardware controllers in process industries are black boxes by design. They were built for control, not for learning. The companies that build the translation layer between these closed systems and modern AI will own the intelligence layer for physical infrastructure. Think of it as the Palantir pattern from [[Knowledge Graphs for Industrial Data]] applied to energy, chemicals, and heavy industry.
# Why physics matters here
Generic ML breaks in process environments. A temperature anomaly in an LNG train means something completely different depending on ambient conditions, throughput rate, and equipment age. [[Physics Informed Neural Operators]] provide the structural backbone: neural networks constrained by thermodynamic and fluid dynamic equations. This reduces false alarms dramatically because the model understands why a reading is abnormal, not just that it deviates from a statistical baseline.
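The idea of constraining a model with physics can be sketched in miniature. Below is a toy loss that combines a data-misfit term with a steady-state energy-balance residual (T_out = T_in + Q/(m·cp)) for a hypothetical heat-exchange step; the function name, the balance chosen, and all constants are illustrative assumptions, not any particular vendor's implementation:

```python
def physics_informed_loss(pred_outlet, measured_outlet,
                          inlet_temp, heat_duty, mass_flow,
                          cp=2.2, lam=0.5):
    """Toy physics-informed loss for one prediction (illustrative).

    data_loss penalizes disagreement with the sensor reading;
    physics_loss penalizes violating the steady-state energy balance
    T_out = T_in + Q / (m * cp), so the model is scored against what
    thermodynamics allows, not just a statistical baseline.
    cp (kJ/kg·K) and the weight lam are assumed values.
    """
    data_loss = (pred_outlet - measured_outlet) ** 2
    physics_outlet = inlet_temp + heat_duty / (mass_flow * cp)
    physics_loss = (pred_outlet - physics_outlet) ** 2
    return data_loss + lam * physics_loss
```

A reading that merely drifts from its historical average raises only the data term; a reading that contradicts the energy balance raises both, which is why the hybrid scores fewer false alarms than a purely statistical baseline.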
This connects directly to [[Digital Twins]]. The hybrid approach (physics structure with data-driven calibration) is the architecture that works. Pure physics models are too expensive to build per-asset. Pure data models are too fragile when conditions shift. The winner combines both, and the question is [[Deployment Velocity]]: can you stand up a calibrated, physics-informed model for a new asset in days rather than months?
# The economics
[[Predictive Maintenance in O&G]] puts the numbers in context: 30-40% of O&G budgets go to maintenance. A single unplanned shutdown on an LNG train costs $5-10M per day. Detecting a bearing anomaly 18 days early prevents $700K in production loss. The ROI case for predictive intelligence is asymmetric: the cost of the software is a rounding error compared to one avoided event.
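The asymmetry is easy to make concrete with the note's own figures. The sketch below assumes a hypothetical $500K/yr software spend (not a number from the source) against one avoided two-day shutdown at the midpoint $7.5M/day plus the $700K bearing save:

```python
def avoided_loss_roi(shutdown_cost_per_day, days_avoided,
                     early_detection_savings, software_cost_per_year):
    """Back-of-envelope ROI: value of avoided downtime and early
    detection relative to annual software cost. Inputs in dollars."""
    avoided = shutdown_cost_per_day * days_avoided + early_detection_savings
    return avoided / software_cost_per_year

# One avoided 2-day shutdown at $7.5M/day plus the $700K bearing save,
# against an assumed $500K/yr software cost: a ~31x return.
ratio = avoided_loss_roi(7_500_000, 2, 700_000, 500_000)
```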
This is a [[Bottleneck Business]]. The bottleneck is not compute or algorithms. It's contextualizing raw OT data at the asset level, mapping sensor relationships to physical processes, and building the domain-specific baselines that make predictions reliable. See [[Bespoke Engineering in Industrial AI]]: the moat is in the data and domain encoding, not the model architecture.
# Where this sits in the stack
The [[AI Verification]] framework applies here with teeth. In process safety, wrong predictions kill people. This is the top-right quadrant from [[Where Domain Evals Matter Most]]: high penalty for failure, high verification opacity. Only experienced plant operators can validate whether a predictive model's alert is genuine or a false positive from sensor drift. The eval library for industrial predictive intelligence is written by process engineers, not ML teams.
[[Autonomous Agents]] enter the picture at the workflow layer. Once you have reliable predictions, the next step is automated work order generation, parts procurement triggers, and maintenance scheduling optimization. The agent layer sits on top of the intelligence layer. But the intelligence layer has to be trusted first. Automation without trust is just faster mistakes.
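Gating automation on trust can be expressed as a simple policy: only predictions that clear calibrated confidence and lead-time thresholds trigger an automated work order, and everything else routes to an operator. The dataclass, field names, and threshold values below are hypothetical, a sketch of the pattern rather than any real CMMS integration:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    asset_id: str
    failure_mode: str
    days_to_failure: int
    confidence: float  # calibrated model confidence in [0, 1]

def maybe_create_work_order(pred, min_confidence=0.9, min_lead_days=7):
    """Trust-gated automation: high-confidence predictions with enough
    lead time become work orders; everything else goes to operator
    review. Thresholds are illustrative assumptions."""
    if pred.confidence >= min_confidence and pred.days_to_failure >= min_lead_days:
        return {"action": "create_work_order",
                "asset": pred.asset_id,
                "failure_mode": pred.failure_mode,
                "schedule_within_days": pred.days_to_failure}
    return {"action": "route_to_operator_review", "asset": pred.asset_id}
```

The design choice matters: keeping a human-review path for everything below the threshold is what prevents "faster mistakes" while the intelligence layer earns trust.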
# The GCC timing catalyst
Qatar's expansion of LNG capacity from 77 MTPA to 142 MTPA by 2030 is a greenfield wave. New mega-trains transitioning from construction to operations need predictive systems from day one. This is the ideal entry point: embed during commissioning, when there's no legacy system to displace. The same pattern applies across GCC energy expansion, Saudi Aramco's downstream diversification, and Abu Dhabi's industrial growth.
This maps to the broader [[Maintenance CapEx]] thesis: the Middle East faces 2-3x accelerated asset degradation from heat, sand, and humidity. Maintenance is non-discretionary. The question is whether operators maintain reactively (expensive, dangerous) or predictively (cheaper, safer, data-rich).
# Three make-or-break questions for any company in this space
**1. Can you prove deployment velocity at scale?**
This is the difference between a platform and a consultancy. [[Deployment Velocity]] is the metric. If customer #1 takes 6 months and customer #5 still takes 5 months, there's no learning curve and no scalable business. The <30 day claim needs to be validated: can an edge gateway and asset template library genuinely produce live predictive insights that fast, or does each facility require bespoke data engineering that resets the clock? [[Auto-Generated Physics Models]] remain the holy grail here, but nobody has demonstrated this at production scale.
**2. Do you own the data flywheel or rent it?**
Every deployment should make the next one better. Facility-specific baselines, failure mode libraries, cross-asset pattern recognition. If the models are trained per-customer with no knowledge transfer between sites, each deployment is isolated and the company scales linearly. The defensible play is a shared representation layer: anonymized operational patterns that compound across deployments while keeping customer data segregated. This is the [[Wright's Law]] test applied to industrial AI: does cumulative deployment experience reduce cost-to-deploy and improve accuracy?
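The Wright's Law test has a concrete form: each doubling of cumulative deployments should multiply unit deployment cost by a fixed learning rate. The sketch below assumes a 0.8 learning rate (a common illustrative figure, not a number from the source):

```python
import math

def deploy_cost(first_cost, n, learning_rate=0.8):
    """Wright's Law: cost_n = first_cost * n^b with b = log2(learning_rate),
    so each doubling of cumulative deployments n multiplies unit cost by
    learning_rate. A learning_rate of 1.0 means no learning at all."""
    b = math.log2(learning_rate)
    return first_cost * n ** b
```

At a 0.8 learning rate, deployment #4 costs 64% of deployment #1; at a learning rate of 1.0 (the per-customer, no-knowledge-transfer case), deployment #100 costs exactly what deployment #1 did, and the business scales linearly.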
**3. Who validates the predictions?**
[[Domain Experts as Eval Builders]] is the crux. A predictive alert that a compressor will fail in 14 days is only valuable if the operations team trusts it enough to act. Trust comes from demonstrated accuracy, which comes from domain-specific validation, which requires process engineers who understand the specific equipment and operating context. Companies that build a feedback loop between operator expertise and model calibration will compound trust. Companies that ship generic anomaly scores will drown in false positives and lose credibility with the operators who matter. See [[False Alarm Problem in F&G]] for the precedent.
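The false-positive trap is a base-rate effect, and a one-line Bayes calculation shows why generic anomaly scores drown operators. The rates below are illustrative assumptions, not measured figures:

```python
def alert_precision(sensitivity, false_alarm_rate, base_rate):
    """P(real failure | alert) via Bayes' rule. With rare failures,
    even a seemingly good detector produces mostly false alarms."""
    true_positives = sensitivity * base_rate
    false_positives = false_alarm_rate * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Assumed rates: 95% sensitivity, 5% false-alarm rate, failures in 1%
# of inspection windows. Only ~16% of alerts are real failures --
# the other ~84% erode operator trust.
p = alert_precision(0.95, 0.05, 0.01)
```

This is why the operator feedback loop is the product: every validated false positive that tightens the false-alarm rate moves precision far more than marginal gains in model sensitivity do.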
---
Links:
- [[Maintenance CapEx]]
- [[Predictive Maintenance in O&G]]
- [[Knowledge Graphs for Industrial Data]]
- [[Digital Twins]]
- [[Physics Informed Neural Operators]]
- [[Auto-Generated Physics Models]]
- [[Deployment Velocity]]
- [[AI Verification]]
- [[Domain Experts as Eval Builders]]
- [[Where Domain Evals Matter Most]]
- [[Bottleneck Business]]
- [[Bespoke Engineering in Industrial AI]]
- [[Autonomous Agents]]
- [[Wright's Law]]
- [[False Alarm Problem in F&G]]
- [[Convenience-Control Tradeoff]]
- [[Fire and Gas Detection MOC]]
- [[F&G Safety Opportunity MOC]]
- [[First Principles and Mental Models MoC]]
---
Tags: #deeptech #kp #firstprinciple #investing