# Human-in-the-Loop Systems
In industrial AI, recommendations go to a human operator who decides whether to act. The system suggests; the operator executes. This is both a safety requirement and a commercial trap.
The safety case: a bad recommendation in a chemical plant can cause explosions, environmental releases, or equipment damage worth millions. Human oversight is non-negotiable for most industrial processes.
The attribution trap: when the system recommends and the operator sometimes overrides, you can never cleanly attribute outcomes to the AI. Did the 5% yield improvement come from the AI's recommendations, or from the fact that operators were paying closer attention because a new system was watching? The Hawthorne effect is real and, without a control group, impossible to separate from the AI's contribution.
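The confound can be made concrete with a toy simulation. The effect sizes below are illustrative assumptions, not measurements: a naive before/after comparison credits the AI with the sum of both effects.

```python
import random

random.seed(1)

# Toy simulation of the attribution trap: after deployment, yields rise both
# because of the AI's recommendations AND because operators pay closer
# attention. All effect sizes are illustrative assumptions.
AI_EFFECT = 2.0         # true yield gain from AI recommendations (pp)
HAWTHORNE_EFFECT = 3.0  # yield gain from heightened operator attention (pp)

def run_period(n, ai_on, attentive):
    """Simulate n batch yields under given conditions, with process noise."""
    base = 80.0
    return [base
            + (AI_EFFECT if ai_on else 0.0)
            + (HAWTHORNE_EFFECT if attentive else 0.0)
            + random.gauss(0, 1.0)
            for _ in range(n)]

before = run_period(500, ai_on=False, attentive=False)
after = run_period(500, ai_on=True, attentive=True)  # both effects active

naive_uplift = sum(after) / len(after) - sum(before) / len(before)
print(f"Naive before/after uplift: {naive_uplift:.1f} pp "
      f"(true AI effect: {AI_EFFECT} pp)")
```

The before/after comparison reports roughly 5 pp of uplift even though the AI's true contribution is 2 pp; the remaining 3 pp is operator attentiveness that would fade once the novelty wears off.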
The pricing trap: if the operator can override everything, the system is advisory. Advisory systems are hard to price at premium levels because the customer feels they're still doing the work. Moving from advisory to autonomous is where the real value unlock happens, but it requires regulatory and organizational trust that takes years to build.
The evidence test: when evaluating claims of "X% improvement" from any industrial AI system, always ask for controlled baselines or counterfactual methodology. Before-after comparisons without controls are weak evidence. Running the optimizer vs. a simple heuristic on historical data is the minimum credible test.
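The minimum credible test can be sketched as a counterfactual backtest: replay historical states and score the optimizer's setpoints against a simple heuristic's under the same yield model. Everything below is a hypothetical illustration: the policies, the quadratic yield model, and its optimum at 352 are assumptions standing in for a real plant model.

```python
import random

random.seed(0)

# Hypothetical historical log of process states (here just a temperature reading).
history = [{"temp": random.uniform(340.0, 360.0)} for _ in range(1000)]

def ai_setpoint(state):
    # Assumed AI policy: drive temperature to a learned optimum of 352.
    return 352.0

def heuristic_setpoint(state):
    # Credible baseline: hold the historical median setpoint of 350.
    return 350.0

def simulated_yield(state, setpoint):
    # Toy counterfactual yield model: peaks at 352, falls off quadratically.
    return 95.0 - 0.05 * (setpoint - 352.0) ** 2

ai_yield = sum(simulated_yield(s, ai_setpoint(s)) for s in history) / len(history)
base_yield = sum(simulated_yield(s, heuristic_setpoint(s)) for s in history) / len(history)

print(f"AI policy mean yield:        {ai_yield:.2f}%")
print(f"Heuristic mean yield:        {base_yield:.2f}%")
print(f"Uplift vs credible baseline: {ai_yield - base_yield:.2f} pp")
```

The point of the structure, not the numbers: the uplift is measured against a heuristic that any competent operator could run, on the same data and the same counterfactual model, so the comparison cannot be inflated by Hawthorne-style attentiveness effects.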
Related: [[Reinforcement Learning for Process Control]], [[Industrial MLOps]], [[Industrial AI MOC]]
---
Tags: #deeptech #firstprinciple