# Surrogate Models

A surrogate model is a lightweight approximation of an expensive simulation. You train a neural network (or Gaussian process, or radial basis function) on the outputs of a physics simulation, then use the surrogate for real-time optimization instead of running the full simulation every time.

Think of it as compression. The physics simulation is the lossless file. The surrogate is the JPEG. You lose some fidelity but gain 1000x speed.

Why they matter for [[Industrial AI MOC]]: real-time process optimization needs millisecond response times. Physics simulations take minutes or hours. Surrogates bridge that gap.

The scaling problem: most published surrogate methods work for 5-15 input variables. Industrial processes have 28+. Delaunay triangulation (a common adaptive sampling technique) becomes mathematically impractical above ~12 dimensions. Production systems typically use deep ensembles or Gaussian processes instead.

The real question when evaluating surrogate model claims: what method are you actually running in production, and at what dimensionality?

Related: [[Digital Twins]], [[Neural Network Compression of Simulations]], [[Simulation-Based Optimization]]

---

Tags: #deeptech #firstprinciple
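
The train-then-substitute idea can be sketched with a minimal radial basis function surrogate in pure NumPy. This is a toy 1D illustration, not a production method: the `simulation` function is a cheap stand-in for an expensive solver, and the kernel width `eps` is an arbitrary assumption you would tune per problem.

```python
import numpy as np

# Stand-in for an expensive physics simulation (hypothetical analytic function).
def simulation(x):
    return np.sin(3 * x) + 0.5 * x**2

# Run the "expensive" simulation at a handful of sample points.
X_train = np.linspace(-2, 2, 15)
y_train = simulation(X_train)

# Fit a Gaussian RBF surrogate: solve K w = y for the interpolation weights.
eps = 1.0  # kernel width (assumption; tune per problem)
K = np.exp(-eps * (X_train[:, None] - X_train[None, :]) ** 2)
weights = np.linalg.solve(K, y_train)

def surrogate(x):
    # Cheap evaluation: a weighted sum of kernels centered on the training points.
    k = np.exp(-eps * (np.asarray(x)[..., None] - X_train) ** 2)
    return k @ weights

# The surrogate matches the simulation at the training points exactly
# and approximates it in between, at a fraction of the evaluation cost.
x_test = np.linspace(-2, 2, 200)
err = np.max(np.abs(surrogate(x_test) - simulation(x_test)))
print(f"max abs error on test grid: {err:.4f}")
```

In an optimization loop you would call `surrogate(x)` thousands of times per second where each `simulation(x)` call might take minutes; this is the JPEG-vs-lossless trade the note describes. Note the same interpolation scheme is what breaks down in high dimensions: covering 28+ inputs with training samples is exactly the scaling problem above.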