# Neural Network Compression of Simulations

Training a neural network to approximate a physics simulation's output. The simulation runs offline to generate training data, the neural network learns the input-output mapping, and you deploy the network in place of the simulation.

Speed gain: 1,000-10,000x. A computational fluid dynamics simulation that takes hours compresses into a neural network that runs in milliseconds. This makes real-time optimization feasible.

The fidelity trade-off: compression loses information. The surrogate is accurate within its training domain but can produce nonsense for conditions it hasn't seen. Industrial processes occasionally hit unusual operating conditions, which is exactly when you need the model most.

Tools: NVIDIA Modulus (physics-informed neural networks), Siemens Simcenter HEEDS, open-source frameworks. This is rapidly becoming commodity infrastructure, not a differentiator on its own.

The real question: how do you generate the training data efficiently? Naive approaches require thousands of simulation runs. Adaptive sampling (choosing which new simulations to run based on where the surrogate is least accurate) dramatically reduces this cost. This is where [[Surrogate Models]] research like Bayesian optimization and Delaunay triangulation matters.

Related: [[Surrogate Models]], [[Digital Twins]], [[Simulation-Based Optimization]], [[Industrial AI MOC]]

---
Tags: #deeptech
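A minimal sketch of the adaptive-sampling loop described above, using a bootstrap ensemble's disagreement as the "least accurate" signal. The toy `expensive_sim` function, the polynomial surrogate, and every parameter here are illustrative assumptions, not the API of any specific tool:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_sim(x):
    """Stand-in for an hours-long simulation run (hypothetical toy function)."""
    return np.sin(3.0 * x) * np.exp(-0.5 * x)

def fit_ensemble(X, y, n_members=20, deg=4):
    """Fit an ensemble of polynomial surrogates on bootstrap resamples."""
    coeffs = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), len(X))  # bootstrap resample
        coeffs.append(np.polyfit(X[idx], y[idx], deg))
    return coeffs

def predict(coeffs, x):
    """Ensemble mean and spread; the spread is the uncertainty proxy."""
    preds = np.array([np.polyval(c, x) for c in coeffs])
    return preds.mean(axis=0), preds.std(axis=0)

# Small initial design, then adaptively add runs where the
# surrogate members disagree with each other the most.
X = np.linspace(0.0, 4.0, 12)
y = expensive_sim(X)
candidates = np.linspace(0.0, 4.0, 400)

for _ in range(10):
    ensemble = fit_ensemble(X, y)
    _, std = predict(ensemble, candidates)
    x_new = candidates[np.argmax(std)]       # most uncertain input
    X = np.append(X, x_new)
    y = np.append(y, expensive_sim(x_new))   # one more simulation run

mean, std = predict(fit_ensemble(X, y), candidates)
```

Each loop iteration costs one simulation run instead of a whole grid; in a real pipeline the ensemble would be replaced by a Gaussian process or neural-network ensemble, but the acquire-where-uncertain structure is the same.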