# Analog-Aware Training
Parent: [[Analog In-Memory Computing]]
Training a neural network so that it produces correct outputs when deployed on noisy, drifty, low-precision analog hardware — not just on idealised digital silicon. The method is conceptually simple: inject the expected hardware noise into the forward pass during training, so the model learns weights that are robust to it. The execution is where the claims live or die.
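A minimal sketch of that noise-injection forward pass, in NumPy. The function name, the multiplicative-Gaussian noise shape, and `noise_std` are illustrative stand-ins, not a calibrated device model; the key idea is that fresh noise is drawn on every forward call (mimicking a fresh analog read) while the stored weights stay clean.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(x, w, noise_std=0.05, training=True):
    """Linear forward pass with hardware-style weight noise injected at read time.

    Each call draws fresh noise, as an analog array would on each read;
    the trained weights w themselves are never perturbed. noise_std is a
    placeholder for a device-calibrated noise model.
    """
    if training:
        # Multiplicative read noise: larger conductances see larger absolute noise.
        w_eff = w * (1.0 + noise_std * rng.standard_normal(w.shape))
    else:
        w_eff = w
    return x @ w_eff
```

Gradients flow through `w_eff` as if it were `w`, so the optimiser is pushed toward weight configurations whose outputs survive the perturbation.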
Three things make it work:
- A realistic noise model. The noise distribution must reflect the actual memory technology: PCM drift is not the same as ReRAM read noise, and a generic Gaussian proxy will produce optimistic results.
- Quantisation-in-the-loop. Weights, activations, and accumulators must all be quantised to match the analog array's effective precision, not just the weights.
- Temperature and retention modelling. If the deployed model is expected to hold accuracy for years, training must simulate drift over that timescale.
When done well, analog-aware training can close most of the gap between idealised digital accuracy and noisy analog accuracy. A 1-2 percentage point drop is realistic on standard benchmarks. When done badly — noise model too generous, drift ignored, only weights quantised — you get a paper result that collapses in deployment.
The honest test is not benchmark accuracy at t=0. It is benchmark accuracy measured on real silicon, at operating temperature, after the realistic retention time for the target application.
## Related
- [[Crossbar Arrays]]
- [[Conductance Noise and Drift]]
---
Tags: #hardware #analog #training #kp