# Analog In-Memory Computing
Parent: [[Model Compression & Edge AI MOC]]
The most radical departure from von Neumann computing in the AI stack. The core insight is that matrix multiplication — the dominant operation in neural networks — can be performed physically inside a memory array. Encode weights as conductances and apply voltages representing inputs: Ohm's Law does each multiply (a cell's current is its conductance times the applied voltage), and Kirchhoff's Current Law does the accumulate (cell currents sum along each column wire). No data movement between memory and compute, and in principle orders of magnitude lower energy per operation than a digital GPU.
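The physics maps directly onto a matrix-vector product. A minimal NumPy sketch of an idealized (noiseless) crossbar, with made-up conductance and voltage ranges chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x3 crossbar: rows are input lines, columns are output lines.
# Each cell's conductance G[i, j] (siemens) encodes one weight.
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances, S (illustrative range)
V = rng.uniform(0.0, 0.2, size=4)         # input voltages on the rows, V

# Ohm's law per cell:      I_cell = G[i, j] * V[i]          (the multiply)
# Kirchhoff's current law: I[j]   = sum_i G[i, j] * V[i]    (the accumulate)
I = V @ G  # column output currents, A -- one MAC per column, "for free"

# Sanity check: the array computes exactly a matrix-vector product.
assert np.allclose(I, np.einsum("i,ij->j", V, G))
```

Signed weights don't fit this picture directly, since conductances are non-negative; the usual trick is a differential pair of cells per weight, with the two column currents subtracted.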
The tradeoff is noise. Analog is always noisy, and neural networks have to be trained or adapted to tolerate that noise. The claims usually live or die on how honest the noise model is.
## Key Concepts
- [[Crossbar Arrays]] — the physical substrate for in-memory MAC
- [[Ohm's Law and Kirchhoff's Law as MAC operations]]
- [[Conductance Noise and Drift]] — the analog error model over time and temperature
- [[Analog-Aware Training]] — baking noise tolerance into the model
- [[ADC/DAC Overhead]] — the often-ignored cost of conversion at the array periphery
- Memory technologies: [[Phase-Change Memory (PCM)]], [[ReRAM]], [[MRAM]], [[Flash-based analog]]
- [[von Neumann Bottleneck]] — the problem analog compute is trying to dissolve
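The analog-aware training idea above can be sketched in a few lines: inject a fresh weight perturbation on every forward pass, so the trained weights are robust to the perturbation they will meet on the array. The noise scale `sigma` here is an assumed placeholder, not a measured device figure:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_forward(W, x, sigma=0.05):
    """Forward pass with multiplicative Gaussian weight noise,
    mimicking programming/read noise on analog conductances.
    sigma=0.05 is an assumed noise scale for illustration."""
    W_noisy = W * (1.0 + sigma * rng.standard_normal(W.shape))
    return W_noisy @ x

# Toy comparison of the clean and noise-injected forward pass.
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
clean = W @ x
noisy = noisy_forward(W, x)
```

In a real training loop the noisy forward pass replaces the clean one, and gradients flow through the perturbed weights; the same hook is where a simulated drift or quantization model would go.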
## Key Questions
- What memory technology is being used? Each has a different noise, retention, and write-endurance profile.
- What is the effective bit-precision at the array level, and how does it degrade over time?
- How is the model trained to be tolerant of analog noise — on-chip or with a simulated noise model?
- What is the end-to-end energy per inference, including ADC/DAC and digital periphery?
- Which workloads actually map well to crossbar MAC? Convolutions and dense GEMMs, yes. Attention and dynamic routing, harder.
- How are weights updated in deployment? Many analog substrates have limited write endurance.
- What happens at temperature extremes or after years of drift?
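The drift question has a standard first-order model for PCM specifically: conductance decays as an empirical power law in time. A sketch, with the drift exponent `nu` set to a commonly cited ballpark value rather than a measured device parameter:

```python
import numpy as np

def drifted_conductance(G0, t, t0=1.0, nu=0.05):
    """Empirical PCM drift model: G(t) = G0 * (t / t0) ** (-nu).
    nu ~ 0.05 is an assumed, literature-ballpark exponent;
    real devices vary with material and programmed state."""
    return G0 * (t / t0) ** (-nu)

G0 = 50e-6  # programmed conductance at t0, S (illustrative value)
for t in (1.0, 3600.0, 86400.0 * 365):  # 1 s, 1 hour, ~1 year
    print(f"t = {t:>12.0f} s  G = {drifted_conductance(G0, t) * 1e6:.2f} uS")
```

The slow decay is why effective bit-precision degrades over deployment: the mapping from conductance back to weight shifts under the model unless it is periodically recalibrated or the decoder compensates for drift.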
## Reading
- Sebastian et al., "Memory devices and applications for in-memory computing" (Nature Nanotechnology, 2020)
- Burr et al., IBM research on PCM-based DNN accelerators
- Mythic AI technical papers on analog matrix multiply
- Joshi et al., "Accurate deep neural network inference using computational phase-change memory" (Nature Communications, 2020)
- Any recent IEEE paper on crossbar-based inference accuracy under drift
---
Tags: #hardware #analog #compute #edge #kp