Analog in-memory computing (AIMC) is an emerging approach that uses analog memory devices, such as phase-change memory (PCM) and resistive RAM (ReRAM), to accelerate neural network training and inference. AIMC exploits the physical attributes of memory devices arranged in crossbar arrays to perform matrix-vector multiplications in place, eliminating the data-movement bottleneck between memory and processing units.

The IBM Analog Hardware Acceleration Kit (AIHWKit) is an open-source Python library for simulating AIMC hardware, providing accurate models of non-ideal device characteristics and peripheral circuit effects. This makes it possible to explore how hardware constraints affect the accuracy attainable when AIMC is used for AI workloads.
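
As a minimal sketch of what this looks like in practice (assuming AIHWKit is installed, e.g. via pip install aihwkit; the layer sizes are illustrative), a single analog fully connected layer behaves like any other PyTorch module:

```python
# Minimal sketch: one analog fully connected layer simulated by AIHWKit.
import torch
from aihwkit.nn import AnalogLinear

# A 4-input, 2-output layer whose weights live on a simulated analog tile.
layer = AnalogLinear(4, 2)

x = torch.rand(3, 4)  # batch of 3 input vectors
y = layer(x)          # the matrix-vector product runs on the simulated tile
print(y.shape)        # torch.Size([3, 2])
```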

Key Features of AIHWKit

  • Seamlessly integrates AIMC simulations into PyTorch for defining and training neural networks
  • Provides hardware-calibrated noise models based on measurements of real PCM and ReRAM devices
  • Allows detailed configuration of tile sizes, peripheral circuits, and material properties (see the configuration sketch after this list)
  • Supports inference evaluation with long-term effects such as conductance drift and 1/f noise
  • Implements state-of-the-art analog training algorithms such as Tiki-Taka and Mixed Precision
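
As a rough illustration of the configuration bullet above (a sketch; class and field names are taken from recent AIHWKit releases and may differ across versions), device behavior and tile dimensions are set through an RPU configuration object:

```python
# Sketch: customizing tile size, read noise, and device model through
# an RPU configuration (names per recent AIHWKit releases).
from aihwkit.nn import AnalogLinear
from aihwkit.simulator.configs import SingleRPUConfig
from aihwkit.simulator.configs.devices import ConstantStepDevice

rpu_config = SingleRPUConfig(device=ConstantStepDevice())
rpu_config.mapping.max_input_size = 512   # max crossbar tile input size
rpu_config.mapping.max_output_size = 512  # max crossbar tile output size
rpu_config.forward.out_noise = 0.02       # output (read) noise strength

layer = AnalogLinear(4, 2, rpu_config=rpu_config)
```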

Performing Inference with AIHWKit

For inference, the toolkit models noise from analog-to-digital conversion, conductance drift over time, and statistical variations in conductance programming. Models trained in floating point can be converted to analog versions to estimate inference accuracy, and the impact of mitigation techniques such as drift compensation can be evaluated.
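
A sketch of this workflow, assuming the PCM noise model and global drift compensation classes found in recent AIHWKit releases (the network, g_max value, and one-hour evaluation time are illustrative):

```python
# Sketch: convert a float model to analog, program it with PCM-like
# noise, and drift the weights before evaluation (names per recent
# AIHWKit releases; the model itself is illustrative).
import torch
from torch import nn
from aihwkit.nn.conversion import convert_to_analog
from aihwkit.simulator.configs import InferenceRPUConfig
from aihwkit.inference import PCMLikeNoiseModel, GlobalDriftCompensation

rpu_config = InferenceRPUConfig(
    noise_model=PCMLikeNoiseModel(g_max=25.0),     # hardware-calibrated PCM noise
    drift_compensation=GlobalDriftCompensation(),  # global drift compensation
)

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
analog_model = convert_to_analog(model, rpu_config)

analog_model.eval()
analog_model.program_analog_weights()                  # programming noise
analog_model.drift_analog_weights(t_inference=3600.0)  # drift to t = 1 hour

y = analog_model(torch.rand(8, 64))
```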

Training Neural Networks with AIMC using AIHWKit

For training, detailed device models capture the conductance response to programming voltage pulses. Popular analog training algorithms are implemented that mitigate the effects of limited device precision and non-ideal switching behavior. The toolkit enables exploring tradeoffs between accuracy, material specifications, and algorithmic hyperparameters.
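
As an illustration (a sketch; the Tiki-Taka configuration below uses the compound-device classes from recent AIHWKit releases, and the network and hyperparameters are illustrative):

```python
# Sketch: analog SGD training with a Tiki-Taka device configuration
# (names per recent AIHWKit releases; the network is illustrative).
import torch
from aihwkit.nn import AnalogLinear
from aihwkit.optim import AnalogSGD
from aihwkit.simulator.configs import UnitCellRPUConfig
from aihwkit.simulator.configs.devices import TransferCompound, SoftBoundsDevice

# Tiki-Taka: gradients accumulate on one device pair and are
# periodically transferred to a second pair that holds the weight.
rpu_config = UnitCellRPUConfig(
    device=TransferCompound(
        unit_cell_devices=[SoftBoundsDevice(), SoftBoundsDevice()],
    )
)

model = AnalogLinear(4, 2, rpu_config=rpu_config)
optimizer = AnalogSGD(model.parameters(), lr=0.1)
optimizer.regroup_param_groups(model)

x, target = torch.rand(8, 4), torch.rand(8, 2)
for _ in range(10):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), target)
    loss.backward()
    optimizer.step()  # weight update applied as device pulse sequences
```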

Conclusion

The AIHWKit enables rapid design space exploration of AIMC hardware without costly fabrication. By providing configurable and realistic models, it facilitates developing robust training algorithms and hardware architectures to unlock the potential of AIMC for energy-efficient AI acceleration. The open-source toolkit lowers the barrier for research in this emerging field.

Citations

Manuel Le Gallo, Corey Lammie, Julian Buchel, Fabio Carta, Omobayode Fagbohungbe, Charles Mackin, Hsinyu Tsai, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui, and Malte J. Rasch, "Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference," July 18, 2023.
