High-dimensionality matrix-vector multiplication (MVM) is a dominant kernel in signal-processing and machine-learning computations that are being deployed in a range of energy- and throughput-constrained applications. In-memory computing (IMC) exploits the structural alignment between a dense 2D array of bit cells and the dataflow in MVM, enabling opportunities to address computational energy and throughput. Recent prototypes have demonstrated the potential for 10× benefits in both metrics. However, fitting computation within an array of constrained bit-cell circuits imposes a number of challenges, including the need for analog computation, efficient interfacing with conventional digital accelerators (enabling the required programmability), and efficient virtualization of the hardware to map software. This article provides an overview of the fundamentals of IMC to better explain these challenges and then identifies promising paths forward among the wide range of emerging research.
Authors: Naveen Verma, Hongyang Jia, Hossein Valavi, Yinqi Tang, Murat Ozatay,
Lung-Yen Chen, Bonan Zhang, and Peter Deaville
IEEE Solid-State Circuits Magazine (Volume: 11, Issue: 3, Summer 2019)
https://doi.org/10.1109/MSSC.2019.2922889
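To make the "structural alignment" mentioned in the abstract concrete, here is a minimal NumPy sketch (not from the article) of the MVM dataflow an IMC array exploits: weights stay in place, one per bit cell, the input vector is broadcast along the rows, and each column accumulates its own dot product so that only the output vector, rather than every weight, moves out of the array. The array dimensions and the ±1 value encoding are illustrative assumptions.

```python
import numpy as np

# Hypothetical dimensions: a 128x64 bit-cell array (rows take inputs, columns produce outputs).
N_ROWS, N_COLS = 128, 64
rng = np.random.default_rng(0)

# Weights stored in place, one value per bit cell (idealized here as +/-1).
W = rng.choice([-1, 1], size=(N_ROWS, N_COLS)).astype(float)
# Input vector broadcast along the rows (wordlines / input drivers).
x = rng.choice([-1, 1], size=N_ROWS).astype(float)

# Conventional digital MVM: every weight is read out and moved to the processor.
y_digital = W.T @ x

# IMC-style dataflow: each column accumulates its own dot product in place,
# so only N_COLS results (not N_ROWS * N_COLS weights) leave the array.
y_imc = np.array([np.dot(W[:, c], x) for c in range(N_COLS)])

assert np.allclose(y_digital, y_imc)
print(y_imc[:8])
```

In an actual IMC array the per-column accumulation happens in the analog domain (e.g., as charge or current summed on a bitline), which is precisely the source of the analog-computation and digital-interfacing challenges the abstract raises.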