In-memory computing (IMC) addresses the cost of accessing data from memory in a manner that introduces a tradeoff between energy/throughput and computation signal-to-noise ratio (SNR). However, low SNR has posed a primary restriction to integrating IMC in the larger, heterogeneous architectures required for practical workloads, due to the challenges of creating the robust abstractions needed across the hardware and software stack. This work exploits recent progress in high-SNR IMC to achieve a programmable heterogeneous microprocessor architecture, implemented in 65-nm CMOS, together with software interfaces that enable mapping of application workloads. The architecture consists of a 590-kb IMC accelerator, a configurable digital near-memory-computing (NMC) accelerator, a RISC-V CPU, and other peripherals. To enable programmability, the microarchitectural design of the IMC accelerator provides integration in the standard processor memory space, area- and energy-efficient analog-to-digital conversion for interfacing to NMC, bit-scalable computation (1–8 b), and input-vector-sparsity-proportional energy consumption. The IMC accelerator demonstrates excellent matching between computed outputs and idealized software-modeled outputs, at 1-b TOPS/W of 192|400 and 1-b TOPS/mm² of 0.60|0.24 for the MAC hardware, at VDD of 1.2|0.85 V, both of which scale directly with the bit precision of the input-vector and matrix elements. Software libraries developed for application mapping are used to demonstrate CIFAR-10 image classification with a ten-layer CNN, achieving accuracy, throughput, and energy of 89.3%|92.4%, 176|23 images/s, and 5.31|105.2 μJ/image for 1|4-b quantization levels.
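The bit-scalable computation mentioned above can be understood through bit-serial decomposition: a multi-bit matrix-vector multiplication is built from the 1-b MAC operations an IMC column array natively performs, with per-bit partial results recombined by powers of two. The sketch below is an illustrative model of that idea only (the function name and unsigned-input assumption are mine, not from the paper); it also shows why energy can track input-vector sparsity, since all-zero bit-planes contribute nothing and could be skipped in hardware.

```python
import numpy as np

def bit_serial_mac(x, W, n_bits):
    """Model of bit-scalable MAC: compute W @ x for an n_bits-bit unsigned
    input vector x and a binary matrix W by evaluating one 1-b MAC per
    input bit-plane and recombining the results with powers of two."""
    acc = np.zeros(W.shape[0], dtype=np.int64)
    for b in range(n_bits):
        bit_plane = (x >> b) & 1       # 1-b input vector for this pass
        if not bit_plane.any():
            continue                   # sparsity: an all-zero plane costs nothing
        acc += (W @ bit_plane) << b    # 1-b MAC result, weighted by 2**b
    return acc

# The decomposition reproduces the full-precision result exactly.
x = np.array([5, 3, 7], dtype=np.int64)   # 3-b unsigned inputs
W = np.array([[1, 0, 1],
              [0, 1, 1]], dtype=np.int64)  # binary weight matrix
assert np.array_equal(bit_serial_mac(x, W, 3), W @ x)
```

Multi-bit weights can be handled the same way, by summing bit-plane matrices with their own binary weights, which is consistent with throughput and energy scaling directly with both input and matrix precision.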
Authors: Hongyang Jia, Hossein Valavi, Yinqi Tang, Jintao Zhang and Naveen Verma
IEEE Journal of Solid-State Circuits, vol. 55, no. 9, September 2020
https://doi.org/10.1109/JSSC.2020.2987714