Published Papers

A Programmable Neural-network Inference Accelerator Based on Scalable In-Memory Computing

Published on October 12, 2022

This paper presents a scalable neural-network (NN) inference accelerator in 16nm, based on an array of programmable cores employing mixed-signal In-Memory Computing (IMC), digital Near-Memory Computing (NMC), and localized buffering/control. IMC achieves high energy efficiency and throughput for the matrix-vector multiplications (MVMs) that dominate NNs, but scalability poses numerous challenges: technologically, in moving to advanced nodes to maintain gains over digital architectures, and architecturally, in fully executing diverse NNs. Recent demonstrations have explored integrating IMC in programmable processors [1,2], but have not achieved IMC-level efficiency and throughput for full executions. The central challenge is the drastically different physical design point, and associated tradeoffs, incurred by IMC compared to digital engines. Namely, IMC substantially increases compute energy efficiency and HW density/parallelism, but retains the overheads of HW virtualization (state and data swapping/buffering/communication across spatial/temporal computation mappings). To overcome these overheads, the demonstrated architecture is co-designed with SW mapping algorithms (encapsulated in a custom graph compiler) to provide efficiency across a broad range of mapping strategies.
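To make the spatial/temporal mapping problem concrete, the sketch below shows how a large MVM can be tiled across a grid of fixed-size compute cores, with partial sums reduced outside the tiles. This is a minimal, hypothetical illustration of the general tiling idea, not the paper's compiler or hardware; the core array size and all function names are assumptions for the example.

```python
import numpy as np

# Hypothetical illustration only (not the paper's actual compiler/hardware):
# a large matrix-vector multiplication (MVM) is tiled across a grid of
# cores, each holding one fixed-size weight block, as an IMC array would.
CORE_ROWS, CORE_COLS = 256, 256  # assumed per-core array dimensions

def tiled_mvm(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Compute y = W @ x by dispatching weight tiles to (virtual) cores."""
    M, N = W.shape
    y = np.zeros(M)
    for r in range(0, M, CORE_ROWS):        # mapping over output rows
        for c in range(0, N, CORE_COLS):    # mapping over input columns
            W_tile = W[r:r + CORE_ROWS, c:c + CORE_COLS]  # weights resident in one core
            x_tile = x[c:c + CORE_COLS]                   # input slice broadcast to that core
            # Each tile MVM is the operation an IMC array accelerates;
            # the running accumulation stands in for digital near-memory reduction.
            y[r:r + CORE_ROWS] += W_tile @ x_tile
    return y

# Example: a 1024x1024 layer maps onto a 4x4 grid of tiles.
W = np.random.randn(1024, 1024)
x = np.random.randn(1024)
assert np.allclose(tiled_mvm(W, x), W @ x)
```

Even in this toy form, the loop structure hints at the virtualization overheads the paper targets: inputs must be buffered and broadcast to tiles, and partial sums communicated and accumulated, which is why the architecture and the graph compiler's mapping strategies are co-designed.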

Authors: Hongyang Jia, Murat Ozatay, Yinqi Tang, Hossein Valavi, Rakshit Pathak, Jinseok Lee and Naveen Verma

2021 IEEE International Solid-State Circuits Conference (ISSCC)
https://doi.org/10.1109/ISSCC42613.2021.9365788
