Published Papers
-
A Programmable Neural-Network Inference Accelerator Based on Scalable In-Memory Computing
This paper presents a scalable neural-network (NN) inference accelerator in 16nm, based on an array of programmable cores employing mixed-signal In-Memory Computing (IMC), digital Near-Memory Computing (NMC), and localized buffering/control. IMC achieves high energy efficiency and throughput for the matrix-vector multiplications (MVMs) that dominate NNs; however, scalability poses numerous challenges, both technologically, going to advanced nodes…
-
Fully Row/Column-Parallel In-Memory Computing SRAM Macro Employing Capacitor-Based Mixed-Signal Computation with 5-b Inputs
This paper presents an in-memory computing (IMC) macro in 28nm for fully row/column-parallel matrix-vector multiplication (MVM), exploiting precise capacitor-based analog computation to extend from binary input-vector elements to 5-b input-vector elements, for a 16x increase in energy efficiency and a 5x increase in throughput. The 1152 (row) x 256 (col.) macro employs multi-level input drivers based on a digital-switch DAC implementation,…
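To illustrate the operation the abstract describes, the following is a minimal behavioral sketch (not the macro's actual circuit) of one column of a capacitor-based IMC array with binary stored weights and multi-level (5-b) inputs. The function name `column_mvm` and the idealized charge-sharing model are assumptions for illustration only.

```python
# Hedged behavioral sketch, assuming: binary weights stored in the column's
# bit cells, 5-b unsigned inputs driven as multi-level voltages (digital-switch
# DAC), equal-valued capacitors, and ideal charge sharing along the column.

def column_mvm(weights, inputs, input_bits=5):
    """weights: list of 0/1 bits stored in one column's cells.
    inputs: unsigned input-vector elements in [0, 2**input_bits - 1].
    Returns the charge-shared column output, normalized to [0, 1]."""
    assert len(weights) == len(inputs)
    full_scale = 2 ** input_bits - 1  # 31 for 5-b inputs
    # Each cell drives its capacitor to Vdd * (x / full_scale) when w == 1,
    # else 0 V; the product is formed by the cell, not a separate multiplier.
    cell_voltages = [w * (x / full_scale) for w, x in zip(weights, inputs)]
    # Charge sharing across equal caps averages the per-cell voltages.
    return sum(cell_voltages) / len(weights)

# Example: a 4-row column; cells with w=0 contribute nothing.
column_mvm([1, 0, 1, 1], [31, 12, 0, 31])  # → 0.5
```

Note how the analog output is a normalized dot product: the precision of the capacitors, rather than transistor matching, sets the computation accuracy, which is what motivates the capacitor-based approach.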
-
In-Memory Computing: Advances and Prospects
High-dimensionality matrix-vector multiplication (MVM) is a dominant kernel in signal-processing and machine-learning computations that are being deployed in a range of energy- and throughput-constrained applications. In-memory computing (IMC) exploits the structural alignment between a dense 2D array of bit cells and the dataflow in MVM, enabling opportunities to address computational energy and throughput. Recent prototypes…
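The structural alignment mentioned above can be sketched in a few lines: the matrix stays in place as a 2D array (as weights stay in the bit-cell array), each input element is broadcast along its row, and each column accumulates its partial products locally. The function `mvm` below is an illustrative reference model, not code from the paper.

```python
# Minimal sketch of the dataflow IMC exploits: row-wise input broadcast,
# column-wise local accumulation, with the matrix stationary in the array.

def mvm(matrix, vector):
    """matrix: row-major list of rows (rows = input dim, cols = output dim).
    vector: input-vector elements, one per matrix row."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    out = [0] * n_cols
    for i in range(n_rows):        # vector[i] broadcast across row i
        for j in range(n_cols):    # column j accumulates in place
            out[j] += matrix[i][j] * vector[i]
    return out

mvm([[1, 2], [3, 4], [5, 6]], [1, 0, 1])  # → [6, 8]
```

In an IMC array the inner accumulation happens physically in the column (e.g., by charge or current summation), so data movement is limited to inputs in and outputs out, rather than reading every matrix element through a memory interface.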
-
A Programmable Heterogeneous Microprocessor Based on Bit-Scalable In-Memory Computing
In-memory computing (IMC) addresses the cost of accessing data from memory, in a manner that introduces a tradeoff between energy/throughput and computation signal-to-noise ratio (SNR). However, low SNR has posed a primary restriction to integrating IMC in the larger, heterogeneous architectures required for practical workloads, due to the challenges of creating the robust abstractions necessary for the hardware…
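Bit-scalable computation of the kind the title refers to can be sketched as follows, under an assumed shift-and-add recombination scheme: multi-bit operands are decomposed into binary bit planes, each pair of planes is computed as a binary dot product (the operation the analog array handles robustly), and the partial results are recombined digitally with power-of-two weights. The function `bit_scalable_dot` is a hypothetical reference model, not the chip's implementation.

```python
# Hedged sketch of bit-scalable MVM, assuming unsigned operands and digital
# shift-and-add recombination of binary dot products.

def bit_scalable_dot(weights, inputs, w_bits, x_bits):
    """Unsigned dot product built entirely from binary dot products."""
    acc = 0
    for wb in range(w_bits):                 # weight bit planes
        w_plane = [(w >> wb) & 1 for w in weights]
        for xb in range(x_bits):             # input bits applied serially
            x_plane = [(x >> xb) & 1 for x in inputs]
            binary_dot = sum(a * b for a, b in zip(w_plane, x_plane))
            acc += binary_dot << (wb + xb)   # digital recombination
    return acc

bit_scalable_dot([3, 1], [2, 5], w_bits=2, x_bits=3)  # → 11 (= 3*2 + 1*5)
```

Because each analog operation is only a binary dot product, precision scales by repeating a fixed, well-characterized computation, which is one way to make the SNR tradeoff manageable at the architecture level.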
News
-
EnCharge AI reimagines computing to meet needs of cutting-edge AI
Princeton University — A startup based on Princeton research is rethinking the computer chip with a design that increases performance, efficiency and capability to match the computational needs of technologies that use AI. Using a technique called in-memory computing…
-
EnCharge AI launches with $21.7M Series A to enable Edge AI at scale
Following years of research and development at Princeton University, EnCharge AI emerges from stealth led by a world-class, multi-disciplinary team from Meta, NVIDIA, Qualcomm, and IBM.