Compute-in-Memory Devices and Architectures for Efficient Information Processing

Wei Lu, University of Michigan

11:30 a.m., February 14, 2025   |   136 DeBartolo Hall

Modern computing needs are increasingly limited by the latency and energy costs of memory access. Emerging memory devices such as resistive random-access memory (RRAM) have shown potential to enable efficient computing architectures, as data can be mapped to the conductance values of RRAM devices and computation can be performed directly in memory. Specifically, by converting input activations into voltage pulses, vector-matrix multiplications (VMM) can be performed in the analog domain, in place and in parallel, thus achieving high energy efficiency during operation.
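
The following is a minimal sketch (not taken from the talk) of the analog VMM idea described above: signed weights are mapped to non-negative device conductances using a differential pair per weight, input activations are encoded as read voltages, and each column current is a dot product by Ohm's law and Kirchhoff's current law. All names and parameter values are illustrative assumptions.

```python
import numpy as np

G_MAX = 100e-6   # assumed maximum device conductance (100 uS)
V_READ = 0.2     # assumed read-voltage amplitude (V)

def map_weights_to_conductance(W):
    """Map a signed weight matrix to a pair of non-negative conductance arrays."""
    scale = G_MAX / np.abs(W).max()
    G_pos = np.clip(W, 0, None) * scale      # positive weights
    G_neg = np.clip(-W, 0, None) * scale     # magnitudes of negative weights
    return G_pos, G_neg, scale

def crossbar_vmm(x, G_pos, G_neg, scale):
    """Analog VMM: inputs become voltages, each column sums currents in parallel."""
    v = x * V_READ                           # encode activations as voltage pulses
    i_out = v @ G_pos - v @ G_neg            # column currents, I = V * G
    return i_out / (V_READ * scale)          # convert currents back to weight units

W = np.random.randn(64, 32)                  # layer weights
x = np.random.rand(64)                       # input activations
G_pos, G_neg, scale = map_weights_to_conductance(W)
y_analog = crossbar_vmm(x, G_pos, G_neg, scale)
print(np.allclose(y_analog, x @ W))          # matches the digital result (ideal devices)
```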

In this presentation, I will discuss how practical neural network models can be mapped onto realistic RRAM arrays in a modular design. System performance metrics including throughput and energy efficiency will be discussed. Challenges such as quantization effects, finite array size, and device non-idealities will be analyzed, and techniques such as fine-grained structured pruning and tensor-train factoring will be explored to address memory capacity concerns. At the architecture level, an effective compiler needs to be developed to map the network graph onto the tiled weight-stationary architecture, and examples of different generations of networks will be presented (a simplified sketch of such a tiled mapping follows below). Beyond VMM, the internal dynamics of RRAM devices can be used to natively process temporal information embedded in spiking inputs. Examples of temporal processing in the form of reservoir computing systems and second-order memristor networks will be discussed. These systems can be directly integrated with neuromorphic sensors such as event-based cameras, or potentially form efficient bio-electronic interfaces.
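
As a rough illustration of the tiled, weight-stationary mapping and the quantization and finite-array-size effects mentioned above (the tile size, level count, and quantization scheme below are assumptions, not the speaker's compiler), one layer's weight matrix can be partitioned into fixed-size tiles, each tile quantized to a limited number of conductance levels, with partial products accumulated digitally across tiles:

```python
import numpy as np

TILE = 128      # assumed crossbar array size (rows x cols)
LEVELS = 16     # assumed number of programmable conductance levels per device

def quantize(W_tile):
    """Uniformly quantize a tile's weights to LEVELS discrete values."""
    w_max = np.abs(W_tile).max() + 1e-12
    step = 2 * w_max / (LEVELS - 1)
    return np.round(W_tile / step) * step

def tiled_vmm(x, W):
    """Weight-stationary VMM over a grid of quantized, fixed-size tiles."""
    rows, cols = W.shape
    y = np.zeros(cols)
    for r in range(0, rows, TILE):
        for c in range(0, cols, TILE):
            tile = quantize(W[r:r+TILE, c:c+TILE])   # programmed once, reused for all inputs
            y[c:c+TILE] += x[r:r+TILE] @ tile        # partial sums accumulated across row tiles
    return y

W = np.random.randn(512, 256)
x = np.random.rand(512)
y = tiled_vmm(x, W)
print(np.abs(y - x @ W).max())   # residual error from the finite conductance levels
```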

Wei Lu is the James R. Mellor Professor of Engineering and a professor in the electrical engineering and computer science department at the University of Michigan – Ann Arbor. He received a B.S. in physics from Tsinghua University, Beijing, China, in 1996, and a Ph.D. in physics from Rice University, Houston, TX, in 2003.

From 2003 to 2005, he was a postdoctoral research fellow at Harvard University, Cambridge, MA. He joined the faculty of the University of Michigan in 2005. His research interests include resistive random-access memory (RRAM) and memristor devices, in-memory computing systems, neuromorphic computing systems, aggressively scaled transistor devices, and electrical transport in low-dimensional systems. To date, Prof. Lu has published over 200 journal and conference articles with 43,000 citations and an h-index of 93. He is a recipient of the NSF CAREER award, an IEEE Fellow, and a co-founder of Crossbar Inc., which develops RRAM products, and MemryX Inc., which develops efficient AI computing chips.