An Analog RRAM System Quickly and Accurately Computes Matrix Equations

Conceptual diagram of our high-precision analog matrix inversion solver. Image Credits: Zhong Sun, Peking University.

Analog computers perform calculations by manipulating physical quantities—such as electrical currents—that correspond to mathematical variables, rather than representing data with discrete binary values (0 and 1) as digital computers do.

Although analog computing systems can handle general-purpose tasks effectively, they are often vulnerable to noise (i.e., background or external interference) and typically less accurate than their digital counterparts.

High-Precision RRAM Analog Computer by Peking University

A research team from Peking University and the Beijing Advanced Innovation Center for Integrated Circuits has created a scalable analog computing device capable of solving matrix equations with exceptional precision. A paper in Nature Electronics describes a system built using miniature non-volatile memory components called resistive random-access memory (RRAM) chips.

“I have been studying analog computing since 2017,” said Zhong Sun, an assistant professor at Peking University and the paper’s senior author, in an interview with Tech Xplore.

“We call our method modern analog computing because it targets solving matrix equations—unlike traditional analog computing, which focuses on differential equations—and relies on nonvolatile resistive memory arrays rather than standard CMOS circuits.”

Sun’s Team Pursues High-Precision Analog Computing

Over the last ten years, Sun and his team have created numerous analog computing systems. However, most of these designs demonstrated considerably lower precision than digital computers when executing computational tasks, limiting their practicality for real-world use.

“Around 2022, we began tackling this challenge head-on, with the goal of achieving high-precision analog computing comparable to modern digital systems,” Sun explained.

“In our latest paper, we present a fully analog matrix equation solver that attains 24-bit fixed-point precision (equivalent to FP32). This was accomplished by integrating a low-precision matrix inversion circuit—originally developed in 2019—with high-precision matrix–vector multiplication using bit-slicing across multiple resistive memory arrays.”
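The bit-slicing technique mentioned above can be illustrated in software. The sketch below is a hypothetical illustration, not the authors' implementation: a fixed-point matrix is split into low-bit slices, as if each slice were stored on a separate low-precision resistive memory array, and the partial matrix–vector products are recombined with binary weights to recover a high-precision result.

```python
import numpy as np

def bit_slice_matvec(A_fixed, x, n_bits=8, slice_bits=2):
    """Hypothetical sketch of bit-sliced matrix-vector multiplication.

    Each slice holds only `slice_bits` of every matrix entry (standing in
    for one low-precision analog array); partial products are summed with
    the appropriate binary weights to reconstruct the full-precision result.
    """
    A_int = A_fixed.astype(np.int64)
    mask = (1 << slice_bits) - 1
    # Split the matrix into low-bit slices, least significant first.
    slices = [(A_int >> shift) & mask for shift in range(0, n_bits, slice_bits)]
    y = np.zeros(A_fixed.shape[0], dtype=np.int64)
    for i, S in enumerate(slices):
        # Each partial product is weighted by its slice's binary position.
        y += (S @ x) << (i * slice_bits)
    return y
```

Because the slices sum (with their weights) back to the original matrix, the recombined output matches an exact integer matrix–vector product even though each individual array only ever stores a few bits per entry.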

From Low-Precision Solvers to Hybrid Analog Computing

The team’s new analog matrix equation solver builds upon a circuit Sun and his collaborators designed in 2019 during his postdoctoral work at Politecnico di Milano. Although that earlier circuit could solve matrix equations of the form Ax = b in a single step, its accuracy was notably lower than that of digital systems.

“In our latest work, we combined the low-precision solver with high-precision matrix–vector multiplication using a standard bit-slicing method, allowing for iterative refinement of the results,” Sun said.

“During each iteration, the low-precision inversion circuit generates an approximate solution, while the high-precision operation adjusts it by determining the correction’s direction and magnitude. This hybrid strategy achieves rapid convergence—much faster than traditional gradient-descent-based methods.”
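The hybrid loop Sun describes can be mimicked numerically. In the hypothetical sketch below (an illustration under stated assumptions, not the authors' circuit), `A_inv_lp` stands in for the low-precision analog inversion stage and the exact residual `b - A @ x` stands in for the high-precision bit-sliced matrix–vector multiply; the loop converges as long as the approximate inverse is reasonably close to the true one.

```python
import numpy as np

def refine_solution(A, b, A_inv_lp, tol=1e-6, max_iter=50):
    """Hypothetical iterative-refinement sketch of the hybrid scheme.

    A_inv_lp plays the role of the low-precision analog inversion circuit;
    the residual computed with full-precision arithmetic plays the role of
    the high-precision matrix-vector multiplication.
    """
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x            # high-precision residual
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        x = x + A_inv_lp @ r     # low-precision correction step
    return x
```

Each pass shrinks the error by roughly the factor ‖I − A_inv_lp·A‖, so a crude inverse still yields rapid convergence — much faster than gradient-descent iterations, whose step count grows with the matrix's condition number.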

To validate the scalability of their analog computing approach, the team built an 8×8 array-based circuit and tested it on various matrix equations. The circuit successfully solved 16×16 matrix equations and was later extended to handle larger ones, such as 32×32 systems.

Scaling High-Precision Analog Computing for AI and Communications

The team’s matrix equation solver could be further enhanced and may pave the way for the creation of other high-precision analog computing systems. In the long term, this technology could play a key role in advancing fields such as wireless communications and artificial intelligence (AI).

“Our most significant achievement is showing that fully analog matrix computing can reach a precision level comparable to floating-point digital systems while maintaining scalability,” Sun noted.

“Our next objective is to expand the system by developing larger array-based circuits and integrating all components onto a single chip, combining matrix inversion and matrix–vector multiplication into one unified, chip-level platform.”


Read the original article on: Tech Xplore

Read more: Qualcomm Set to Challenge Nvidia with its own Line of AI Chips
