Compute-in-Memory Chip Promises Increased Efficiency and Privacy in Federated Learning Systems

Machine Learning


New and promising memory chips for federated learning

Federated learning (FL) using memristor chips. Credit: Nature Electronics (2025). DOI: 10.1038/s41928-025-01390-6

Over the past few decades, computer scientists have developed increasingly sophisticated machine learning techniques that can detect specific patterns by analyzing large amounts of data and learn to complete tasks effectively. However, some studies have highlighted the vulnerabilities of some AI-based tools, showing that the sensitive information they are fed could potentially be accessed by malicious third parties.

A machine learning approach that can provide greater data privacy is federated learning, which entails the collaborative training of shared neural networks by various users or stakeholders who do not have to exchange raw data with one another. This technique could benefit any application of AI, but is particularly advantageous when applied to sectors known to handle highly sensitive user data, such as healthcare and finance.
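To make the idea concrete, the minimal Python sketch below shows plain federated averaging: each of four participants trains on its own private data and shares only model weights, which are then averaged into a new global model. All names and data here are hypothetical illustrations; the chip described in the paper additionally protects the exchanged updates with homomorphic encryption, which this sketch omits.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    """One step of local training (here: logistic-regression gradient descent)."""
    preds = 1.0 / (1.0 + np.exp(-data @ weights))
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_round(global_weights, participants):
    """Each participant trains locally; only weights are shared and averaged."""
    updates = [local_update(global_weights.copy(), X, y) for X, y in participants]
    return np.mean(updates, axis=0)   # federated averaging (FedAvg)

# Four participants, each with private data that never leaves their device.
rng = np.random.default_rng(0)
participants = [(rng.normal(size=(32, 8)), rng.integers(0, 2, size=32))
                for _ in range(4)]

weights = np.zeros(8)
for _ in range(20):                    # 20 communication rounds
    weights = federated_round(weights, participants)
```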

Researchers at Tsinghua University, China Mobile Research Institute and Hebei University have recently developed a new compute-in-memory chip based on memristors, electronic components that can both perform computations and store information by adapting their resistance according to the currents that flowed through them in the past. Their proposed chip, outlined in a paper published in Nature Electronics, was found to increase both the efficiency and the security of the federated learning approach.

“Federated learning provides a framework for multiple participants to collectively train neural networks while maintaining data privacy, and is generally achieved through homomorphic encryption,” write Xueqi Li, Bin Gao and their colleagues. “However, implementing this approach at the local edge requires significant key generation, error polynomial generation and extensive computation, resulting in substantial time and energy consumption.

“We report a memristor compute-in-memory chip architecture with an in situ physical unclonable function for key generation and an in situ true random number generator for error polynomial generation.”

Because it can both perform computations and store information, the new memristor-based architecture proposed by the researchers can reduce data movement and limit the energy required by the various stakeholders collectively training artificial neural networks (ANNs) via federated learning.

The team's chip also includes a physical unclonable function, a hardware-based technique that generates secure keys for encrypted communications, and a true random number generator, a means of producing unpredictable numbers for encryption.
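Conceptually, the physical unclonable function supplies a device-unique secret from which a key can be derived, while the true random number generator supplies fresh randomness such as the error polynomials used by the encryption scheme. The short Python sketch below only illustrates these two roles in software, using a hard-coded stand-in for the PUF response and Python's secrets module as a stand-in for the TRNG; on the actual chip both are derived from memristor physics.

```python
import hashlib
import secrets

# Stand-in for a PUF response: on the real chip this bit pattern comes from
# uncontrollable device-to-device variation in the memristor array.
puf_response = bytes([0b10110010, 0b01101100, 0b11100001, 0b00011011])

# Key generation: hash the device-unique PUF response into a fixed-length key.
secret_key = hashlib.sha256(puf_response).digest()

# Stand-in for the TRNG: fresh random values, e.g. to sample an error polynomial
# with small coefficients (here in {-1, 0, 1}) for the encryption scheme.
def sample_error_polynomial(degree):
    return [secrets.randbelow(3) - 1 for _ in range(degree)]

error_poly = sample_error_polynomial(16)
print(secret_key.hex()[:16], error_poly)
```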

“Our architecture includes a competitive formation array operation, a memristor-based entropy extraction circuit design and a redundant residue number system-based encoding scheme, with low-error-rate computation, physical unclonable functions and true random number generators implemented within the same memristor array,” the researchers wrote.
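The redundant residue number system mentioned above represents a number by its remainders modulo several pairwise coprime bases, with extra redundant bases that make a faulty residue detectable, which helps when individual analog computations are noisy. The sketch below illustrates the basic encode, reconstruct and check steps with arbitrarily chosen small moduli rather than the chip's actual parameters.

```python
from math import prod

MODULI = [7, 11, 13]        # information moduli (pairwise coprime)
REDUNDANT = [17, 19]        # redundant moduli used only for error checking

def rrns_encode(x):
    return [x % m for m in MODULI + REDUNDANT]

def crt(residues, moduli):
    """Chinese remainder theorem reconstruction."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)    # modular inverse of Mi mod m
    return x % M

def rrns_decode(residues):
    k = len(MODULI)
    x = crt(residues[:k], MODULI)              # reconstruct from information part
    ok = [x % m for m in REDUNDANT] == residues[k:]   # redundant residues must agree
    return x, ok

code = rrns_encode(523)
print(rrns_decode(code))                       # (523, True)
code[1] = (code[1] + 3) % MODULI[1]            # inject a fault into one residue
print(rrns_decode(code))                       # reconstruction no longer consistent
```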

“To illustrate the capabilities of this memristor-based federated learning, we conducted a case study in which four participants collaboratively trained a two-layer long short-term memory network with 482 weights for sepsis prediction.”

To assess the potential of their compute-in-memory chip, the researchers used it to enable the collective training of a long short-term memory (LSTM) network, a deep learning technique often used to make predictions from sequential data such as text or medical records. Four participants co-trained this network to predict sepsis, a serious and potentially fatal condition arising from severe infection, based on patient health data.
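As a rough software analog of that case study, the sketch below defines a very small two-layer LSTM classifier in PyTorch. The input size, hidden size and data are placeholders chosen for illustration and do not reproduce the paper's 482-weight model or its clinical dataset.

```python
import torch
import torch.nn as nn

class SepsisLSTM(nn.Module):
    """Tiny two-layer LSTM that maps a sequence of vitals to a sepsis probability."""
    def __init__(self, n_features=4, hidden=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time steps, features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))   # prediction from last step

model = SepsisLSTM()
print(sum(p.numel() for p in model.parameters()), "trainable parameters")

# Placeholder batch: 8 patients, 24 hourly measurements of 4 vitals each.
x = torch.randn(8, 24, 4)
print(model(x).shape)                          # torch.Size([8, 1])
```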

“The test accuracy of the 128 kb memristor array is only 0.12% lower than that achieved with software-based federated learning,” the authors write. “Our approach also shows reduced energy and time consumption compared with conventional digital federated learning.”

Overall, the results of this recent study highlight the potential of memristor-based compute-in-memory architectures for enhancing the efficiency and privacy of federated learning implementations. In the future, the chip developed by Li, Gao and their colleagues could be further improved and used to collaboratively train other deep learning algorithms on a variety of real-world tasks.

Written for you by Ingrid Fadelli, edited by Lisa Lock, and fact-checked and reviewed by Robert Egan. This article is the result of careful human work.

More information:
Xueqi Li et al, Federated learning using memristor compute-in-memory chips with in situ physical unclonable functions and true random number generators, Nature Electronics (2025). DOI: 10.1038/s41928-025-01390-6

©2025 Science X Network

Citation: Compute-in-memory chip demonstrates potential for improved efficiency and privacy in federated learning systems, retrieved June 25, 2025 from https://techxplore.com/2025-06.

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.




