Biomolecular computation is an expanding area of focus in artificial neural network (NN) development. In this field, DNA strands and protein enzymes play the role that digital inputs play in classical silicon chip-based calculations.
Recent research has adopted chemical reaction networks (CRNs), which perform calculations through biochemical processes by representing the interactions among biochemical species in graphical form. However, CRNs often suffer from random fluctuations, or noise, in which small disturbances lead to large output deviations.
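To make the CRN idea concrete, here is a minimal sketch (my illustration, not taken from the paper) of a toy reaction network, A + B → C, simulated with deterministic mass-action kinetics and forward Euler integration. The rate constant and time step are arbitrary choices.

```python
# Toy CRN sketch (not the paper's network): A + B -> C under mass-action
# kinetics, integrated with forward Euler.
def simulate_crn(a0, b0, c0, k=1.0, dt=1e-3, steps=5000):
    """Integrate d[C]/dt = k*[A]*[B]; A and B are each consumed 1:1."""
    a, b, c = a0, b0, c0
    for _ in range(steps):
        rate = k * a * b      # mass-action rate for A + B -> C
        a -= rate * dt
        b -= rate * dt
        c += rate * dt
    return a, b, c

a, b, c = simulate_crn(1.0, 0.5, 0.0)
# The totals a + c and b + c are conserved by the reaction stoichiometry.
```

In a CRN-based NN, species concentrations like these encode the network's signals and parameters, and the reaction graph encodes how they interact.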
Sunghwa Kang and Jinsu Kim designed CRNs that implement both NN calculations and the training processes that support them, reducing the effects of noise and ensuring more reliable operation under stochastic variations.
“In contrast to previous CRN-based implementations, which often rely on piecewise or discrete functions, a key distinctive feature of our approach is the use of smooth activation functions in which derivatives are continuous,” Kim said. “This is important for NN training, as the gradient determines how the parameters are updated. As chemical reaction systems inherently exhibit noise, it is essential that the calculations be robust to such variations.”
The researchers also achieved “one-pot computing,” in which the calculation and training processes occur simultaneously. Previous studies required alternating calculation and training stages to achieve full NN functionality.
“In contrast, CRN responses progress simultaneously, and the division of calculations and training is dominated by timescale separation,” Kim said. “Specifically, it can slow down certain responses related to training dynamics to prevent interference with faster forward calculations. Interestingly, this naturally mimics the classic gradient descent-based optimization that updates NN parameters with small step sizes.”
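The timescale-separation idea can be sketched with a minimal two-variable toy model (my illustration, not the paper's CRN): a fast variable `z` tracks the forward calculation, while a slow weight `w` drifts down the loss gradient at a rate `eps` much smaller than one, mimicking gradient descent with a small step size.

```python
# Hedged sketch of timescale separation (illustrative toy model, not the
# paper's CRN). The fast variable z relaxes to the forward output w*x;
# the slow weight w follows the gradient of the loss 0.5*(z - y)^2 at a
# rate eps << 1, so training barely perturbs the fast dynamics.
def train_timescale_separated(x, y, w=0.0, z=0.0,
                              eps=0.01, dt=0.01, steps=20000):
    for _ in range(steps):
        z += (w * x - z) * dt          # fast: forward calculation
        w += -eps * (z - y) * x * dt   # slow: small-step gradient descent
    return w, z

w, z = train_timescale_separated(x=2.0, y=1.0)
# z approaches the target y, and w approaches y/x = 0.5.
```

Because `z` equilibrates much faster than `w` moves, the slow update always sees an (approximately) completed forward calculation, which is the separation the quote describes.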
Source: “Noise Robust Training of Artificial Neural Networks Using Chemical Reaction Networks,” Sunghwa Kang and Jinsu Kim, APL Machine Learning (2025). The article is available at https://doi.org/10.1063/5.0271766 .
