The researchers say their analog network of resistors enables “processor-less machine learning.”

Researchers at the University of Pennsylvania have come up with an interesting approach that could help address the ever-growing power demands in the field of machine learning: working directly with analog networks of resistors, rather than using a processor.

“Standard deep learning algorithms require differentiating large nonlinear networks, which is a time-consuming and power-intensive process,” the researchers explain. “Electronic learning metamaterials have the potential to provide fast, efficient, and fault-tolerant hardware for analog machine learning, but existing implementations are linear, severely limiting their capabilities. Because these systems are very different from artificial neural networks or brains, the feasibility and usefulness of incorporating nonlinear elements has not been explored.”

Until now, that is. The team's research introduces a nonlinear learning metamaterial: an analog electronic network of transistor-based resistive elements. It is not a traditional digital processor and cannot perform the tasks one can, but it is tailored specifically to machine learning workloads, and it has proven capable of performing calculations that linear systems cannot handle, with no processor involved beyond an Arduino Due used to take measurements and interface with MATLAB.

“Each resistor is simple and meaningless in itself,” physicist Sam Dillavou, first author of the study, explained in an interview with MIT Technology Review. “But when you put them in a network, you can train them to do different things.”
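To give a flavor of how a network of simple resistive elements can be “trained,” the sketch below simulates a contrastive “coupled learning” rule, the kind of local update scheme used in the group's earlier processor-free demonstrations, on a toy network of ideal linear resistors. This is a minimal illustration under stated assumptions, not the paper's circuit: the graph, node roles, task, and constants are all invented for the example, and the real metamaterial uses nonlinear transistor-based elements rather than ideal resistors.

```python
import numpy as np

# Toy contrastive "coupled learning" on an ideal linear resistor network.
# Each conductance updates using only quantities local to its own edge:
#   delta_k ~ (lr / eta) * (dV_free**2 - dV_clamped**2)
# All details below (graph, node roles, task, constants) are illustrative.

rng = np.random.default_rng(0)

N = 8                                            # nodes in the network
edges = [(i, j) for i in range(N) for j in range(i + 1, N)]
k = rng.uniform(0.5, 1.5, len(edges))            # trainable edge conductances

IN0, IN1, GND, OUT = 0, 1, 2, 3                  # assumed node roles

def solve(k, fixed):
    """Node voltages from Kirchhoff's laws, given fixed-voltage nodes."""
    L = np.zeros((N, N))                         # conductance-weighted Laplacian
    for (a, b), g in zip(edges, k):
        L[a, a] += g; L[b, b] += g
        L[a, b] -= g; L[b, a] -= g
    fixed_idx = list(fixed)
    free = [n for n in range(N) if n not in fixed]
    V = np.zeros(N)
    V[fixed_idx] = [fixed[n] for n in fixed_idx]
    V[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, fixed_idx)] @ V[fixed_idx])
    return V

def drops(V):
    return np.array([V[a] - V[b] for a, b in edges])

eta, lr = 0.1, 0.05                              # nudge size, learning rate
for _ in range(2000):
    x0, x1 = rng.uniform(0, 1, 2)
    target = 0.6 * x0 + 0.3 * x1                 # task the network should learn
    pins = {IN0: x0, IN1: x1, GND: 0.0}
    Vf = solve(k, pins)                          # free state: output floats
    nudged = Vf[OUT] + eta * (target - Vf[OUT])  # clamp output slightly toward goal
    Vc = solve(k, {**pins, OUT: nudged})         # clamped state
    k += (lr / eta) * (drops(Vf) ** 2 - drops(Vc) ** 2)  # local rule, per edge
    k = np.clip(k, 1e-3, None)                   # conductances stay positive

x0, x1 = 0.8, 0.4
V = solve(k, {IN0: x0, IN1: x1, GND: 0.0})
print(f"output {V[OUT]:.3f} vs target {0.6 * x0 + 0.3 * x1:.3f}")
```

The point of the rule is that no processor computes the updates: in hardware, each edge compares its own voltage drops in the two states and adjusts itself, which is what makes the scheme local, fast, and fault-tolerant.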

The team had already demonstrated the core technique in image classification networks, and its latest work extends the concept to nonlinear regression and exclusive-or (XOR) operations. Even better, the technique shows the potential to outperform traditional approaches that throw the problem at digital processors. “We find that our nonlinear learning metamaterial reduces modes of training error in order (mean, then slope, then curvature),” the team claims, “similar to spectral bias in artificial neural networks.”
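XOR is the classic illustration of why that nonlinearity matters, and a quick sketch makes the point: no linear model can fit the XOR truth table, while even a tiny nonlinear network handles it easily. The network below is an ordinary software toy for intuition, with assumed sizes and constants; it is not a model of the team's circuit.

```python
import numpy as np

# XOR: no linear read-out of the two inputs can reproduce it, but a small
# nonlinear hidden layer solves it. Sizes and constants are illustrative.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Best purely linear fit (with a bias term), by least squares:
A = np.hstack([X, np.ones((4, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print("linear model:", (A @ w).round(2))         # all 0.5: complete failure

# A 2 -> 4 -> 1 network with a tanh nonlinearity, trained by gradient descent:
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=4), 0.0
lr = 0.3
for _ in range(8000):
    H = np.tanh(X @ W1 + b1)                     # nonlinear hidden layer
    err = H @ W2 + b2 - y                        # prediction error
    dH = np.outer(err, W2) * (1 - H ** 2)        # backprop through tanh
    W2 -= lr * H.T @ err / 4; b2 -= lr * err.mean()
    W1 -= lr * X.T @ dH / 4;  b1 -= lr * dH.mean(axis=0)
print("nonlinear model:", (np.tanh(X @ W1 + b1) @ W2 + b2).round(2))
```

The linear fit provably lands on 0.5 for all four inputs, while the nonlinear network's outputs should end up close to the true [0, 1, 1, 0] pattern.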

“The circuit is robust against damage, can be retrained in seconds, executes learned tasks in microseconds, and dissipates only picojoules of energy across each transistor. This suggests great potential for high-speed, low-power computing in edge systems such as sensors, robot controllers, and medical devices, as well as large-scale manufacturability for implementing and studying emergent learning,” the researchers continue.

Of course, there are problems: in its current form, a solderless breadboard-style prototype, the metamaterial system consumes about ten times more power than state-of-the-art digital machine learning accelerators. But as it scales, Dillavou says, the technology should deliver efficiency gains, along with the ability to drop external memory components from the bill of materials.

The team's findings were published as a preprint on the Cornell University arXiv server.

Main article image courtesy of Felice Macera.


