Systems controlled by next-generation computing algorithms have the potential to produce better and more efficient machine learning products, new research suggests.
Researchers used machine learning tools to create digital twins, or virtual copies, of electronic circuits that exhibit chaotic behavior, and found that the twins could successfully predict how those circuits would behave.
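"Chaotic behavior" here means that tiny differences in a circuit's starting state grow rapidly until its trajectory becomes effectively unpredictable. The study's own circuit is not reproduced below; as a rough illustration only, the following Python sketch integrates the Lorenz equations, a standard textbook chaotic system, and shows two nearly identical starting points drifting far apart.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations, used here as a generic
    stand-in for a chaotic system (the study used an electronic circuit)."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, steps=2000):
    """Fixed-step RK4 integration of the Lorenz system."""
    for _ in range(steps):
        k1 = lorenz(state)
        k2 = lorenz(state + 0.5 * dt * k1)
        k3 = lorenz(state + 0.5 * dt * k2)
        k4 = lorenz(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

# Two almost identical starting points...
a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0, 1.0, 1.0 + 1e-9]))

# ...end up far apart: the hallmark of chaos, and the reason such
# systems are hard to handle with conventional controllers.
print("final separation:", np.linalg.norm(a - b))
```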
Having access to efficient digital twins could have a major impact on how scientists develop future autonomous technologies.
“The problem with most machine learning-based controllers is that they consume a lot of energy or power and are slow to evaluate,” said Robert Kent, lead author of the study.
“It was also difficult to develop traditional controllers because chaotic systems are very sensitive to small changes.”
How digital twins can advance future technologies
The team's digital twin was built to optimize the controller's efficiency and performance, and the researchers found that this reduced power consumption.
It was trained primarily with a machine learning approach called reservoir computing, whose simplicity is what makes this kind of lightweight control feasible; a rough sketch of the idea follows.
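The article does not spell out how reservoir computing works, so the sketch below is only an illustration, with toy data, sizes, and parameters that are assumptions rather than the authors' setup. The core idea: a fixed, random recurrent network (the "reservoir") transforms the input signal, and only a linear readout is trained, via a single least-squares solve, which is why training is so much cheaper than for large machine learning models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step-ahead prediction task (illustrative data, not the study's circuit).
t = np.linspace(0, 60, 3000)
u = np.sin(t) + 0.5 * np.sin(2.3 * t)          # input signal
target = np.roll(u, -1)[:-1]                    # predict the next sample
u = u[:-1]

n_res = 200                                     # reservoir size (arbitrary choice)
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))       # fixed random input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))      # fixed random recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # scale spectral radius below 1

# Run the reservoir: this update is nonlinear, but nothing in it is trained.
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for i, ui in enumerate(u):
    x = np.tanh(W @ x + W_in[:, 0] * ui)
    states[i] = x

# Training is one regularized least-squares solve for the linear readout.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ target)

pred = states @ W_out
print("training RMSE:", np.sqrt(np.mean((pred - target) ** 2)))
```

Because only the readout weights are learned, retraining amounts to redoing a single matrix solve, which is what makes "learning on the fly" plausible for models of this kind.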
The study notes that similarly sized computer chips have been used in devices such as smart refrigerators, but this new computing ability makes the model especially well suited to dynamic systems such as self-driving vehicles and heart monitors, which must adapt quickly to a patient's heartbeat.
Kent explained: “Large machine learning models have to consume a lot of power to process data and arrive at the right parameters, whereas our model and training are so simple that systems can learn on the fly.”
Realizing complex computing algorithms
To test this, the researchers tasked the model with completing complex control tasks and compared its results against those of earlier control techniques and computing algorithms.
The study found that their approach achieved higher accuracy on the tasks than its linear counterpart, with significantly less computational complexity than previous machine learning-based controllers.
The results also showed that although their computing algorithm requires more energy to operate than a linear controller, the trade-off is that, once powered up, the team's model lasts longer and is significantly more efficient than the machine learning-based controllers currently available commercially.
Building on these results, future research could focus on training the model for other applications, such as quantum information processing.
“Not enough people in industry and engineering know about these kinds of algorithms, and one of the big goals of this project is to make more people aware of them,” Kent concluded.