ONNX Runtime On-Device Training — Visual Studio Magazine

Alternatives to Microsoft Deep Learning: On-Device Training for ONNX Runtime

Microsoft has announced on-device training of machine learning models using the open source ONNX Runtime (ORT).

ORT is a cross-platform machine learning model accelerator with a flexible interface for integrating hardware-specific libraries, and it can be used with models from PyTorch, TensorFlow/Keras, TFLite, scikit-learn, and other frameworks.

This open-source project is just one part of the company’s larger AI push that began late last year with the debut of the ChatGPT chatbot from Microsoft partner OpenAI, a push that has revitalized AI and ML development across the Microsoft ecosystem. ORT already powers machine learning models for key Microsoft products and services across Office, Azure, Bing, and dozens of community projects.

According to Microsoft, ORT provides a simple experience for AI developers to run models on a variety of hardware and software platforms, speeding up inference and training not only on servers but also on mobile devices and in the browser. Last week the company announced a new on-device training capability that extends the ORT-Mobile inference offering to enable training on edge devices, letting developers take an inference model and train it locally, on-device, with data that never leaves the device, in order to improve the user experience.
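The workflow Microsoft describes starts from an existing inference ONNX model that is converted, offline, into a set of training artifacts which are then shipped to the device. The following is a minimal sketch of that offline step using the Python onnxruntime-training package; the model file name and the trainable parameter names are placeholders, and the exact API surface may vary between ORT releases.

    # Offline step (developer machine): turn an inference ONNX model into
    # on-device training artifacts. Requires the onnxruntime-training package.
    import onnx
    from onnxruntime.training import artifacts

    # Placeholder model file and parameter names; substitute your own.
    onnx_model = onnx.load("inference_model.onnx")

    # Train only the final classifier layer on-device; freeze everything else.
    requires_grad = ["classifier.weight", "classifier.bias"]
    frozen_params = [
        init.name
        for init in onnx_model.graph.initializer
        if init.name not in requires_grad
    ]

    # Writes training_model.onnx, eval_model.onnx, optimizer_model.onnx and a
    # checkpoint into the artifact directory.
    artifacts.generate_artifacts(
        onnx_model,
        requires_grad=requires_grad,
        frozen_params=frozen_params,
        loss=artifacts.LossType.CrossEntropyLoss,
        optimizer=artifacts.OptimType.AdamW,
        artifact_directory="training_artifacts",
    )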

According to the company, the new on-device training capabilities will allow application developers to personalize the user experience without compromising privacy, with practical applications falling into two broad categories:

  • Federated Learning: This technique trains a global model on distributed data without compromising user privacy.
  • Personalized Learning: This technique fine-tunes models on the device to create new, personalized models (a minimal code sketch follows the figure below).
High-level workflow for personalization using ONNX Runtime (Source: Microsoft).
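As a rough illustration of the personalization workflow pictured above, here is a minimal sketch of the on-device side using the Python bindings of the training API, assuming the artifacts generated in the earlier snippet have been bundled with the app; the epoch count and local_batches (the user's on-device data) are placeholders, and the method names reflect the onnxruntime-training package at the time of writing.

    from onnxruntime.training import api as ort_api

    # Load the artifacts produced offline; the checkpoint holds the trainable state.
    state = ort_api.CheckpointState.load_checkpoint("training_artifacts/checkpoint")
    model = ort_api.Module(
        "training_artifacts/training_model.onnx",
        state,
        "training_artifacts/eval_model.onnx",
    )
    optimizer = ort_api.Optimizer("training_artifacts/optimizer_model.onnx", model)

    # local_batches stands in for user data that never leaves the device,
    # e.g. a list of (inputs, labels) pairs of NumPy arrays.
    for epoch in range(5):
        model.train()
        for inputs, labels in local_batches:
            loss = model(inputs, labels)   # forward and backward in one call
            optimizer.step()               # apply the gradient update
            model.lazy_reset_grad()        # clear gradients for the next step

    # Persist the updated state so a later session can resume training.
    ort_api.CheckpointState.save_checkpoint(state, "training_artifacts/checkpoint")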

“In contrast to traditional deep learning (DL) model training, on-device training requires efficient use of compute and memory resources,” Microsoft said in a May 31 blog post. “Additionally, the compute and memory configurations of edge devices vary significantly. To cater to these unique needs of training on the edge, we created the on-device training capability to be framework agnostic and built it on top of the existing C++ ORT core functionality.”

“On-device training enables application developers to use the same binary for inference and training. At the end of a training session, the runtime generates an optimized, inference-ready model, which can be used on the device for a more personalized experience. Similarly, for federated learning scenarios where the aggregation happens on the server side, the runtime provides the model difference.”
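In the Python bindings, producing that inference-ready model corresponds to exporting the trained module back to a plain ONNX file; the short sketch below continues from the training loop above, and the graph output name "output" is a placeholder that would have to match the real model's output.

    # After training, strip the training-specific graph pieces and write out a
    # model that a regular ORT inference session can load on the device.
    # "output" is a placeholder for the model's actual graph output name(s).
    model.export_model_for_inferencing("personalized_model.onnx", ["output"])

The exported file can then be loaded with an ordinary ORT inference session, which is what allows the same runtime binary to serve both training and inference, as the quote above notes.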

The company summarizes the main advantages of this approach as follows:

  • Reduced device resource consumption (battery life, power usage, training alongside other apps) with a memory- and performance-efficient local trainer.
  • Optimized binary size to fit the tight constraints of edge devices.
  • A simple API and multiple language bindings make it easy to extend across multiple platform targets (available now: C, C++, Python, C#, Java; coming soon: JS, Objective-C, and Swift).
  • Developers can extend existing ORT inference solutions to enable training at the edge.
  • The same ONNX model and runtime optimization can run across desktop, edge, and mobile devices without redesigning training solutions across platforms.

The development team plans to add support for iOS and web browsers, as well as further optimizations to make the technology more efficient. Developers can also look forward to more detailed instructions and tutorials in the coming months, and Microsoft is soliciting feedback on its GitHub repository.

About the author


David Ramel is an editor and writer at Converge360.




