Microsoft is making sure Azure gets its share of the spotlight at the AI fest that is its Build 2023 developer conference this week.
As companies consider experimenting with and deploying generative AI, they may look to the public cloud or similar scalable compute and storage infrastructure to run things like large language models (LLMs).
Armed with ChatGPT, GPT-4, and other OpenAI systems, Microsoft has been pushing AI capabilities into every corner of its empire for months. Azure is no exception, the Azure OpenAI Service being one example, and after Build, Redmond's public cloud has even more on offer.
High on the list is an expanding partnership with Nvidia, which is scrambling to establish itself as an essential AI technology provider, from GPU accelerators to software. This week alone the chipmaker announced a string of tie-ups, including one with Dell at Dell Technologies World and others with supercomputer makers at ISC23.
Bringing Nvidia resources into Azure
Specifically, Microsoft will integrate software, development tools, frameworks, and pre-trained models from Nvidia's AI Enterprise suite into Azure Machine Learning, creating what Tina Manghnani, product manager for machine learning cloud platform, described as "the first enterprise-ready, secure, end-to-end cloud platform for developers to build, deploy, and manage AI applications, including custom large language models."
On the same day, Microsoft made the Azure Machine Learning Registry generally available: a platform for hosting and sharing machine learning building blocks, such as containers, models, and data, and a tool for integrating AI Enterprise into Azure Machine Learning. AI Enterprise in Azure Machine Learning is also available in a limited technical preview.
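For a sense of how the registry gets used in practice, here is a minimal sketch, assuming the azure-ai-ml v2 Python SDK, of publishing a model to a registry so other workspaces can consume it. The registry name, model path, and asset details are hypothetical placeholders, not anything Microsoft announced.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes

# Point the client at a registry rather than a single workspace,
# so the published asset can be shared across workspaces and regions
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    registry_name="my-shared-registry",  # hypothetical registry name
)

model = Model(
    path="./model",  # local folder holding the trained model artifacts
    type=AssetTypes.CUSTOM_MODEL,
    name="demand-forecaster",
    version="1",
    description="Team-wide model shared via the registry",
)
ml_client.models.create_or_update(model)
```

The notable design choice is in the client construction: targeting a registry instead of a workspace is what turns a private asset into a shared building block.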
"What this means is that customers who have an existing contract and relationship with Azure can take advantage of that relationship: they can consume from the cloud contract they already have to acquire Nvidia AI Enterprise and use it within Azure ML in a seamless, enterprise-grade experience, or separately on the instances of their choice," Manuvir Das, Nvidia's vice president of enterprise computing, told journalists a few days before Build opened.
Isolate your network to protect your AI data
Companies running AI operations in the cloud want network isolation as a key tool to ensure their data is not exposed to other tenants. Microsoft already has features such as private link workspaces and data exfiltration protection, along with a no-public-IP option for the compute resources companies use to train AI models. At Build, the vendor announced managed network isolation in Azure Machine Learning, letting organizations choose the isolation mode that best fits their security policies.
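As a rough illustration, the sketch below, assuming the azure-ai-ml v2 SDK and the then-preview managed-network settings, shows how an isolation mode might be picked when creating a workspace. Subscription, resource group, and workspace names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Workspace, ManagedNetwork
from azure.ai.ml.constants import IsolationMode

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
)

workspace = Workspace(
    name="locked-down-ml",
    location="eastus",
    # Block all outbound traffic except approved destinations;
    # ALLOW_INTERNET_OUTBOUND is the looser alternative
    managed_network=ManagedNetwork(
        isolation_mode=IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND,
    ),
)
ml_client.workspaces.begin_create(workspace).result()
```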
Unsurprisingly, more and more open source tools are entering the AI space. Last year, Microsoft partnered with Hugging Face to provide Azure Machine Learning endpoints powered by the open source company's technology. At Build, the two organizations expanded that relationship.
Hugging Face already offers a curated set of tools and APIs, as well as a huge hub of ML models ready to download and use. Over time, thousands of these models will appear in Redmond's Azure Machine Learning catalog, where customers can access them and deploy them to managed endpoints in Microsoft's cloud.
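To give a flavor of what lives in that hub, here is a minimal sketch that pulls one of its off-the-shelf models directly with the transformers library; it's the same sort of model now being surfaced in the Azure ML catalog, and the specific checkpoint is just an illustrative choice.

```python
from transformers import pipeline

# Download a ready-made sentiment model from the Hugging Face hub
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("Deploying hub models to managed endpoints just got easier."))
```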
Adding foundation model options
Redmond is also making foundation models in Azure Machine Learning available in public preview. Foundation models are powerful, sophisticated pre-trained models that organizations can customize with their own data for their own purposes and deploy as needed.
These models are becoming very important in the cloud because they help organizations build demanding ML-powered applications tailored to their specific requirements without having to train a model from scratch or offload the processing of sensitive customer data.
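As an illustration of that customization step (and not of Azure ML's specific foundation-model tooling), the sketch below fine-tunes a generic pre-trained checkpoint on a small labelled dataset with Hugging Face's Trainer; the public IMDB data stands in for an organization's own records.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Start from a general-purpose pre-trained checkpoint...
checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# ...and a small slice of labelled text standing in for proprietary data
dataset = load_dataset("imdb", split="train[:2000]").map(
    lambda batch: tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    ),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1),
    train_dataset=dataset,
)
trainer.train()  # adapt the general model to the specific task
```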
Nvidia's NeMo framework could help in this space, and the chipmaker has put it to work in its partnership with ServiceNow this month and in Project Helix with Dell this week.
"Over the past few months, as we have been working with large companies on generative AI, we have learned that many of them want to harness the power of generative AI, but run it in their own datacenters or otherwise outside of the public cloud," said Nvidia's Das.
Resources such as open source tools and foundation models promise to reduce complexity and cost, making generative AI accessible to more organizations. ®
