Last week, at the Responsible AI Leadership: Global Summit on Generative AI, co-hosted by the World Economic Forum and AI Commons, I had the opportunity to connect with colleagues around the world who are thinking deeply and taking action on responsible AI. We can gain a lot when we get together, discuss common values and goals, and work together to find the best way forward.
These conversations, and similar ones since, have reminded me of the importance of learning from others and sharing what we learn. Two of the questions I heard most frequently were, “How is Microsoft operationalizing responsible AI?” and “How well is that working in this moment?” Let me answer both.
At Microsoft, responsible AI is the set of steps we take company-wide to ensure our AI systems uphold Microsoft’s AI principles. It’s both a practice and a culture. Practice is how you formally operationalize responsible AI across the company, through governance processes, policy requirements, and the tools and training that support implementation. Culture is how you empower your employees not only to embrace responsible AI, but to actively champion it.
There are three key areas that I consider essential to the responsible AI journey.
1. Leadership must be committed and involved: Starting at the top is not a cliché; it is what makes responsible AI meaningful. At Microsoft, Chairman and CEO Satya Nadella has endorsed the creation of a responsible AI council to oversee our company-wide efforts. The council is co-chaired by Microsoft’s Vice Chairman and President, Brad Smith, to whom I report, and Microsoft’s Chief Technology Officer, Kevin Scott. This joint leadership is central to our efforts, and it underscores Microsoft’s commitment not only to leadership in AI, but to leadership in responsible AI.
The Responsible AI Council meets regularly, bringing together representatives of the core research, policy, and engineering teams dedicated to responsible AI, including the Aether Committee and the Office of Responsible AI, as well as the senior business partners accountable for implementation. I find the meetings both challenging and refreshing. We are grappling with a difficult set of problems, and progress is not always linear, which makes the work challenging. Still, we know we need to confront hard issues head-on and hold ourselves accountable. The collective energy and wisdom of the council’s members keeps the meetings fresh, and I often come away with new ideas that help push our practice forward.
2. Build a comprehensive governance model with actionable guidelines: My team’s primary responsibility at the Office of Responsible AI is to build and coordinate the company’s governance structure. Microsoft started its responsible AI journey about seven years ago, and my office has existed since 2019. Early on, we learned that we needed to create a comprehensive governance model that encouraged engineers, researchers, and policy experts to work shoulder to shoulder in service of our AI principles. No single team, and no single discipline tasked with responsible or ethical AI, could achieve our objectives on its own.
We took a page from our playbooks on privacy, security, and accessibility and built a governance model that embeds responsible AI throughout the company. We have senior leaders tasked with spearheading responsible AI within each of our core business groups, and, for more regular and direct engagement, a large network of Responsible AI “Champions” that we continuously train and grow. Last year, we published the second version of our Responsible AI Standard, our internal playbook for building AI systems responsibly. I hope you will take a look, and that it helps inspire your own organization’s efforts; I welcome any feedback on it as well.
3. Invest in and empower your people: Over the years, we’ve invested heavily in responsible AI, with new engineering systems, research-driven incubation, and, of course, people. Nearly 350 people currently work on responsible AI, just over a third of whom (129, to be exact) are dedicated to it full-time. The remaining employees have responsible AI responsibilities as a core part of their jobs. Our community members work in policy, engineering, research, sales, and other key functions, touching all aspects of our business. Their numbers have grown in step with the growing interest in AI since we launched our responsible AI initiative in 2017.
Going forward, we know we need to invest even more in our responsible AI ecosystem: hiring new and diverse talent, assigning additional people to work on responsible AI full-time, and upskilling more employees across the company. We have a leadership commitment to do just that, and we will share more on our progress in the coming months.
Organizational structures are critical to our ability to meet our ambitious goals, and they have changed over time as our needs have evolved. One recent, high-profile change involved our former Ethics & Society team, whose early work was crucial to getting us to where we are today. Last year, we made two significant changes to our responsible AI ecosystem. First, we made major new investments in the team responsible for the Azure OpenAI Service, which includes cutting-edge technology such as GPT-4. Second, we infused expertise into some of our user research and design teams by moving members of the former Ethics & Society team into those teams. Following these changes, we made the difficult decision to wind down the remainder of the Ethics & Society team, which affected seven people. No decision affecting colleagues is easy, but this one was guided by our experience of the organizational structures most effective at ensuring that responsible AI practices are adopted across the company.
A central theme of our responsible AI program, and of its evolution, is the need to stay humble and keep learning. Responsible AI is a journey and a company-wide commitment. Gatherings like last week’s Responsible AI Leadership Summit remind us that our collective work on responsible AI is stronger when we learn and innovate together. That is why we share artifacts such as our Responsible AI Standard and our Impact Assessment template, along with transparency documentation for customers of the Azure OpenAI Service and the new Bing. The future opportunities for AI are tremendous. Continued cooperation and open exchange among governments, academia, civil society, and industry will be needed to sustain progress toward our common goal: AI that serves people and society.
