Responsible AI in Amazon Web Services: Q&A with Diya Wynn

AI Basics


Diya Wynn

The release of ChatGPT last year showcased the major advances machine learning has made. But how do we ensure that this great power is being used responsibly, without prejudice or malice?

Diya Wynn of Amazon Web Services is a Senior Practice Manager for Responsible AI. Recently, she sat down with The New Stack to discuss all things responsible AI.

At AWS, Wynn has created a customer-facing, responsible AI practice and built a team of individuals from diverse backgrounds, including members of the LGBTQIA+ and various disability communities. Her goal was to bring responsible AI to her AWS customers through her 7 Pillar Framework for Comprehensive and Responsible AI Use.

After dealing with a surge in customer questions, she realized there was room to build a customer-facing department to help develop and implement responsible AI practices.

A lifelong engineer, Wynn knew by the third grade that she wanted to pursue a career in engineering. At a time when not every home had a computer, her high reading and math scores earned her one. She went on to complete her undergraduate education at Spelman College, studied Management of Technology at the NYU Tandon School of Engineering, attended Harvard University's professional school, and studied Artificial Intelligence and Ethics at the MIT Sloan School of Management.

The New Stack: With such a diverse background, why did you choose to focus on ethics over other aspects?

Diya Wynn: Not only as a trained technologist, but as someone who thinks holistically about the world and how we interact with it, I have a voice and a point of view. Technology has a way of shaping and changing the way we interact with the world. This is especially important when we start thinking about the future.

I have two sons. The eldest is in high school and the youngest is in junior high school. Are we prepared for what our children will encounter in the future? I don’t think our education system is keeping pace with technology to prepare our students for tomorrow’s jobs.

I started researching and found three important things. One was data: its relevance, importance, and value, and how it shapes how we engage with our work. The second was artificial intelligence and robotics, and the third was virtual worlds, AR/VR. All of these are driving and shaping the way the world evolves.

And it lacks the voices and perspectives of people who look like me and my sons.

What is responsible AI?

Responsible AI is a holistic approach that provides governance, structures, processes, people alignment, and technical solutions that can be leveraged to address bias, risk, performance, and other concerns.

AWS has a guiding structure and definitions that have evolved over time and across different teams in the organization. Teams work across the business to support a responsible view of AI, but each team has ownership of and responsibility for transparency, accountability, fairness, robustness, privacy, and security. They all have a responsibility to introduce AI responsibly into the services they develop.

What is AWS doing to democratize responsible AI?

AWS has a wide range of strategies. We are committed to turning theory into action. That shows up in how we build our services and in the work I do with customers: enabling them to bring responsible AI practices to life and operationalize them within their organizations.

We invest in education and training to create a more diverse future workforce. There is an AI/ML Scholarship Program that brings underrepresented people into the field and supports research in artificial intelligence and machine learning. We also focus on training and educating the people who are part of the product and machine learning lifecycle, because they need to understand and be aware of potential areas of risk and how to mitigate them.

A final area, from a business perspective, is investing in scientific advances in responsible AI. We make significant investments and continue to work with institutions. Scholarships and research grants offered through the NSF help advance research in responsible AI. We also partner with institutions working to advance standards, all of which contributes to the growth of an ecosystem of people paying attention to this topic.

Let’s talk about bias…

There is a very real understanding that a lack of diversity can create opportunities for bias. The other reality is that we all have biases, right? And sometimes we build those biases into our systems. This is especially true when we look at historical data.

You have to create a structure that reflects this understanding and brings intention to how we approach and address bias, either by eliminating it or by making decisions with conscious awareness of where it exists.

How is that bias addressed?

There are many things we can do. We have to deliberately bring diverse voices into the room. We have to make people and teams aware that this requires education and attention. There are tools, like checklists and persona definitions, to get people thinking about who else might be affected. Are you thinking about the stakeholders and everyone to whom the product or service is offered? Are you incorporating their point of view?

Then, of course, there is the aspect of having diverse people physically present, though I am aware that is a challenge in technology. But let’s be real: when I discuss a project with a customer, I don’t expect them to hire a new team just because we want diversity. One of the values my team brings is that diverse individuals, from diverse backgrounds and disciplines, come together to really help our customers.

Another thing you can do is tap into your Chief Diversity, Equity, and Inclusion (DEI) officer. Organizations invest in and employ DEI officers who are trained to think about bias, understand inclusion, and look for ways in which processes and structures can bring in the representation and perspectives of others.

How receptive are your customers to adopting responsible AI practices?

Customers fall into one of three categories. There are customers who have seen some impact on their systems or acknowledged areas of bias, and some of those examples have been made public. Because of that exposure, they are interested in finding solutions or adopting practices to alleviate the pain.

The second kind of customer really cares about doing the right thing. They understand and are aware of some of the potential risks and want to build systems that their customers trust. They are asking, ‘What can we do?’ and ‘Are there practices we can implement?’

The third group of customers is happily waiting. They are interested, listening, and watching what is happening in their markets and industries as conversations build and technologies and products are released. They are asking questions but waiting for regulation. They are not ready to make the investment, because nothing is forcing them to change or introduce new practices. But that regulation is not far off.

The NIST AI Risk Management Framework was just released in January of this year, which means there are now standards that individuals and businesses will be expected to adhere to. ISO 42001 will be published soon and will include risk management and governance structures. The EU AI Act is due to be signed next year.

NIST Timeline for AI Risk Management Framework.

What do you see as the biggest challenges ahead?

Mindset. The first part of the mindset challenge is that some people tie bias specifically to gender and race and think, “This doesn’t apply to me.” Whether or not your application concerns someone’s gender or race, you need to think about responsible AI.

For example, if a model trained on a commercial dataset is used in a religious or public-sector context, it will not be able to derive insights in the same way, because its commercial bias means there is no data to support that situation. Taking a more holistic and inclusive look is important.

Another mindset challenge is recognizing that representation and diversity matter in the way products are designed, and that diverse perspectives improve business results and outcomes. We know this, and research has proven it, so why isn’t it being done?

We are not getting the diversity we need because it requires a mindset shift, and that is not easy. If we knew things were unfair, and we knew that simply changing them would make them fair, we wouldn’t be having conversations like this; the problem would already be solved. It is more difficult because it involves reshaping how people think.

And finally, there are technology issues that have not yet been fully resolved. Research and investments are ongoing to ensure models are fair and unbiased, support technology holistically, and identify ways to measure inclusion.

With the widespread availability of learning models and AI technology, do you think responsible AI is moving forward or backward?

I think the importance of responsible AI will definitely increase. We’ve been talking about AI, and perhaps in the last five to seven years, there’s been a flurry of conversations flooding us. In some ways, you might say, this is another hype cycle, but I also think it’s an excellent proof of why we need responsible AI.

I don’t know how many times people have had “Oh my gosh!” moments. We’re looking at all this data, and some of it is biased, and people are asking, “Why did I get that result?” Because the data is skewed. So besides collecting lots of data and feeding it back, you have to do something else. That definitely helps keep the conversation moving.
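That “something else” starts with measuring the skew. As a minimal, hypothetical sketch (the loan-decision data and the demographic-parity check below are illustrative assumptions, not an AWS tool or Wynn’s framework), one common first step is comparing outcome rates across groups:

```python
# Illustrative bias check: compare positive-outcome rates across groups.
# The data and the demographic-parity metric are assumptions for this sketch.

def selection_rate(records, group):
    """Fraction of records in `group` that received a positive outcome."""
    rows = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

# Hypothetical decisions from a model trained on historical data.
decisions = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

rate_a = selection_rate(decisions, "A")  # 0.75
rate_b = selection_rate(decisions, "B")  # 0.25
parity_gap = abs(rate_a - rate_b)        # 0.50: a large gap worth investigating

print(f"Selection rates: A={rate_a:.2f}, B={rate_b:.2f}, gap={parity_gap:.2f}")
```

A gap like this does not prove unfairness on its own, but it flags exactly the kind of skewed historical data Wynn describes, so a team can investigate before shipping.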
