Ethical issues surrounding the use of artificial intelligence have been revived by advancements such as ChatGPT, which should serve as a reminder for businesses to focus on responsible AI use.
This week, experts at MIT Technology Review's annual EmTech Digital event focused on the legal implications of AI and how to incorporate responsible AI into business operations. The Biden administration also turned its attention to responsible AI this week, taking action to foster responsible AI innovation.
Regina Sam Penti, a partner at global law firm Ropes & Gray, said that as the use of generative AI explodes, it will be especially important for companies to pay attention to current lawsuits and emerging regulations. AI developers such as Stability AI increasingly face lawsuits over how their models use data. Stability AI offers Stable Diffusion, an AI tool that creates images from text; because the images it generates are based on the work of real artists, the company has drawn several copyright lawsuits.
Engaging the legal system can slow development and deployment, she said, because it forces companies to pause and assess the risks of using data from certain sources.
Although AI developers currently bear the brunt of these legal challenges, companies implementing AI systems should also proceed cautiously and pay close attention to contract negotiations with AI developers to mitigate risk.
During a discussion of the legal implications of AI, Penti said, "Nearly every case we see is directed at the creators of these systems, because they are the ones dealing with the use of data and the model training. If you're building these systems, especially if you're using a lot of data, you could face some liability."
Incorporating AI Responsibly
Diya Wynn, senior practice manager for responsible AI at Amazon Web Services, said during the conference that companies need to focus from the outset on responsible AI use cases that align with their core values. She said AWS defines responsible AI as an operational approach that considers people, processes, and technology to reduce unintended consequences and improve the value of AI models.
According to Wynn, the focus on people in AI's operating model is even more important than the technology itself. When introducing AI systems into a business environment, she said, it is important to include training and education that raise awareness and understanding of where risks can exist and how to minimize them.
Wynn said some of the questions companies need to ask when implementing AI systems are: "Who needs to be involved? How do we think about skills and upskilling? What do we do in terms of process? Do we have the right governance structure to support our efforts?"
Wynn said most of the challenges organizations face with responsible AI stem from failing to do the upfront work of answering basic questions: how the AI will be used, what data the systems can access, and how the AI has been trained and tested.
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she reported on crime and education for the Wilmington Star News and the Wabash Plain Dealer.