On Thursday, the AWS Financial Services Symposium in New York hosted a panel on “Responsible AI,” where three stakeholders discussed how to deploy the technology with an equitable lens.
Michael Kearns, Amazon Scholar and Professor in the Department of Computer and Information Science at the University of Pennsylvania, moderated the discussion with Michael Gerstenhaber, Anthropic's Vice President of Product, and Johnna Powell, Managing Director of Technology Research and Innovation at the Depository Trust & Clearing Corporation (DTCC).
Discussing the challenges and opportunities of deploying responsible AI in the financial services industry, Kearns said his role at AWS also includes overseeing operational practices around responsible AI. He said he and his team have put in place technical and operational procedures to audit training models for concerns like demographic bias and privacy needs. “Now with generative AI, you have a whole new set of concerns that can come up, like hallucinations and issues that are not privacy issues but are close to it, like intellectual property,” Kearns said.
Finding a path to responsible AI can be a highly collaborative effort within an organization. Powell said DTCC began its generative AI effort about a year ago, tasked with defining a strategy for moving forward. “We did extensive research across DTCC, doing internal research, external research, surveys, etc.,” she said. “We came up with about 400 use cases, all of which we had to vet.” After vetting those use cases, the team evaluated them against criteria like feasibility and narrowed them down to a handful of what she calls “strong use cases.”
“Productivity optimization is where we focused the most,” Powell said, including developer productivity and legacy code modernization. She said many of the use cases shared a central theme of integration: taking data, summarizing it, and putting it into an easy-to-understand format.
Anthropic, whose founders include former OpenAI researchers, is a San Francisco-based startup focused on AI safety research, including understanding AI risks and developing trustworthy AI systems. “My main goal is to help engineers use generative AI safely,” Gerstenhaber said.
Policymakers and regulations can play a role in shaping responsible AI, but the policy environment around AI is uneven. “I come from the digital asset world, where regulation, or the lack of it, can really kill innovation progress,” Powell said. “Japan is pretty advanced in its regulatory environment for digital assets,” she added, noting that Japan has already moved to draft guidelines on AI and copyright issues. “We're still struggling in the U.S.,” Powell said, adding that she hopes for clarity in domestic regulations in these areas, but that policies shouldn't be so strict that they halt progress.
Kearns asked about AI guardrails and training, specifically efforts to curb the bias that can be built into models during training, noting that while many models are getting better on bias, the improvement doesn't necessarily come from how they're trained.
Gerstenhaber touted the value of Anthropic's methodology, Constitutional AI, which holds models to a written list of principles, or constitution, that their responses should follow. He said that while steps can be taken to collect training data deemed “safe,” Constitutional AI can also rapidly perform automated evaluations to ensure the AI is behaving responsibly. “I'm very optimistic about the idea that we can enforce these things in training and provide that level of safety as a service,” he said.
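The core mechanic behind an approach like this is a critique-and-revise loop: a draft response is checked against each written principle, and revised when a principle is violated. The sketch below illustrates that loop only in toy form; the principles, the `toy_model` stub, and all function names are assumptions for illustration, not Anthropic's actual implementation or API.

```python
# Toy sketch of a Constitutional-AI-style critique-and-revise loop.
# The constitution, the stub model, and the keyword checks are all
# illustrative assumptions, not a real system.

CONSTITUTION = [
    "Do not reveal personal data.",
    "Avoid biased or demeaning language.",
]

def toy_model(prompt: str) -> str:
    """Stand-in for a real language-model call (hard-coded replies)."""
    if "critique" in prompt.lower():
        return "The draft mentions an email address, violating principle 1."
    if "revise" in prompt.lower():
        return "Contact the user through the official support channel."
    return "Email the customer at jane@example.com with the offer."

def constitutional_revision(user_request: str) -> str:
    """Draft a reply, critique it against each principle, revise on violation."""
    draft = toy_model(user_request)
    for principle in CONSTITUTION:
        critique = toy_model(
            f"Critique this draft against the principle '{principle}': {draft}"
        )
        if "violating" in critique:  # crude violation detector for the sketch
            draft = toy_model(
                f"Revise the draft to satisfy '{principle}': {draft}"
            )
    return draft

print(constitutional_revision("How should we contact the customer?"))
```

The same loop structure is what makes the automated evaluations Gerstenhaber describes possible: because the critique step is itself a model call, it can be run at scale without human reviewers in the loop.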
Powell ended with a reality check on responsible adoption of AI that might temper some of the assumptions surrounding the technology, while remaining adamant about its inevitable presence within organizations. “I would say to people, AI and these training models aren't coming to take your job, but someone using AI might,” she said. “It's about giving the right tools to the right people and democratizing AI across the company.”
