A Human Participatory Approach Is Key to Building Trust in AI, Experts Say – MeriTalk


Federal experts said today that adopting a human-in-the-loop approach is critical to building trust in AI, but that ensuring those humans are properly trained requires having data and AI literacy programs in place as well.

At ATARC’s Artificial Intelligence and Data Analytics Breakfast Summit on April 6, federal AI and data experts said a human-in-the-loop approach could help build trust in ethical AI technologies at their institutions, and shared how that approach works at their own agencies.

Scott Beliveau, chapter director of the Advanced Enterprise Analytics Division at the U.S. Patent and Trademark Office, explained how trust is critical to AI, comparing the adoption of new technologies to the way people typically use navigation and mapping apps such as Google Maps and Apple Maps.

At some point, Beliveau explained, we learned to trust the app and stopped questioning it when it told us to turn left instead of right. The new question is whether AI has reached the point where people are willing to give it that same chance.

“At our agency, we kind of give [AI] that opportunity, and the way we’re doing it, as with driving, is by keeping that person in the loop,” Beliveau said. “Decision making is completely in the driver’s seat … and that’s how we’re trying to mitigate that bias and build that trust.”

Experts agreed that keeping humans in the loop helps, but also stressed the importance of training them to understand AI deeply.

Suman Shukla, director of the Data Management Section of the U.S. Copyright Office, said: “If you don’t know where the whole AI process is going, or if you don’t understand your data and how it’s used, that’s going to have a big impact.”

“If there’s a human in the loop and that human doesn’t know what they’re doing, or worse, knows very little about what they’re doing, then functionally there is no human in the loop at all; or worse, that human becomes the source of the problems,” added Anthony Boese, interagency program manager at the Department of Veterans Affairs (VA) National AI Institute.

Boese added that he fears that if human-in-the-loop approaches to AI technology become too routine or rudimentary, people will find ways to circumvent them. He therefore encouraged government agencies to build systems that remain accessible throughout.

“When I build systems, I try to make sure that humans have the opportunity to walk around and tap any part of the system when they need to. It’s like a barrel at a distillery being sampled as it matures,” he said. “Complete access by informed individuals is the best way.”

According to Chakib Chraibi, chief data scientist for the Department of Commerce’s National Technical Information Service, government agencies need best practices to continue building trust in human-in-the-loop approaches.

“Humans in the loop are very important, but we need to develop some best practices beyond existing ones,” said Chraibi.

“They give us a good baseline, but they’re clearly not good enough because they don’t solve the problem of discriminatory effects in applications,” he added. “It requires a concerted effort and best practices.”

