As interest in generative artificial intelligence (AI) systems grows around the world, researchers at the University of Surrey have created verification software that can check how much information an AI has gathered from an organization’s digital databases.
Surrey’s verification software can be used as part of a company’s online security protocols, helping organizations understand whether an AI has learned too much or accessed sensitive data.
The software can also determine whether an AI is able to identify and exploit flaws in software code. In online gaming, for example, it could check whether an AI has learned to always win at poker by exploiting a coding error.
Solofomampionona Fortunat Rajaona, Ph.D., a Research Fellow in formal verification of privacy at the University of Surrey and the lead author of the paper, said:
“In many applications, such as self-driving cars on highways and robots in hospitals, AI systems interact with each other and with humans. Knowing what these systems actually know is a problem, and I’ve spent years trying to find a working solution for it.
“Our verification software can infer how much an AI can learn from its interactions, whether it knows enough to cooperate successfully, and whether it knows too much and so breaks privacy. Through the ability to verify what an AI has learned, we can give organizations the confidence to unleash the power of AI in secure settings.”
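Conceptually, checks of this kind can be framed in terms of possible-worlds (epistemic) models: an observer "knows" a fact if that fact holds in every world consistent with what the observer has seen. The sketch below is purely illustrative and is not the Surrey tool; the secret, the observation function, and the leakage measure are assumptions made for the example.

```python
# Illustrative sketch only (not the Surrey software): a toy possible-worlds model
# for asking whether an AI's observable behaviour already pins down a secret.

from itertools import product

# Hypothetical secret: a 4-bit access code the AI should not reveal.
WORLDS = list(product([0, 1], repeat=4))


def observation(world):
    """What an outside observer sees of the AI's behaviour in this world.
    As an assumption, the AI's replies happen to expose the parity of the
    secret and its first bit."""
    return (sum(world) % 2, world[0])


def indistinguishable(w1, w2):
    """Two worlds look identical to the observer if they yield the same observation."""
    return observation(w1) == observation(w2)


def knows_secret(actual_world):
    """The observer 'knows' the secret if every world consistent with what it
    observed carries the same secret value as the actual world."""
    candidates = [w for w in WORLDS if indistinguishable(w, actual_world)]
    return all(w == actual_world for w in candidates)


def remaining_candidates(actual_world):
    """How many secrets are still consistent with the observations --
    a crude measure of how much has leaked."""
    return sum(1 for w in WORLDS if indistinguishable(w, actual_world))


if __name__ == "__main__":
    secret = (1, 0, 1, 1)
    print("Observer knows the exact secret:", knows_secret(secret))
    print("Secrets still consistent with observations:", remaining_candidates(secret))
```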
The Surrey team’s verification software won the Best Paper Award at the 25th International Symposium on Formal Methods.
Professor Adrian Hilton, Director of the Institute for Human-Centric AI at the University of Surrey, said:
“Over the past few months, there has been tremendous public and industry interest in generative AI models, fueled by advances in large language models such as those behind ChatGPT. Underpinning these systems with verifiable safety and security is essential, and this research is an important step toward preserving the privacy and integrity of the datasets used in training.”
More information: https://openresearch.surrey.ac.uk/esploro/outputs/99723165702346