Policymakers and regulators need to combine several elements to create a healthy environment for AI innovation. One is proportionality: regulation should match the severity of the potential harm. “In some applications, even if something goes wrong, the consequences may not be all that serious. In the case of electricity, the consequences can be very serious indeed. A power outage can be deadly,” said Jonathan Thirlwell, head of emerging technologies at British energy regulator Ofgem.
Having the right infrastructure in place is also part of responsible AI. Dr. Andrew Richards, Director of Research Computing Services at Imperial, explained how the university is reimagining high-performance computing to support AI more sustainably. “We’ve restructured our computing infrastructure so that it delivers more computing power while using less energy for cooling. That saves us money and allows us to spend more time computing.”
But again, it is not only technical performance that matters in policy; human factors do too. “Are the right competencies in place within the organization to help people understand how to deploy the tools effectively?” Thirlwell asked.
Professor Alessandra Russo, co-director of Imperial’s School of Convergence Sciences, warned against standardizing on “good enough” automated decisions in areas where accuracy and human judgment still matter. “The big risk that keeps me awake is that, by tacitly embedding this technology into society, we could change our social values. To me, that is the big problem and the big risk. We have seen what went wrong with social networks, and there is no way back.”
