Axios technology reporter Ashley Gold discusses new warnings from AI sector leaders calling for a global framework for AI regulation. They say the technology poses a risk of "extinction" if it is not regulated.
video transcript
Brad Smith: AI seems to be all the rage right now, and companies want to actively participate in the hype. The technology was mentioned on more than 100 earnings calls this quarter.
Leaders in this space are making big bucks. Shares of Nvidia, C3.ai, and Palantir are each up more than 100% so far this year. But the boom in interest and investment has sounded alarm bells for industry veterans.
More than 300 scientists and technology leaders issued a stark warning yesterday: AI poses an extinction risk, and addressing it should be a global priority. Signatories include executives from OpenAI, Anthropic, Microsoft, and Google DeepMind.
Axios technology reporter Ashley Gold joins us for a deeper dive into the AI regulation drive. Ashley, it's nice to have you here this morning. Now, you've been covering the AI space and the reactions and objections from some key executives. What are they concerned about now?
Ashley Gold: Executives are largely unanimous on the need for global regulation of AI. They want regulation both at the U.S. federal level and globally, so that there is at least some agreement on basic principles: guardrails and a framework for what the rules should be. I think it's about the risks these AI systems pose, and that's how we should think about ChatGPT and the generative AI we've seen explode over the last six months.
Julie Hyman: So it's striking to hear warnings from these industry leaders that we need to prepare for this issue the same way we prepare for a pandemic, even as these same companies continue developing the technology.
Ashley Gold: That's right.
Julie Hyman: So everything is very conceptual. Does anyone have a plan for how to actually prepare?
Ashley Gold: To be honest, I agree with you that this is purely conceptual. It's very difficult for the average person to read warnings like that and think, "Okay, now I know what the risks are and what I should watch out for." It's too vague. I think these AI leaders would do a better job if they explained how advances in generative AI and other advanced AI compound problems we already see in society — inequality, misinformation — and how something like that affects our daily lives across the board.
Rather than alluding to some future where AI is everywhere, I think it makes more sense to just talk about how AI affects our current problems and the current online environment, because that's something people can actually wrap their heads around.
Brad Smith: For Washington regulators, who have already struggled to climb the learning curve on everything from the metaverse to social media and even blockchain technology, how well positioned are they here? They don't always have a deep understanding of how artificial intelligence should be thought of, how its regulation should proceed, and where the productivity gains lie.
– [SNEEZES]
Brad Smith: –Hopefully we'll still be able to see some framework within which business and society can operate.
Ashley Gold: Absolutely. AI leaders saw what happened when social media really exploded: we ended up with no real rules for social media, no federal privacy law, no laws about what companies can do with your information. There's been a lot of backlash, social media companies have gotten into a lot of trouble, and they've faced a lot of congressional hearings as a result.
AI leaders see that as inevitable, and they're saying: let's fix the problem now. Let's decide the rules of the road together, so you don't get mad at us later and say we broke the rules, did something illegal, or went too far. We want to work with you now to figure out exactly how to proceed, so you can embrace what AI is doing and we can all move forward together — the opposite of how things went with social media, where regulators looked at it after the fact and argued that rules should have been put in place earlier. Those are the lessons learned, and that's what they're trying to avoid.
Julie Hyman: They're trying. I don't know how optimistic to be. So the EU has already taken action on this, right? Some legislators here have made proposals — I think Colorado Senator Michael Bennet has some ideas. Tell us about them and whether they're actually gaining momentum.
Ashley Gold: So Michael Bennet proposed creating a new body to regulate digital platforms, and especially artificial intelligence. He doesn't believe the existing agencies are equipped for the job: the Federal Trade Commission and the Justice Department have argued that they don't have the necessary resources or the highly specialized expertise to regulate AI.
So it's a noble idea, but I think it's difficult to set up a new agency, especially in this political environment. Over the last 20 years or so, you've seen how Republicans talk about the CFPB and agencies like it. New agencies cost money, and a new agency would allow the government to intervene further in the private market.
So while Republicans and Democrats do agree on the need to regulate AI, there will be disagreements about how much money can actually be spent on research and enforcement, and about how far the government should go. So I think it will be very difficult.
Julie Hyman: Oh well, we'll see. Fingers crossed, I guess. Ashley Gold, technology reporter at Axios. Thank you very much — we appreciate it.
Ashley Gold: Thank you.
