Congress really wants to regulate AI, but no one seems to know how

In February, 2019, OpenAI, a little-known artificial-intelligence company, announced that it would not release its large-scale language-model text generator, GPT-2, to the public, “due to concerns about malicious applications of this technology.” Among the dangers, the company said, were misleading news stories, online impersonation, fake or fraudulent social-media content, and the automated creation of spam and phishing content. Accordingly, OpenAI suggested, “Governments should consider expanding or initiating efforts to more systematically monitor the social impact and diffusion of AI technologies and measure progress in the capabilities of such systems.”

Four years after that warning, members of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law met this week for a hearing titled “Oversight of AI: Rules for Artificial Intelligence.” As with other technology hearings on the Hill, it came after a new technology with the power to fundamentally reshape our social and political lives was already in circulation. Like many Americans, lawmakers grew alarmed about the pitfalls of large-scale language-model artificial intelligence in March, when OpenAI released GPT-4, the latest and most sophisticated version of its text generator. At the same time, the company added GPT-4 to ChatGPT, the chatbot it launched in November, which uses the model to answer questions in a conversational fashion, though, given GPT’s tendency to make things up, not necessarily truthfully.

Despite that unreliability, ChatGPT became the fastest-growing consumer application in history, reaching 100 million monthly users by the beginning of this year, within two months of its launch; its monthly page visits now exceed one billion. OpenAI has also released DALL-E, an image generator that creates original images from descriptive verbal prompts. Like GPT, DALL-E and other text-to-image tools can blur the line between reality and invention, a capacity that increases our susceptibility to deception. Recently, the Republican Party released its first fully AI-generated attack ad; it displays what appear to be actual, dystopian images of a second term of the Biden Administration.

Three experts testified at the Senate hearing: Sam Altman, the CEO of OpenAI; Christina Montgomery, the chief privacy and trust officer at IBM; and Gary Marcus, a professor emeritus at New York University and an AI entrepreneur. But it was Altman who drew the most attention. Here was the head of the company behind one of the tech industry’s hottest products, a technology with the potential to transform the way business is done, the way students learn, the way art is made, and the way humans and machines interact, and what he told the senators was this: “OpenAI believes regulation of AI is essential,” he said in his prepared testimony, and the company is “eager to help policymakers as they decide how to promote it.”

Senator Dick Durbin, of Illinois, called the hearing “historic,” because he could not recall executives ever coming before Congress and “begging” it to regulate their products. But, in fact, this was not the first time a tech CEO had sat at a congressional hearing and called for more regulation. Most notably, in 2018, after the Cambridge Analytica scandal, in which Facebook allowed a pro-Trump political consultancy to access the personal information of nearly 90 million users without their knowledge, Facebook’s CEO, Mark Zuckerberg, told several of these same senators that he was open to more government oversight, a position he reiterated the following year in an op-ed in the Washington Post: “I think governments and regulators need a more active role.” (At the same time, Facebook was paying lobbyists millions of dollars a year to fend off government regulation.)

Like Zuckerberg, Altman described the guardrails his company already employs, such as training its models to reject certain “antisocial” queries, before appealing for more regulation. One such query, which I recently put to ChatGPT, asked it to write the code for 3-D printing a Glock; it refused. (It did, however, write a script for a 3-D-printed pachinko machine, adding, “Before sending the code out, I would like to stress that the creation and use of this device should be done responsibly and legally.”) OpenAI’s usage policies also prohibit people from using its models to create malware, generate child-sexual-abuse imagery, plagiarize, or produce political-campaign materials, among other things, though it is not clear how the company intends to enforce those rules. “If we discover that your product or usage doesn’t follow these policies, we may ask you to make the necessary changes,” the policies state, a largely reactive formula that lets OpenAI act after a violation has occurred rather than prevent one in the first place.

In his opening statement at the hearing, the subcommittee’s chair, Senator Richard Blumenthal, of Connecticut, struck a stern note. “AI companies should be required to test their systems, disclose known risks, and allow access for independent researchers,” he said, adding, “If AI companies and their customers cause damage, they should be held accountable.” Blumenthal had introduced his remarks with a recording of himself making the case for regulation, to substantiate his claims about potential harms, except that he had never actually uttered those words: both “his” voice and “his” sentences had been generated by artificial intelligence. The effect was chilling, especially for the politicians in the room.

Figuring out how to assess harm and determine liability may be as difficult as figuring out how to regulate a technology that is advancing too quickly and, unintentionally, breaking everything in its path. In his testimony, Altman floated the idea of Congress creating a new government agency tasked with licensing what he called “powerful” AI models (though how that term would actually be defined remains an open question). While this might not seem like a bad idea at first glance, it could also be a self-serving one. Clem Delangue, the CEO of the AI startup Hugging Face, tweeted, “When you need a license to train a model . . .” His point was that a licensing requirement would entrench OpenAI, which is already well ahead of its competitors, solidifying its position as a first mover while curbing new entrants to the field and slowing the development of new language models.

Were that to happen, it would not only give companies such as OpenAI and Microsoft (which uses GPT-4 in many of its products, including its Bing search engine) an economic edge; it could also damage the free flow of information and ideas. Gary Marcus, the professor and AI entrepreneur, told the senators, “There is a real risk of a kind of technocracy combined with oligarchy, where a small number of companies influence people’s beliefs” through “data that we don’t even know about.” He was referring to the fact that OpenAI and other companies have kept secret the data on which their large language models are trained, which makes it impossible to determine their inherent biases or to truly evaluate their safety.

Senator Josh Hawley, of Missouri, suggested that the most immediate danger of LLMs such as ChatGPT is their ability to manipulate voters. “That’s one of the areas I’m most concerned about,” Altman told him. “These models have a more general ability to manipulate, to persuade, to deliver a sort of interactive, one-on-one disinformation.” And, as the models keep getting better, he said, “I think this is an important concern.”

The most direct way to quell that concern would be for OpenAI to take the lead and pull its own LLM from the market until it is no longer capable of manipulating voters, spreading disinformation, or otherwise undermining the democratic process. That, in Senator Durbin’s words, truly would be “historic.” It was not, however, on offer at the hearing. Instead, much of the discussion focused on which regulatory bodies, if any, should be created, who should fill those roles, and whether such oversight could be internationalized. It was a fascinating exercise in imagining the future, one that ignored the dangers of the present. Senator Blumenthal told his colleagues, “Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment.” In a world driven by technology, it does not take artificial intelligence’s predictive powers to see that lawmakers, for all their curiosity and bipartisan courtesy, are missing this moment, too. ♦




