It’s time for regulators to crack down on AI like ChatGPT and GPT-4

With new AI systems coming at us at breakneck speed, it may seem that there is nothing we can do to slow them down long enough to make sure they are safe.

But that’s not true. There are concrete things regulators can do right now to prevent tech companies from releasing dangerous systems.

A new report from the AI Now Institute, a research center studying the social impact of artificial intelligence, provides a roadmap specifying exactly which steps policymakers should take. Thanks to the government experience of its authors, Amba Kak and Sarah Myers West, it’s refreshingly pragmatic. Both are former advisers to Federal Trade Commission Chair Lina Khan, and they focus on what regulators can realistically do today.

The big argument is that if we want to curb the harms of AI, we need to curb the concentration of power in Big Tech.

Building state-of-the-art AI systems requires resources, such as vast amounts of data and enormous computing power, that few companies currently have. Those companies spend millions of dollars lobbying governments. They have also become “too big to fail,” with even governments growing dependent on their services.

This creates a situation where a few companies get to set the terms for everyone. They can build highly consequential AI systems and release them how and when they want, with little accountability.

“A handful of private actors have gained power and resources comparable to nation-states while developing and diffusing artificial intelligence as a critical social infrastructure,” the report notes.

What the authors emphasize is the absurdity, hiding in plain sight, of how much power has been quietly ceded to a small number of unelected actors.

Given the risks posed by systems such as ChatGPT and the GPT-4-powered Bing, including the risk of spreading disinformation that could undermine democratic societies, it is striking that companies like OpenAI and Microsoft were able to release them at their own discretion. OpenAI’s mission, for example, is to “ensure that artificial general intelligence benefits humanity as a whole,” yet so far it is the companies, rather than the general public, that have defined what benefiting humanity as a whole entails.

The report says it’s time to take power back from these corporations, and it recommends several strategies for doing so. Let’s break them down.

Concrete strategies for controlling AI

One of the absurdities of the current situation is that when AI systems cause harm, it falls to researchers, investigative journalists, and the general public to document that harm and demand change. That means society is always carrying a heavy burden, desperately playing catch-up after the fact.

Therefore, the report’s top recommendation is to put the burden on companies themselves to prove that their systems do no harm. Just as pharmaceutical companies must prove to the FDA that a new drug is safe enough to bring to market, tech companies should have to prove their AI systems are safe before releasing them.

This would be a significant improvement on existing efforts to improve the AI landscape. Take the burgeoning “auditing” industry, in which third-party evaluators look under the hood to bring transparency to how algorithmic systems work and to root out bias and safety issues. Auditing is a good step, but the report cautions against making it the primary policy response, because doing so treats “bias” as a purely technical problem with a purely technical solution.

But bias also has to do with how AI is used in the real world. Take facial recognition. “It is not social progress to make black people equally visible to software that will inevitably be further weaponized against us,” Zoé Samudzi pointed out in 2019.

Once again, the report reminds us of something that should be obvious but is often overlooked: instead of taking an AI tool as a given and asking how to make it fairer, we should start by asking whether the tool needs to exist at all. In some cases the answer is no, and the correct response is a moratorium or a ban, not an audit. Pseudo-scientific “emotion recognition” or “algorithmic gaydar” systems, for example, should not be deployed at all.

The tech industry is agile and often switches tactics to suit its goals. Sometimes companies go from resisting regulation to claiming to support it, as we saw when a chorus of voices called for a ban on facial recognition: some companies came out in favor of regulation, but only in the form of audits of the technology, a much weaker position than an outright ban on police use.

As such, regulators need to keep an eye on these maneuvers and be prepared to pivot if their approach is co-opted or hollowed out by industry, the report says.

Regulators also need to get creative, drawing on the various tools in their policy toolbox to control AI, even tools that are not typically used together.

When people talk about “AI policy,” they sometimes think of it as separate from other policy areas such as data privacy. But “AI” is just a combination of data, algorithms, and computing power. Data policy is therefore AI policy.

With that in mind, approaches to limiting data collection can be seen not only as protecting consumer privacy but also as a mechanism for mitigating some of the riskiest AI applications: limiting the data supply limits what can be built.

Likewise, talking about competition and antitrust law may not feel like talking about AI. But antitrust laws are already on the books, and the Biden administration has signaled that it is prepared to apply them boldly and imaginatively to target the concentration of power among AI companies.

Ultimately, the biggest hidden truth the report reveals is that humans control which technologies get deployed and when. Recent years have seen moratoriums and bans on facial recognition technology; further back, we imposed moratoriums and clear prohibitions in the field of human genetics. Technological inevitability is a myth.

“Nothing is inevitable when it comes to artificial intelligence,” says the report. “Only when we stop seeing AI as synonymous with progress will the public be able to control the trajectory of these technologies.”
