AI is an example of the ‘free rider’ problem – here’s why that points to regulation

Tim Juvshik, Clemson University

On March 22, 2023, thousands of researchers and technology leaders – including Elon Musk and Apple co-founder Steve Wozniak – published an open letter calling for a slowdown in the artificial intelligence race. Specifically, the letter recommends that labs pause, for at least six months, the training of any technology more powerful than OpenAI’s GPT-4, the most sophisticated generation of today’s language-generating AI systems.

Sounding the alarm about the risks posed by AI is nothing new; researchers have warned about the dangers of superintelligent machines for decades. There is still no consensus about the likelihood of creating autonomous AI systems that match or exceed humans at most economically valuable tasks. But it is clear that current AI systems already pose plenty of dangers, from racial bias in facial recognition technology to the increased threat of misinformation and student cheating.

The letter calls for industry and policymakers to cooperate, but there is currently no mechanism for enforcing such a pause. As a philosopher who studies the ethics of technology, I’ve noticed that AI research is a prime example of the “free rider problem.” I’d argue that this should guide how societies respond to AI’s risks – and that good intentions alone won’t be enough.

Hopping a free ride? Hulton-Deutsch Collection/CORBIS/Corbis via Getty Images

Riding for free

Free riding is a common consequence of what philosophers call “collective action problems.” These are situations in which, as a group, everyone would benefit from a particular action, but, as individuals, each member would benefit from not doing it.

Such problems most commonly involve public goods. For example, a city’s residents have a collective interest in funding a subway system, which requires each of them to pay a small amount through taxes or fares. Everyone would benefit, yet it is in each individual’s interest to save money and avoid paying their fair share. After all, they can still enjoy the subway as long as most other people pay.

Hence the “free rider” problem: some individuals won’t contribute their fair share but will still get a “free ride” – literally, in the case of the subway. If every individual failed to pay, though, no one would benefit.

Philosophers tend to argue that free riding is unethical, since free riders benefit from something without paying their fair share of its costs. Many philosophers also argue that free riders fail in their responsibilities under the social contract – the collectively agreed-upon principles of cooperation that govern a society. In other words, they fail in their duty to be contributing members of society.

Pause or continue?

Like the subway, AI is a public good, given its potential to complete tasks far more efficiently than human operators: everything from diagnosing patients by analyzing medical data, to taking on high-risk jobs in the military, to improving mining safety.

But both its benefits and dangers will affect everyone, even people who have never personally used AI. To mitigate AI’s risks, everyone has an interest in the industry’s research being conducted carefully and safely, with proper oversight and transparency. For example, misinformation and fake news already pose a serious threat to democracies, and AI could exacerbate the problem by spreading “fake news” faster and more effectively than people can.

A phone screen displays a statement from Meta’s head of security policy warning about a fake video of Ukrainian President Volodymyr Zelensky. Olivier Douliery/AFP via Getty Images

But even if a handful of tech companies voluntarily halted their experiments, other corporations would have a monetary interest in continuing their own AI research, allowing them to get ahead in the AI arms race. What’s more, a company that voluntarily paused its AI experiments would let others take a free ride: they would eventually reap the benefits of safer, more transparent AI development along with the rest of society.

OpenAI CEO Sam Altman has acknowledged that the company is scared of the risks posed by its chatbot system, ChatGPT. “We’ve got to be careful here,” he said. “I think people should be happy that we are a little bit scared of this.”

In a statement published on April 5, 2023, OpenAI said it believes that powerful AI systems need regulation to ensure rigorous safety evaluations, and that it would actively engage with governments on the best form such regulation could take. Nevertheless, OpenAI is continuing the gradual rollout of GPT-4, and the rest of the industry is continuing to develop and train advanced AIs.

The time is ripe for regulation

Decades of social science research on collective action problems has shown that, where trust and goodwill are insufficient to prevent free riding, regulation is often the only alternative. Voluntary compliance is precisely the condition that creates free-rider scenarios – and government action is sometimes the way to nip them in the bud.

Further, such regulations must be enforceable. After all, would-be subway riders might be unlikely to pay the fare unless there were a threat of punishment.

Take one of the most dramatic free-rider problems in the world today: climate change. As a planet, we all share a compelling interest in maintaining a habitable environment. But in a system that allows free riders, the incentives for any one country to actually follow greener guidelines are slim.

The most comprehensive global agreement on climate change to date, the Paris Agreement, is voluntary, and the United Nations has no recourse to enforce it. Even if the European Union and China voluntarily limited their emissions, for example, the United States and India could “free ride” on those carbon reductions while continuing to emit.

World leaders celebrate the adoption of a historic global warming agreement at the United Nations’ COP21 climate change conference in 2015. François Guillot/AFP via Getty Images

A global challenge

Likewise, the free-rider problem grounds the argument for regulating AI development. In fact, climate change is a particularly close analogy, since neither the risks posed by AI nor greenhouse gas emissions are confined to a program’s country of origin.

Moreover, the race to develop more advanced AI is international. Even if the U.S. introduced federal regulation of AI research and development, China and Japan could free ride and continue their own domestic AI programs.

Effective regulation and enforcement of AI, as with climate change, would require global collective action and cooperation. In the U.S., strict enforcement would require federal oversight of research and the power to impose hefty fines on, or shut down, noncompliant AI experiments to ensure responsible development – and, in extreme cases, to bring criminal charges.

Without enforcement, though, there will be free riders – and free riders mean the AI threat won’t abate anytime soon.


Tim Juvshik, Visiting Assistant Professor of Philosophy, Clemson University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


