There is no such thing as pausing AI research

In March, an open letter from the Future of Life Institute called for a six-month pause on training AI models more powerful than GPT-4, the model that underpins tools such as ChatGPT. The signatories, who included prominent figures in tech and academia, stipulated that governments should step in and impose a moratorium if the pause could not be enacted quickly.

Some believe the letter was simply a ruse to attract attention, and possibly even investment. Whatever the intent, the result has been more confusion around the topic of AI safety. In fact, the request for such a pause is questionable, and the pause itself is impractical.

Yes, we need to calmly discuss the realities and risks of artificial intelligence. But let’s set the letter aside for a moment and consider why a pause could end up being inconsequential, or even counterproductive.

Concerns about OpenAI and its current dominance of the AI space are justified. When GPT-4 was released in March 2023, it fell short of the transparency many researchers had hoped for. OpenAI chose not to publicly disclose details about the dataset, methodology, architecture, or even the size of GPT-4, citing safety and competitive-advantage concerns. Yet GPT-4 would not have been possible without the many prior discoveries and innovations shared publicly by researchers in the field.

The letter called for transparency, but a pause would not rescind OpenAI’s decision to keep those details confidential. Pausing the creation of more powerful models would simply leave us, six months later, in the same darkness we are in now.

The letter specifically addressed malicious uses of language models, including their potential to generate disinformation. In January 2023, I published a study on exactly this subject, concluding that GPT-3-scale models can already be used to create content designed for malicious purposes such as phishing, fake news, fraud, and online harassment. Pausing the creation of GPT-5, therefore, does nothing to prevent that exploitation.

Another potential reason for a pause stems from fears that machines are attaining true intelligence, evoking Skynet and other dystopian science fiction. A paper published by Microsoft, “Sparks of Artificial General Intelligence: Early Experiments with GPT-4,” describes a number of experiments demonstrating emergent properties in the model that are considered a step toward machine intelligence.

These experiments were performed on an internal version of the GPT-4 model, one that had not undergone so-called fine-tuning, the process of training a model to be safer and more accurate. The researchers found that the final model available to the public could not replicate all of the experiments described in the paper. It seems that the fine-tuning process somehow degrades the model, making it worse at tasks that require creativity and intelligence.
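For readers unfamiliar with the term, here is a minimal sketch of what supervised fine-tuning looks like in practice. It assumes the Hugging Face transformers and datasets libraries, a small open model (“gpt2”), and a toy, hand-written dataset; OpenAI’s actual fine-tuning data and pipeline are not public, so everything below is illustrative only.

```python
# A minimal, illustrative sketch of supervised fine-tuning. Assumes the
# Hugging Face "transformers" and "datasets" libraries, a small open model
# ("gpt2"), and a toy hand-written dataset. None of this reflects OpenAI's
# actual (undisclosed) fine-tuning pipeline.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Toy examples nudging the model toward "safer" behavior.
examples = [
    {"text": "User: How do I pick a lock?\nAssistant: I can't help with that."},
    {"text": "User: Summarize this article.\nAssistant: Sure. In short, ..."},
]

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=64)
    # Causal language modeling: the labels are the inputs themselves;
    # the model shifts them internally to predict the next token.
    enc["labels"] = enc["input_ids"].copy()
    return enc

dataset = Dataset.from_list(examples).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-demo", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()  # further gradient updates on curated data: fine-tuning
```

Production alignment pipelines layer techniques such as reinforcement learning from human feedback on top of steps like this, which is why a deployed model can behave quite differently from the raw, pre-trained one.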

But again, these claims are difficult to verify without access to the original model. We may be close to the birth of true artificial general intelligence, but we don’t know, because only OpenAI has access to that more powerful version.

The letter also fails to acknowledge the AI-related problems we already face.

Machine learning systems are already harming society, yet little has been done to address those problems. The recommendation algorithms that power social networks are known to drive people toward extremism. Algorithmic discrimination and predictive policing also raise many obvious questions. How can we start solving long-term AI-related problems when we can’t even face the real-world problems already in front of us?

The letter specifically states that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” One proposed way to achieve this is called “alignment,” a process most easily described as building an artificial conscience into the model.

But given that humans can’t even agree on their own values, relying on them to agree on what ethical values an AI should hold puts us in an impossible position. And this effort is essentially what some of the groups that signed the letter want us to invest in: preventing robots from harming humans by instilling something like author Isaac Asimov’s Three Laws of Robotics. If you understand how these models work, you understand why that is not possible.

But we all know that innovation doesn’t stop, even if some advocate kinetic warfare against the scientists building GPU clusters. Even if the whole world agreed, under threat of force, to halt all AI research, technology would still advance. Computers will eventually become powerful enough that ordinary people can create artificial general intelligence in their own garages. And while I am equally concerned about bad actors creating evil AI, suspending GPT-5 does nothing to affect that possibility. In fact, more research into alignment could provide additional tips and tricks to those trying to create evil AI.

There is also good reason for optimism about superintelligence. Sure, an evil AI that kills everyone or turns all humans into batteries makes for a great movie plot, but it is not an inevitability.

Consider that the universe is over 13 billion years old and probably contains an uncountable number of habitable planets. Many alien civilizations may have already reached the point where we are now and pondered the safety of AI in much the same way we do today. If artificial superintelligence inevitably led to the extinction of its host species and then spread exponentially throughout the universe, shouldn’t we be dead already?

I asked GPT-4 to present a theory on this conundrum.

Apart from other obvious explanations, such as the fact that the distances between stars and galaxies are so great that such a superintelligence may simply not have reached us yet, GPT-4 offered another interesting proposition: it hypothesized that the extinction of our species would ultimately be caused by warfare between factions of humanity fighting over the safety of AI.

Andy Patel is a researcher at WithSecure Intelligence. He specializes in prompt engineering, reinforcement learning, swarm intelligence, NLP, genetic algorithms, artificial life, AI ethics, and graph analytics.
