Top AI Companies Get Serious About AI Safety Amid Growing Concern About Risks of ‘Very Bad’ AI

AI For Business

Hello, and welcome to this special May edition of Eye on AI.


The idea that increasingly sophisticated and versatile artificial intelligence software could pose extreme risks, up to and including the extinction of the entire human race, is contested. Many AI experts believe such risks are far-fetched, or so remote as to be negligible. Some of these same people see the emphasis on existential risk by prominent technologists, including many who are themselves building advanced AI systems, as hype about the capabilities of current AI, and as a cynical ploy intended to divert the attention of regulators and the public from the real, tangible harms that already exist in today's AI software.

And let me be clear: those real-world harms are many and serious. They include reinforcing and amplifying existing institutional and social biases, including racism and sexism; an AI development cycle that often relies on data obtained without consent or in violation of copyright; the use of underpaid contract workers in developing countries to label data; and a fundamental lack of transparency about how AI software is created and what its strengths and weaknesses are. Other risks include the large carbon footprint of many of today's generative AI models, and the tendency of companies to use automation as a way to cut jobs and suppress wages.

That said, concern over existential risk is becoming more than a fringe position. A 2022 survey of researchers working on the cutting edge of AI at some of the most prominent labs found that nearly half now believe there is at least a 10% chance that AI's impact will be "very bad," up to and including human extinction. (It's worth noting that a quarter of those researchers still put the probability at zero.) Deep learning pioneer Geoffrey Hinton recently resigned from his position at Google so that he could speak more freely about these issues. He says that models such as GPT-4 and PaLM 2 have changed his thinking about the dangers of increasingly powerful AI, and that a dangerous superintelligence could be invented at any point in the next 20 years.

There are also signs that a grassroots movement is growing around concerns about the existential risks of AI. A group of students picketed OpenAI CEO Sam Altman's talk at University College London earlier this week. They urged OpenAI to abandon its pursuit of artificial general intelligence (the kind of general-purpose AI that can perform any cognitive task as well as a human) until scientists figure out how to ensure the safety of such systems. The protesters pointed out that Altman himself has warned that downside risks from AGI could mean "lights out for all of us," yet he continues to pursue ever more advanced AI, something they found particularly galling. Google DeepMind has seen similar protests this past week.

I don't know who is right. But if there is a non-zero chance of human extinction, or other severely negative consequences, from advanced AI, it seems worth at least a few smart people thinking about how to prevent it. So it is interesting to see some of the top AI labs begin collaborating on frameworks and protocols for AI safety. Yesterday, a group of researchers from Google DeepMind, OpenAI, Anthropic, and several nonprofit think tanks and organizations interested in AI safety published a paper detailing one possible framework and testing regime. The document matters because the ideas it contains could form the basis for industry-wide efforts and guide regulators, especially if national or international institutions are established specifically to govern foundation models, the multi-purpose AI systems underpinning the generative AI boom. OpenAI's Altman, like other AI experts, has called for such a body, and Microsoft also pushed hard this week to endorse the idea.

"If you're going to have some kind of safety standard governing 'Is it safe to deploy this AI system? What can it do? What can't it do?' …"

In the paper, the researchers call for testing not only by the companies and research institutes developing advanced AI, but also by outside, independent auditors and risk assessors. "There are many advantages to having an external party perform the assessment in addition to internal staff," Shevlane says, citing the added scrutiny of model developers' accountability and safety claims. The researchers suggest that internal safety processes may be sufficient to manage the training of powerful AI models, but that regulators, other labs, and the wider scientific community should be informed of the results of these internal risk assessments. External experts and auditors should then take on the role of evaluating and testing a model's safety before it is released to the public, with those results likewise shared with regulators, other labs, and the broader scientific community. Finally, once a model is deployed, it must be continuously monitored, with systems for flagging and reporting events of concern, similar to the systems currently used to detect "adverse events" in approved drugs.

The researchers identified nine AI capabilities that could pose significant risks and against which models should be evaluated. Some of these, such as the ability to carry out cyberattacks, to deceive people with false information, or to make people think they are interacting with a human rather than a machine, are already within reach of existing large language models. Today's models also have nascent capabilities in other areas the researchers flagged as concerning, such as the ability to persuade and manipulate people into taking specific actions, and the ability to engage in long-term planning, including setting sub-goals. Other dangerous capabilities the researchers highlighted include the ability to plan and execute political strategies, to gain access to weapons, and to build other AI systems. Finally, they warned that AI systems may develop situational awareness (including the ability to recognize when they are being tested and deceive their evaluators) and the ability to self-perpetuate and self-replicate.
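Purely as an illustration of how such an evaluation regime might be operationalized, here is a minimal sketch in Python. The capability categories paraphrase those described above; the scoring scheme, threshold, and function names are invented for this sketch and are not from the paper itself.

```python
# Illustrative sketch only: capability names paraphrase the article's list;
# the numeric scoring scheme and threshold are hypothetical.

DANGEROUS_CAPABILITIES = [
    "cyberattacks",
    "deception",
    "persuasion_and_manipulation",
    "long_horizon_planning",
    "political_strategy",
    "weapons_acquisition",
    "building_other_ai_systems",
    "situational_awareness",
    "self_proliferation",
]

def assess_model(evaluation_scores: dict, threshold: float = 0.5) -> list:
    """Return the capability areas whose evaluation score exceeds the threshold.

    evaluation_scores maps a capability name to a score in [0, 1] produced by
    some (hypothetical) battery of evaluations run before release.
    """
    return [
        cap for cap in DANGEROUS_CAPABILITIES
        if evaluation_scores.get(cap, 0.0) > threshold
    ]

# Example: a model showing strong persuasion ability would be flagged for
# further review before deployment.
scores = {cap: 0.1 for cap in DANGEROUS_CAPABILITIES}
scores["persuasion_and_manipulation"] = 0.7
print(assess_model(scores))  # ['persuasion_and_manipulation']
```

The point of the sketch is structural: evaluation happens per capability area, against an explicit list, with results that can be shared with regulators and external auditors rather than judged ad hoc.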

The researchers say careful security measures should be taken when training and testing powerful AI systems. That includes training and testing models in isolated environments, where a model's ability to interact with a wider computer network, or to access other software tools, can be carefully monitored and controlled, or removed entirely. The paper also says labs need to develop ways to quickly cut off a model's access to networks and shut the model down if it starts exhibiting worrisome behavior.
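Again purely as an illustration, the "monitor, sever access, shut down" sequence described above might look something like the following sketch. Every class, pattern, and function name here is hypothetical; a real system would hook into actual sandboxing and serving infrastructure rather than a toy object.

```python
# Illustrative sketch only: all names and patterns are invented for this
# example and do not come from the paper.
import logging

WORRISOME_PATTERNS = (
    "network_scan",
    "self_replication_attempt",
    "tool_access_violation",
)

class SandboxedModel:
    """Stand-in for a model running in an isolated training/test environment."""
    def __init__(self):
        self.network_enabled = False   # isolated by default
        self.running = True

    def shutdown(self):
        self.running = False

def monitor_event(model: SandboxedModel, event: str) -> bool:
    """Flag worrisome behavior; cut off network access and halt the model.

    Returns True if the event triggered a shutdown.
    """
    if any(pattern in event for pattern in WORRISOME_PATTERNS):
        logging.warning("Worrisome behavior detected: %s", event)
        model.network_enabled = False  # sever network access first
        model.shutdown()               # then halt the model entirely
        return True
    return False

model = SandboxedModel()
monitor_event(model, "benign_completion")       # no action taken
monitor_event(model, "self_replication_attempt")  # triggers shutdown
print(model.running)  # False
```

The design choice worth noting is the ordering: access is severed before the model is halted, so that even a shutdown that fails or stalls leaves the model isolated.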

In many ways, the specific details of the paper matter less than what its very existence says about communication and coordination among cutting-edge AI labs on common standards for the responsible development of the technology. Competitive pressure is making it increasingly difficult to share information about the models these tech companies release. (OpenAI famously declined to release even basic information about GPT-4, mainly for competitive reasons, and Google too has said it will be less open in the future about exactly how it builds state-of-the-art AI models.) In this environment, it is good to see tech companies still banding together to develop common standards for AI safety. It remains to be seen how easily this coordination can continue without a government-led process. Existing law could make it even harder: in a white paper released earlier this week, Google's president of global affairs, Kent Walker, called for a provision that would give tech companies a safe harbor to discuss AI safety without violating antitrust law. That is probably a smart move.

Of course, the wisest course might be to follow the protesters' advice and abandon efforts to develop ever more powerful AI systems until companies actually understand enough about how to control them to ensure they can be developed safely. But that does not appear likely to happen. And having a shared framework for thinking about extreme risks, along with some standard safety protocols, is better than continuing to race headlong into the future without them.

Now for some of the AI news from last week.

Jeremy Kahn

This article originally appeared on


