Tech leaders in crisis are calling for a pause in AI research.chemistry

AI For Business


An open letter calling for a moratorium on the development of advanced artificial intelligence (AI) systems has divided opinion among researchers. The letter, which is gathering signatures from the likes of Tesla CEO Elon Musk and his Apple co-founder Steve Wozniak, was released earlier this week, calling on AI companies and regulators to help protect society from potential risks. A six-month moratorium has been advocated to give time to formulate safeguards. technology.

AI has come full circle since last year’s launch of the image generator DALL-E 2 from the Microsoft-backed company OpenAI. The company has since released two text-generating chatbots, his ChatGPT and GPT-4, to rave reviews. The ability and speed of adoption of these so-called “generative” models to mimic human output (ChatGPT reported that by January he had reached over 100 million users, and major tech companies competing to incorporate generative AI into their products) has attracted a lot of people. Let your guard down.

“I think many people’s intuitions about the impact of technology are not well suited to the pace and scale of technology. [these] AI models,” said letter signatory Michael Osborne, a machine learning researcher and co-founder of AI company Mind Foundry. He worries about the social impact of new tools, such as their potential to put people out of work and spread misinformation. “I think the six-month moratorium will give regulators enough time to catch up with the rapid pace of progress,” he says.

A letter released by a nonprofit group called the Future of Life Initiative accuses some researchers of invoking far-flung speculative harm. It asks: Should we risk losing control of our civilization? Sandra Wachter, a technical regulation expert at the University of Oxford, says there are a number of known harms that must be addressed today. Wachter, who did not sign the letter, said Wachter needs to focus on how AI systems can become disinformation engines and persuade people with false and potentially libelous information. increase. How they perpetuate systemic biases in the information they surface to people. And how we rely on the invisible labor of laborers who often struggle under dire conditions to label the data and train the system.

Privacy is also an emerging concern, as critics fear the system will be prompted to accurately reproduce personally identifiable information from the training set. Italy’s data protection authority banned ChatGPT on March 31, due to concerns that the Italian’s personal data was being used to train his models for OpenAI. (In an OpenAI blog post, “We removed personal information from our viable training datasets, fine-tuned our model to deny requests for personal information from individuals, and removed personal information from our system in response to requests from individuals. We are working to remove your personal information.”)

Some techs warn of more serious security threats. Florian Tramèr, a computer scientist at ETH Zurich, said his ChatGPT-based digital assistant, who can work with the web and read and write emails, could open up new opportunities for hackers. said. Hackers are already using a technique called “prompt injection” to trick AI models into saying things they shouldn’t say, such as offering advice on how to perform illegal actions. Some methods require the tool to roleplay as an evil best friend or to act as a translator between different languages. This can confuse the model and encourage it to ignore safety limits.

Tramèr worries that the practice could evolve into a way for hackers to trick digital assistants through “indirect prompt injection.” For example, sending someone a calendar invitation that instructs the assistant to export the recipient’s data and send it to the hacker. “These models will be abused left and right to leak people’s personal information or destroy their data,” he says. AI companies need to start warning users about security and privacy risks and do more to address them, he said.

OpenAI seems to be becoming more vigilant about security risks. OpenAI President and Co-Founder Greg Brockman murmured Last month, the company said it was “considering launching a bounty program” for hackers who pointed out weaknesses in its AI system, acknowledging that “the stakes will rise significantly over time.” I was.

But many of the problems inherent in today’s AI models don’t have easy solutions. One thorny issue is how to make AI-generated content identifiable. Some researchers are working on “watermarks” that create imperceptible digital signatures on the output of AI. Others are trying to devise ways to detect patterns that only AI generates. However, recent research shows that tools that subtly rewrite AI-generated text can greatly undermine both approaches. The authors say that as AI becomes more human-sounding, its output will only become harder to detect.

Other elusive safeguards include those that prevent systems from producing violent or pornographic images. According to Tramèr, most researchers just apply post-filters and teach the AI ​​to avoid “bad” outputs. He believes these issues should be fixed at the data level, before training. “Training these generative models he has to curate the set and find a better way to remove sensitive data entirely,” he says.

The suspension itself is unlikely. OpenAI CEO Sam Altman said he did not sign the letter. wall street journal The company has always taken safety seriously and regularly collaborates with industry on safety standards. Microsoft co-founder Bill Gates told Reuters the proposed moratorium would not “solve problems” going forward.

Osborne believes the government needs to intervene. The Biden administration has proposed an AI “bill of rights” designed to help companies develop secure AI systems that protect the rights of U.S. citizens, but the principles are voluntary and binding. there is no. The European Union’s AI law, due to come into force this year, will apply different levels of regulation depending on the level of risk. For example, police systems that aim to predict individual crimes are considered unacceptably dangerous and are banned.

Wachter says the six-month moratorium is arbitrary and wary of banning research. Instead, “we need to think about responsible research and incorporate that kind of thinking very early on,” she says. As part of that, she says companies should invite independent experts to hack and stress-test systems before they are deployed.

She says the people behind the letter are deeply immersed in the world of technology and believe they have a narrow view of the potential risks. “You need to talk to lawyers, people who specialize in ethics, people who understand economics and politics,” she says. “Most importantly, these questions are not for engineers alone.”





Source link

Leave a Reply

Your email address will not be published. Required fields are marked *