‘AI Apocalypse’ Is Just PR

On Tuesday morning, artificial-intelligence vendors once again warned about the power of their own products. Hundreds of AI executives, researchers, and other prominent figures in technology and business, including OpenAI CEO Sam Altman and Bill Gates, signed a one-sentence statement written by the Center for AI Safety declaring that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The 22 words arrive on the heels of a weeks-long tour in which executives from OpenAI, Microsoft, Google, and other tech companies have called for limited regulation of AI. They have spoken before Congress, in the European Union, and elsewhere about the need for industry and governments to collaborate to curb their products’ harms, even as their companies continue to pour billions of dollars into the technology. Several prominent AI researchers and critics told me that they are skeptical of this rhetoric, and that Big Tech’s proposed regulations appear defanged and self-serving.

Silicon Valley has shown little regard for years of research demonstrating that AI’s harms are material, not speculative; only now, after the launch of OpenAI’s ChatGPT and a cascade of funding, does the industry seem interested in appearing focused on safety. “This seems like really sophisticated PR from a company that is going full speed ahead with building the very technology that its own team is flagging as a risk to humanity,” Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, a nonprofit that advocates against mass surveillance, told me.

The unstated assumption underlying the “extinction” fear is that AI is destined to become terrifyingly capable, turning these companies’ work into a kind of eschatology. “It positions the product as something enormously powerful, so powerful it could eliminate humanity,” Emily Bender, a computational linguist at the University of Washington, told me. That assumption provides a tacit advertisement: like demigods, the CEOs are wielding a technology as transformative as fire, electricity, nuclear fission, or a pandemic-inducing virus. You’d be a fool not to invest. It is also a stance that aims to inoculate the companies against criticism, mimicking the crisis communications of tobacco companies, oil magnates, and Facebook before them: Hey, don’t get mad at us; we begged them to regulate our product.

But the supposed AI apocalypse remains science fiction. “A fantastical, adrenalizing ghost story is being used to hijack attention around the question of what problems regulation actually needs to solve,” Meredith Whittaker, a co-founder of the AI Now Institute and the president of Signal, told me. Programs such as GPT-4 have improved on their predecessors, but only incrementally. AI may well transform important aspects of everyday life, perhaps advancing medicine, perhaps already replacing jobs, but there is no reason to believe that anything on offer from Microsoft, Google, and the rest will bring about the end of civilization. “It’s just more data and more parameters; what’s not happening is a fundamental shift in how these systems work,” Whittaker said.

Two weeks before signing the AI-extinction warning, Altman, who has compared his company to the Manhattan Project and himself to Robert Oppenheimer, delivered to Congress a toned-down version of the extinction statement’s prophecy: the kind of AI his company builds could improve rapidly and become potentially dangerous. “Regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said in testimony before a Senate committee. Both Altman and the senators treated that increasing power as inevitable, and the associated risks as “potential downsides” not yet realized.

But many of the experts I spoke with were skeptical of how far AI will advance beyond its current abilities, and they were emphatic that it need not advance at all to hurt people; indeed, many applications already do. The divide, then, is not over whether AI is harmful but over which harm is most concerning (a future AI cataclysm that only its architects are warning about and claim they can uniquely avert, or the more quotidian violence that governments, researchers, and the public have long been living with and fighting against), as well as who is at risk and how best to prevent that harm.

Consider, for example, the reality that many existing AI products are discriminatory: racist and sexist facial recognition, biased medical diagnoses, and sexist hiring algorithms are among the best-known examples. Cahn argues that AI should be assumed biased until proven otherwise. Advanced models are also regularly accused of copyright infringement with respect to their data sets and of labor violations with respect to their production. Synthetic media is filling the internet with financial scams and nonconsensual pornography. “Sci-fi narratives” about AI, such as the one the extinction statement espouses, “distract us away from the tractable areas that we could start working on today,” Deborah Raji, a Mozilla fellow who studies algorithmic bias, told me. And whereas the damage done by today’s algorithms falls mostly on marginalized communities, and is therefore easier to ignore, a supposed collapse of civilization would hurt the privileged too. “When Sam Altman says something, even though it’s so far removed from the real way in which these harms actually play out, people listen,” Raji said.

Even when people do listen, the words can ring hollow. Only days after his Senate testimony, Altman told reporters in London that his company might “cease operating” in Europe if the EU’s new AI regulations proved too stringent. The apparent about-face drew backlash, and Altman then tweeted that OpenAI has “no plans to leave” Europe. “It sounds like some of the really wise regulation might threaten the business model,” the University of Washington’s Bender said. In an emailed response to a request for comment on Altman’s remarks and the company’s stance on regulation, an OpenAI spokesperson told me, “Achieving our mission requires that we work to mitigate both current and long-term risks,” adding that the company is “collaborating with policymakers, researchers and users” to do so.

Courting regulation in this way is an established part of the Silicon Valley playbook. In 2018, after Facebook was rocked by misinformation and privacy scandals, Mark Zuckerberg told Congress that his company has a responsibility to “not just build tools” but “to make sure that they’re used for good,” and that he would welcome “the right regulation.” Meta’s platforms have since failed miserably to limit election and pandemic misinformation. In the spring of 2022, Sam Bankman-Fried told Congress that the federal government needs to establish “clear and consistent regulatory guidelines” for cryptocurrencies; by the end of that year, his own crypto company had been revealed as a sham, and he was arrested for financial fraud on the scale of the Enron scandal. “We are seeing a really savvy attempt to avoid getting lumped in with tech platforms like Facebook and Twitter, which have drawn increasingly scathing scrutiny from regulators for the harms they inflict,” Cahn told me.

At least some of the extinction statement’s signatories do seem to earnestly believe that superintelligent machines could end humanity. Yoshua Bengio, who signed the statement and is sometimes called a “godfather” of AI, told me he believes the technologies have become so capable that they risk triggering a world-ending catastrophe, whether as rogue sentient entities or in the hands of humans. “If it’s an existential risk, we may have one chance, and that’s it,” he said.

Dan Hendrycks, the director of the Center for AI Safety, told me he thinks similarly about these risks. He added that the public needs to end the current “AI arms race between these corporations, where they’re basically prioritizing the development of AI technologies over their safety.” That leaders from Google, Microsoft, OpenAI, DeepMind, Anthropic, and Stability AI signed his center’s warning, Hendrycks said, could be a sign of genuine concern. Altman wrote about this threat even before OpenAI’s founding. Still, “even under that charitable interpretation,” Bender told me, “you have to wonder: If you think this is so dangerous, why are you still building it?”

The solutions these companies have proposed for both the empirical and the fantastical harms of their products are vague and filled with platitudes, diverging from the established body of research on what regulating AI would actually require, the experts told me. In his testimony, Altman emphasized the need to create a new government agency focused on AI; Microsoft has done the same. “This is warmed-up leftovers,” Signal’s Whittaker said. “We were having this conversation in 2015, and the topic then was also ‘Do we need a new agency?’ It’s an old ship.” And a new agency, or any prospective policy initiative, “is a really long-term objective that would take decades to get closer to realization,” Raji said. In that time, AI could not only harm countless people but also become so deeply entrenched in a wide variety of companies and institutions that meaningful regulation grows far more difficult.

For nearly a decade, experts have rigorously studied the damage done by AI and proposed more realistic ways to prevent it. Possible interventions could include public documentation of training data and model design; clear mechanisms for holding companies accountable when their products put out medical misinformation, libel, or other harmful content; antitrust legislation; or simply enforcing existing laws related to civil rights, intellectual property, and consumer protection. “If a store systematically targets Black customers through human decision making, that’s a civil-rights violation,” Cahn said. “And to me, it’s no different when an algorithm does it.” The same should hold when chatbots are trained on writers’ copyrighted texts, or when algorithms are used to scam people out of money: existing law should apply.

Whittaker said the doomsday predictions and the calls for a new AI agency amount to “an attempt at regulatory sabotage,” because the very people selling and profiting from this technology would “shape, hollow out, and effectively sabotage” the agency and its mandate. Just look at Altman’s congressional testimony and the recent “responsible” AI meeting between various CEOs and President Joe Biden: the people who develop and profit from the software are the ones telling governments how to approach it, an early glimpse of regulatory capture. “The kinds of regulation I see [AI companies] talking about are ones favorable to their interests,” Safiya Noble, an internet-studies scholar at the University of California, Los Angeles, and the author of Algorithms of Oppression, told me. These companies also spent millions of dollars lobbying Congress in just the first three months of this year.

If anything has changed significantly in the years-long conversation around regulating AI, it is ChatGPT. The program’s humanlike language has captivated consumers and investors alike, giving Silicon Valley a Promethean aura. But the underlying substance of AI’s harms remains much the same: the technology depends on surveillance and data collection, exploits creative and manual labor, amplifies bias, and is not sentient. The ideas and tools that regulation would need to address these problems, possibly cutting into corporate profits along the way, are available to anyone interested. The 22-word warning is a tweet, not scripture; it is a matter of faith, not evidence. That algorithms are harming people today would have been true had you read this sentence 10 years ago, and it remains true now.


