Rival factions in artificial intelligence, from Elon Musk to OpenAI



A quick guide to deciphering Silicon Valley’s weird but powerful AI subculture

(Illustration by Elena Lacey/Washington Post; photos by Washington Post, Getty Images, Twitter)

Within Silicon Valley’s AI sector, a bitter divide is growing over the impact of a new wave of artificial intelligence. Some argue that pressing ahead with development is essential, while others warn that the technology poses an existential risk.

Late last month, Elon Musk, along with other tech executives and researchers, signed an open letter calling for a six-month moratorium on the development of AI that “competes with humans,” citing “serious risks to society and humanity.” Those tensions took center stage when the letter appeared. Self-described decision theorist Eliezer Yudkowsky, co-founder of the nonprofit Machine Intelligence Research Institute (MIRI), went even further: in a Time magazine op-ed, he argued that AI development needs to stop worldwide, calling for American airstrikes on foreign data centers if necessary.

The policy world didn’t seem to know how seriously to take these warnings. Asked Tuesday whether AI is dangerous, President Biden said, “We still have to wait and see. It could be.”

The bleak vision is familiar inside Silicon Valley’s insular AI sector, where a handful of strange but influential subcultures have been colliding in recent months. One faction believes AI could kill us all. Others have suggested that the six-month pause proposed by Musk, who is reportedly starting his own AI lab, was designed to help him catch up.

The subgroups are fairly fluid, even when they appear contradictory, and insiders disagree on basic definitions.

But these once-parochial worldviews could shape a pivotal debate about AI. Below is a quick guide to deciphering the ideology (and the financial incentives) behind the factions.

The argument: The term “AI safety” used to refer to practical problems, such as keeping self-driving cars from crashing. In recent years the term, sometimes used interchangeably with “AI alignment,” has come to describe a newer field of research focused on getting AI systems to follow their programmers’ intentions and preventing power-seeking AI that could harm humans, for instance by refusing to be turned off.

Many adherents are associated with communities such as effective altruism, a philosophical movement focused on doing the maximum amount of good in the world. EA, as it is known, began by prioritizing causes such as global poverty but has increasingly turned to concerns about the risks posed by advanced AI. Online forums such as LessWrong.com and the AI Alignment Forum host heated discussions about these issues.

Some proponents also subscribe to a philosophy called longtermism, which looks at maximizing good outcomes over millions of years. They cite a thought experiment from Nick Bostrom’s book “Superintelligence,” which imagines that safe superhuman AI could allow humanity to colonize the stars and create trillions of future people. Building safe artificial intelligence is therefore essential to ultimately saving those lives.

Who is behind it: In recent years, EA-aligned donors such as Open Philanthropy, a foundation started by Facebook co-founder Dustin Moskovitz and former hedge funder Holden Karnofsky, have helped seed a number of centers, labs and community-building efforts focused on AI safety and AI alignment. The FTX Future Fund, started by cryptocurrency executive Sam Bankman-Fried, was another major player until the company filed for bankruptcy and Bankman-Fried and other executives were indicted on fraud charges.

How much influence do they have?: Adherents work at top AI labs such as OpenAI, DeepMind and Anthropic, and this worldview has led to some useful methods for making AI safer for users. The tight network of organizations also produces research and surveys that get shared widely, including a 2022 study that found 10 percent of surveyed machine learning researchers said AI could end humanity.

AI Impacts, which conducted that survey, has been supported by four different EA-affiliated organizations, including the Future of Life Institute, which hosted Musk’s open letter and counts Musk as its largest donor. Center for Humane Technology co-founder Tristan Harris, who once campaigned against the dangers of social media and now focuses on AI, cites the research prominently.

The argument: It’s not that this group doesn’t care about safety, but its members are most excited about building software that reaches artificial general intelligence (AGI), the term for AI that is as smart and as capable as a human. Some point to tools like GPT-4, which, according to OpenAI, developed the ability to write and respond in foreign languages without being directed to, as a sign that the field is on the road to AGI. Experts explain that GPT-4 developed these capabilities by ingesting massive amounts of data, and most say these tools don’t understand the meaning behind text the way humans do.

Who is behind it?: Two leading AI labs list building AGI in their mission statements: OpenAI, founded in 2015, and DeepMind, a research lab founded in 2010 and acquired by Google in 2014. Wealthy tech investors interested in the outer limits of AI have helped the idea take hold. According to Cade Metz’s book “Genius Makers,” Peter Thiel donated $1.6 million to Yudkowsky’s AI nonprofit, and Yudkowsky introduced Thiel to DeepMind. Musk invested in DeepMind and introduced the company to Google co-founder Larry Page. Musk later brought the concept of AGI to his OpenAI co-founders, including CEO Sam Altman.

How much influence do they have?: OpenAI’s dominance in the market has thrown the Overton window wide open. Leaders of the world’s most valuable companies, including Microsoft CEO Satya Nadella and Google CEO Sundar Pichai, have been asked about and have discussed AGI in interviews, and Bill Gates blogs about it. “The benefits of AGI are so great that we believe it is neither possible nor desirable for society to permanently halt its development,” Altman wrote in February.

The argument: The doomers share many beliefs with the AI safety world and frequently participate in the same online forums, but this crowd has concluded that if a sufficiently powerful AI is switched on, it will wipe out human life.

Who is behind it?: Yudkowsky has been the leading voice warning about this apocalyptic scenario. He is also the author of the popular fan-fiction series “Harry Potter and the Methods of Rationality,” a gateway for many young people into these online communities and into his ideas about AI.

His nonprofit, MIRI, received an early $1.6 million donation from tech investor Thiel. The EA-affiliated Open Philanthropy donated approximately $14.8 million across five grants from 2016 to 2020. More recently, MIRI has received funding from crypto’s newly wealthy, including Ethereum co-founder Vitalik Buterin.

How much influence do they have?: Some in this world consider Yudkowsky’s theories prescient, but his work has been criticized as not applicable to modern machine learning. Still, his views on AI have influenced more prominent voices on these topics, including the noted computer scientist Stuart Russell, who signed the open letter.

Over the past few months, Altman and others have raised Yudkowsky’s profile. Altman recently tweeted that Yudkowsky might at some point deserve a Nobel Peace Prize for accelerating AGI, and he later tweeted a photo of the two of them at a party hosted by OpenAI.

The argument: For years, ethicists have warned about problems with larger AI models, including outputs that are racially and gender biased, an explosion of synthetic media that could wreak havoc on information ecosystems, and the impact of AI that sounds convincingly human. Many argue that the apocalypse narrative overstates AI’s capabilities and helps companies market the technology as part of a sci-fi fantasy.

Some in this camp argue that the technology is not inevitable and could be built without harming vulnerable communities. Critiques that focus only on technical capabilities can ignore the decisions people make, letting companies avoid accountability when their models dispense bad medical advice or violate privacy.

Who is behind it?: The co-authors of a prescient research paper warning about the harms of large language models, including Timnit Gebru, former co-lead of Google’s Ethical AI team, are often cited as leading voices. Crucial research demonstrating these kinds of AI failures and how to mitigate them “is often done by academics of color, many of whom are Black women,” and by underfunded junior researchers, researchers Abeba Birhane and Deborah Raji wrote in a Wired op-ed in December.

How much influence do they have?: In the midst of the AI boom, tech companies including Microsoft, Twitch and Twitter have laid off their AI ethics teams. But policymakers and the public have been listening.

Suresh Venkatasubramanian, a former White House policy adviser who helped draft the Blueprint for an AI Bill of Rights, told VentureBeat that recent exaggerated claims about ChatGPT’s capabilities are part of a “coordinated fear-mongering campaign” around generative AI, one that distracts from stalled work on real AI problems. Gebru has spoken to the European Parliament about the need to slow the pace of AI development and ensure that public safety comes first.




