AI Doomers Are a ‘Cult’: Here’s the Real Threat, Says Marc Andreessen



  • On Tuesday, venture capitalist Marc Andreessen released a nearly 7,000-word essay outlining his views on artificial intelligence and the risks it poses.
  • Andreessen emphasizes that although AI’s ability to mimic human language can trick people into believing otherwise, AI is not sentient.
  • “AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive,” he wrote.

Marc Andreessen, co-founder of Andreessen Horowitz

Justin Sullivan | Getty Images

Venture capitalist Marc Andreessen is known for saying, “Software is eating the world.” When it comes to artificial intelligence, he argues, people should stop worrying and just build.

Andreessen released the nearly 7,000-word essay on Tuesday, outlining his views on AI, the risks it poses, and the regulation he believes it will need. In an attempt to counter the recent wave of AI doomsaying, he offers what reads as an overly idealistic view of the technology’s impact.

Andreessen begins with a straightforward definition of AI, or machine learning, which he describes as “the application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in a manner similar to how humans do.”

Although AI can mimic human language well enough to trick some people into believing otherwise, he said, it is not sentient; it has simply been trained on human language to find high-level patterns in that data.

“AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive,” he wrote. “And AI is a machine; it is not going to come alive any more than your toaster will.”

Andreessen writes that there is currently a “wall of fear-mongering and doomerism” in the world of AI. He doesn’t name names, but he is likely referring to claims from prominent technology leaders that the technology poses an existential threat to humanity. Last week, Microsoft co-founder Bill Gates, OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis and others signed a statement from the Center for AI Safety on the “risk of extinction from AI.”

Technology company CEOs are motivated to promote such doomsday claims, Andreessen wrote, because they stand to end up in a far more lucrative position if government-blessed cartels of AI vendors form behind regulatory barriers that shield them from start-up and open-source competition.

Many AI researchers and ethicists have criticized the doomsday framing as well. One common argument is that too much focus on AI’s growing power and its hypothetical future threats distracts from the real harm some algorithms already cause to marginalized communities today.

But the similarities between Andreessen and those researchers largely end there. Andreessen writes that people with titles such as AI safety expert, AI ethicist, and AI risk researcher are “paid to be doomers, and their statements should be processed appropriately.” In fact, many leaders in the AI research, ethics, and trust and safety communities have voiced clear opposition to doomsday rhetoric, focusing instead on mitigating the technology’s documented present-day risks.

Rather than acknowledging AI’s documented real-world risks, such as the biases that can affect facial recognition systems, bail decisions, criminal justice proceedings, and mortgage approval algorithms, Andreessen argues that AI could be “a way to make everything we care about better.”

He argues that AI has great potential for productivity, scientific progress, creative arts, and reducing wartime mortality.

“Anything that people do today with their natural intelligence, they can do much better with AI,” he wrote. “And we will be able to tackle new challenges that would have been impossible without AI, from curing any disease to enabling interstellar travel.”

AI has made great strides in many areas, from vaccine development to chatbot services, but the technology’s documented harms have led many experts to conclude that it should not be used at all in certain applications.

Andreessen describes these fears as an irrational “moral panic.” He also promotes a return to the tech industry’s old “move fast and break things” ethos, encouraging both AI giants and start-ups to “build AI as quickly and aggressively as possible” and arguing that the technology will “accelerate tremendously” from here if it is simply left alone.

Andreessen, who rose to fame in the 1990s for developing the first widely popular web browser, founded the venture capital firm Andreessen Horowitz with Ben Horowitz in 2009. Two years later, he wrote the oft-cited essay “Why Software Is Eating the World,” arguing that healthcare and education, like many industries before them, needed a “software-based transformation.”

The fear many people have about AI is precisely that it will eat the world. Andreessen goes beyond simply trying to allay those concerns: he encourages the controversial use of AI itself to protect people from AI-driven bias and harm.

“Governments working in partnership with the private sector should vigorously address each area of potential risk in order to leverage AI to maximize society’s defensive capabilities,” he wrote.

In Andreessen’s own idealistic future, “every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful.” He extends a similar vision to AI’s role as a partner and collaborator for scientists, teachers, CEOs, government leaders, and even military commanders.

Near the end of the post, Andreessen points out what he calls “the real risks of not pursuing AI with maximum power and speed.”

That risk, he writes, lies in China, which is rapidly developing AI and putting it to deeply concerning authoritarian uses. As documented cases over the years show, the Chinese government has turned to surveillance AI, using facial recognition and phone GPS data to track and identify protesters.

“We should introduce AI into our economy and society as quickly and forcefully as possible,” Andreessen wrote, to stop China from expanding its AI influence.

He also proposes an agenda of aggressive AI development by big tech companies and start-ups alike, one that draws on “the full force of the private sector, scientific institutions and governments.”

Andreessen writes with great certainty about where the world is heading, but his track record of predicting what will happen is mixed.

His firm launched a $2.2 billion crypto fund in mid-2021, just before that industry began to collapse. One of his big pandemic-era bets was an investment in the social audio start-up Clubhouse, which soared to a $4 billion valuation while people stuck at home looked for alternative entertainment. Clubhouse announced in April that it would lay off half of its workforce in order to “reset” the company.

Throughout his essay, Andreessen criticizes others for the ulterior motives behind their public statements on AI. But he has his own: he stands to make money from the AI revolution and is investing in start-ups with that goal in mind.

“I do not think they are reckless or villains. They are heroes, every one,” he concluded in the post. “My firm and I are thrilled to back as many of them as we can, and we will stand alongside them and their work 100%.”

WATCH: CNBC’s interview with Altimeter Capital’s Brad Gerstner


