Keeping superintelligent machines docile is the challenge of our time

The writer is the founder of Sifted, an FT-backed site about European startups

The British mathematician I.J. Good was one of the first to speculate on what would happen when computers surpassed humans. One day, he wrote, we will build ultra-intelligent machines that can themselves design even more intelligent machines, triggering an “intelligence explosion.” The first ultra-intelligent machine would thus be the last invention humans ever need make, provided the machine is docile enough to tell us how to keep it under control.

Good’s speculation seemed fantastical when he made it in 1964, but it seems rather less so today. Recent advances in artificial intelligence, highlighted by powerful generative AI models such as OpenAI’s GPT-4 and Google’s Bard, have captivated millions. Several conceptual breakthroughs may still be needed before Good’s ultra-intelligent machine can be built, the founder of one leading AI company tells me. But it is no longer “totally crazy” to believe that so-called artificial general intelligence could be achieved by 2030.

Companies developing AI technologies rightly highlight their potential to increase economic productivity, enhance human creativity, and open up exciting new avenues for scientific research. But they also acknowledge that generative AI models carry serious risks. “The downside is that at some point humanity loses control over the technology it is developing,” Google’s chief executive Sundar Pichai candidly told CBS News.

More than 27,000 people, including several leading AI researchers, have signed an open letter from the Future of Life Institute calling for a six-month moratorium on the development of cutting-edge models. Others have gone further, demanding that all research on AGI be stopped. Eliezer Yudkowsky, senior researcher at the Machine Intelligence Research Institute, claims that unless something changes, the most likely result of building “superhumanly smart AI” is that literally everyone on Earth will die. The advanced computer chips used for AI should be closely tracked, he wrote in Time, and air strikes against rogue data centres that defied a ban should be considered.

Such hyperventilating infuriates other researchers, who argue that AGI may remain a fantasy forever and that talk of it only obscures the technology’s current harms. For years, researchers such as Timnit Gebru, Margaret Mitchell, Angelina McMillan-Major, and Emily Bender have warned that powerful machine learning models risk further concentrating corporate power, exacerbating social inequality, and polluting the public information ecosystem. It is dangerous, they argue, to be distracted by “a fantasized AI-enabled utopia or apocalypse.” “Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them,” they wrote in response to the FLI letter.

One can only sympathize with policymakers trying to address these conflicting concerns. How should they prioritize their regulatory efforts? Simply put, they need to take both sets of concerns seriously while distinguishing between immediate and longer-term risks.

As John Tasioulas, director of the University of Oxford’s Institute for Ethics in AI, observes, the AI safety crowd and the AI ethics crowd are, as he puts it, “engaged in a civil war.” The public infighting certainly does not help. Yet he suggests they are mostly talking about different things, and that AI must be viewed in a much broader social and economic context, as the ethicists argue.

On the immediate agenda, every regulator should be considering how AI will affect its sector and how existing rules on human rights, privacy, data, and competition should be enforced and updated. On the longer-term challenges, more radical approaches deserve discussion.

In a recent article, the investor Ian Hogarth urged lawmakers to question AI lab leaders under oath about safety risks. He also called for the creation of a joint international institution to study AGI, modelled on the Cern particle physics laboratory, which would help neutralize the dangerous dynamics of a private-sector technology race.

These are smart ideas, even if it is hard to imagine such an international institution being created quickly enough. It would be madness to assume that profit-seeking private companies alone will protect society’s interests in the pursuit of AGI. Keeping machines docile enough to remain under human control, as Good hoped, will be the governance challenge of our times.
