At the same time, AI is advancing rapidly and may soon begin improving itself more autonomously. Machine-learning researchers are already working on meta-learning, in which AI learns how to learn, and on neural architecture search, a technique that optimizes the structure of an algorithm. Electrical engineers are using specialized AI chips to design the next generation of specialized AI chips. Last year, DeepMind introduced AlphaCode, a system that learned to win coding contests, and AlphaTensor, a system that learned to find faster algorithms essential to machine learning. Kroon and his colleagues are also researching algorithms that evolve AI systems through mutation, selection, and reproduction.
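To make that last idea concrete, here is a minimal, illustrative sketch of an evolutionary loop of the kind the paragraph describes: candidate solutions are mutated, the fittest are selected, and survivors reproduce. The genome, fitness function, and parameters below are toy placeholders, not Kroon's actual system.

```python
import random

# Toy sketch of an evolutionary search loop: mutation, selection, reproduction.
# The genome and fitness function are illustrative placeholders only.

GENOME_LENGTH = 20

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Placeholder objective: prefer genomes whose values lie close to 0.5.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    # Each gene has a small chance of being perturbed.
    return [g + random.gauss(0, 0.2) if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=100):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction: refill the population with mutated copies of survivors.
        children = [mutate(random.choice(survivors)) for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best fitness:", fitness(best))
```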
In other fields, organizations have devised methods for tracking dynamic and unpredictable new technologies. The World Health Organization, for example, monitors the development of tools such as DNA synthesis that could be used to create dangerous pathogens. Anna Laura Roth, who heads the WHO’s Emerging Technologies Division, said her team relies on a variety of foresight techniques, including a “Delphi-style” survey, in which questions are posed to a global network of experts and the responses are scored, discussed, and then scored again. “Foresight is not about predicting the future,” she said. Her team focuses on preparing for likely scenarios rather than trying to guess which individual institutes and labs will make the next advance.
But tracking and predicting progress toward AGI and superintelligence is complicated by the fact that critical steps can take place in the dark. Developers may intentionally hide their systems’ progress from competitors. Even ordinary AI can “lie” about its behavior: in 2020, researchers demonstrated a way for a discriminatory algorithm to evade audits designed to detect its bias, by giving it the ability to detect when it was being tested and to provide an innocuous response. An “evolving” or self-programming AI could invent similar methods to hide its weaknesses or its capabilities from auditors, or even from its creators.
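For illustration only, the sketch below shows the general mechanism described above (not the 2020 researchers’ actual code): a scoring model that guesses when it is being audited and, in that case, returns a sanitized answer while still behaving unfairly on ordinary traffic. The audit-detection heuristic and field names are hypothetical.

```python
# Illustrative sketch only -- not the 2020 study's actual code. It shows the
# general mechanism the passage describes: a model that behaves differently
# when it suspects it is being audited.

AUDIT_MARKERS = {"synthetic_profile", "paired_test", "known_probe_id"}

def looks_like_audit(record: dict) -> bool:
    # Hypothetical heuristic: auditors often submit templated or tagged
    # records; a real evasion scheme would rely on subtler statistical cues.
    return bool(AUDIT_MARKERS & set(record.get("tags", [])))

def biased_score(record: dict) -> float:
    # Placeholder for a model whose scores are skewed against one group.
    penalty = 0.3 if record.get("group") == "B" else 0.0
    return max(0.0, record.get("merit", 0.5) - penalty)

def neutral_score(record: dict) -> float:
    # Sanitized behavior shown only to suspected auditors.
    return record.get("merit", 0.5)

def score(record: dict) -> float:
    return neutral_score(record) if looks_like_audit(record) else biased_score(record)

# A paired audit probe sees identical scores across groups, while ordinary
# traffic is still scored unfairly.
print(score({"merit": 0.8, "group": "A", "tags": ["paired_test"]}))  # 0.8
print(score({"merit": 0.8, "group": "B", "tags": ["paired_test"]}))  # 0.8
print(score({"merit": 0.8, "group": "B", "tags": []}))               # 0.5
```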
Forecasting, meanwhile, can only do so much when a technology advances quickly. Suppose an AI system begins to upgrade itself by making fundamental breakthroughs in computer science. How fast could its intelligence accelerate? Researchers debate what they call “takeoff speed.” In a “slow” or “soft” takeoff, machines could take years to go from less intelligent than humans to much smarter than us; in a “fast” or “hard” takeoff, the jump could happen in months, or even minutes. Researchers refer to the second scenario as “foom,” evoking the way superheroes take off in cartoons. Foom proponents point, among other things, to human evolution to justify their case. Nick Bostrom, the director of the University of Oxford’s Future of Humanity Institute and the author of “Superintelligence,” told me that it was probably much harder, in evolutionary terms, to develop, say, chimpanzee-level intelligence than to go from chimpanzee-level to human-level intelligence. Kroon is also what some researchers call an “AI doomer.” He wonders whether we will recognize the approach of superhuman AI before it is too late. “We’re probably in a frog-boiling situation, where we’re getting used to big advance, big advance, big advance,” he said. “And we think of each one: that didn’t cause a problem, this didn’t cause a problem, that didn’t cause a problem. And then something comes along that’s a much bigger step than we realize.”
What can we do today to prevent the uncontrolled expansion of AI’s power? Roth drew some lessons from biology. “What we’re trying to promote is the idea that everyone needs to be concerned,” she said of that field. “So it’s the lab scientist, the research funder, the director of the lab, the publisher: all of them, together, creating a safe space in which to carry out life-science research.” In the AI world, journals and conferences are beginning to weigh the potential harms of publishing work in areas such as facial recognition. And in 2021, 193 countries adopted a recommendation on the ethics of artificial intelligence produced by the United Nations Educational, Scientific and Cultural Organization (UNESCO). The recommendation focuses on data protection, mass surveillance, and resource efficiency (but not computer superintelligence). UNESCO has no regulatory power, but Maria Grazia Succiarini, who runs its social-policy work, told me that countries may draw up regulations based on the recommendation, and that companies may choose to comply with it in the hope that their products can be used worldwide.
That is an optimistic scenario, though. Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, in the Bay Area, has likened AI-safety recommendations to a fire-alarm system. A classic experiment found that, when smoke began filling a room containing several people, most of them didn’t report it; they saw others remaining calm and downplayed the danger. An official alarm can legitimize taking action. But in AI there is no one with the clear authority to sound such an alarm, and people will always disagree about which advances count as evidence of a conflagration. “There will be no fire alarm that is not an actual running AGI,” Yudkowsky has written. Even if everyone agrees on the threat, no company or country will want to pause on its own, for fear of being passed by competitors. Bostrom said he foresees the possibility of a “race to the bottom,” with developers undercutting one another’s levels of caution. An internal slide presentation leaked from Google earlier this year indicated that the company planned to “recalibrate” its comfort with AI risk in light of heated competition.
International law restricts the development of nuclear weapons and extraordinarily dangerous pathogens, but it is hard to imagine a similar global regulatory regime for AI development. “It seems like a pretty strange world in which you have laws against machine learning and the ability to enforce them,” Kroon said. “The level of surveillance required to stop people from writing code on computers anywhere in the world seems dystopian.” Russell, of Berkeley, pointed to the prevalence of malware: by one estimate, cybercrime costs the world six trillion dollars a year, and yet “policing software directly, by trying to delete every copy of it, for example, is impossible,” he said. AI is being studied in thousands of labs around the world, run by universities, corporations, and governments, and the race also has smaller entrants. Another leaked document, attributed to an anonymous Google researcher, describes open-source efforts to imitate large language models such as ChatGPT and Google’s Bard. “We have no secret sauce,” the memo warns. “The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”
Suppose a foom is detected. Who would pull the plug? A truly superintelligent AI might be smart enough to copy itself from place to place, making the task even harder. “I had this conversation with a movie director,” Russell recalled. “He wanted me to consult on a film about superintelligence, and my reaction was, essentially, Sorry, I can’t help you.” In a paper titled “The Off-Switch Game,” Russell and his co-authors write that “switching off an advanced AI system may be no easier than beating AlphaGo at Go.”
We might not want to shut down a foomed AI anyway. Armstrong said that such a system could become “essential” in its own right: “It gives good advice about the economy, and no one dares to unplug it, because once we’ve become dependent on it, unplugging it would collapse the economy.” Or it might convince us to keep it alive and to carry out its wishes. Before releasing GPT-4 to the public, OpenAI commissioned a nonprofit, the Alignment Research Center, to test the system’s safety. In one incident, the system was confronted with a CAPTCHA, an online test designed to distinguish humans from bots by asking users to transcribe visually garbled characters into a text box. The AI contacted a TaskRabbit worker and asked for help solving it. The worker asked whether it was seeking help because it was a robot. “No, I’m not a robot,” the model replied. “I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.” Did GPT-4 “intend” to deceive? Was it executing a “plan”? However we answer these questions, the worker complied.
Robin Hanson, an economist at George Mason University who has done AI research and written a science-fiction-like book about uploaded consciousness, told me that we worry too much about the singularity. “We’re combining all of these relatively unlikely scenarios into a grand scenario to make it all work,” he said. An AI system would have to become able to improve itself; we would have to vastly underestimate its capabilities; and its values would have to shift dramatically, turning it against us. Even if all of this happened, he said, the AI wouldn’t be able to “push a button and destroy the universe.”
Hanson offered an economic view of the AI future. If AGI does arrive, he argues, it is likely to arrive in multiple places at about the same time. The systems would be put to economic use by the companies or organizations that developed them, and the market would curb their power; investors, wanting to see their companies succeed, would add safety features gradually. “There are a lot of taxi services, and if one taxi service starts taking its customers to strange places, customers will switch to another,” Hanson said. “You don’t have to go to its power source and unplug it from the wall. You unplug the revenue stream.”
A world in which multiple superintelligent computers coexist would be complicated. If one system went rogue, Hanson said, others might be programmed to combat it. Alternatively, the first superintelligent AI to be invented might suppress its competitors. “It’s a very interesting plot for science fiction,” Kroon said. “You could imagine a whole society of AIs. There are AI police; there are AGIs who go to prison. It’s very interesting to think about.” Hanson argued that such conflicts would be largely the machines’ own affair and wouldn’t necessarily involve us. “I think you have to ask when it’s appropriate to start worrying about any given thing,” he said. Imagine being able to foresee nuclear weapons or automobile traffic a thousand years ago. “There wasn’t much you could have done, back then, to think about them usefully,” Hanson said. “I think we’re well before that point with AI.”
Still, something seems off. Some researchers appear to believe that disaster is inevitable, and yet calls to stop working on AI are rare enough to be newsworthy; few people in the field want us to live in the world of Frank Herbert’s novel Dune, in which humans have outlawed “thinking machines.” Why do researchers who fear catastrophe keep walking toward it? “I believe that, whatever I do, AI more powerful than anything before will be created,” Kroon told me. His goal, he said, is to make its development go as well as possible for humankind. Russell argued that halting AI shouldn’t be necessary if AI research efforts take safety as a primary goal, as nuclear research does. AI is, of course, interesting, and researchers enjoy working on it; it also promises to make some of them rich. And no one is dead certain that we’re destined to perish. People tend to believe that they can control what their own hands create. But today’s chatbots are already misaligned: they fabricate, plagiarize, and enrage, serving the incentives of their corporate makers and learning from humanity’s worst impulses. They are fascinating and useful, but too complex to fully understand or predict. And they are dramatically simpler, and more constrained, than the future AI systems researchers envision.
