AI keeps getting more powerful. But can researchers make it safe?

Soon after Alan Turing laid the foundations of computer science in 1936, he began to wonder whether humans could one day build machines with human-like intelligence. The modern field concerned with this question, artificial intelligence, has come a long way since then. But a truly intelligent machine that can independently perform many different tasks has yet to be invented. And while science fiction has long imagined AI taking on malevolent forms such as amoral androids and murderous Terminators, today's AI researchers are often more worried about the everyday AI algorithms that are already entwined with our lives, and the problems that have already come with them.


Today's AI can only automate certain specific tasks, but it is already causing significant concern. Over the past decade, engineers, academics, whistleblowers, and journalists have repeatedly documented cases in which AI systems, composed of software and algorithms, caused or contributed to serious human harm. Algorithms used in the criminal justice system can unfairly recommend denying parole. Social media feeds can steer toxic content toward vulnerable teenagers. AI-guided military drones can kill without any moral reasoning. Furthermore, AI algorithms tend to be more like mysterious black boxes than clockwork mechanisms: researchers often cannot explain how these algorithms, built on opaque equations involving billions of calculations, arrive at their results.

These problems have not gone unnoticed, and academic researchers are trying to make such systems safer and more ethical. Companies that build AI-centered products are also working to eliminate harms, though they tend to offer little transparency about those efforts. The known dangers of AI, as well as its potential future risks, have become broad drivers of new AI research, says Jonathan Stray, an AI researcher at the University of California, Berkeley. Even scientists focused on more abstract questions, such as the efficiency of AI algorithms, can no longer ignore the field's social implications. "For the most part, in the last 30 years that I've been in AI, people haven't really cared," says Pascale Fung, an AI researcher at the Hong Kong University of Science and Technology.

Concerns have grown as AI has become more prevalent. In the mid-2010s, for example, some web search and social media companies began incorporating AI algorithms into their products. They found they could build algorithms that predict which users are likely to click on which ads, thereby increasing profits. All of this was made possible by advances in computing power that dramatically improved the "training" of these algorithms, the process by which they learn from examples to achieve high performance. But as AI steadily penetrated search engines and other applications, observers began to notice and question problems. In 2016, investigative journalists claimed that certain algorithms used in parole evaluations were racially biased.
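To make the "learning from examples" described above concrete, here is a minimal, hypothetical sketch of a click-prediction model trained on synthetic data with scikit-learn. The features, data, and numbers are invented for illustration and do not reflect any real company's system.

```python
# Toy click-prediction model: learn from labeled examples (clicked / not clicked)
# and predict which users are likely to click on which ads.
# All data here is synthetic; real systems use far richer features and models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features: [time on site, past clicks, ad relevance score]
X = rng.normal(size=(5000, 3))
# Synthetic "ground truth": clicks become likelier as relevance and past clicks rise
logits = 1.5 * X[:, 2] + 0.8 * X[:, 1] - 0.5
y = rng.random(5000) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)  # "training" on examples
print("held-out accuracy:", model.score(X_test, y_test))

# Rank candidate ads for one user by predicted click probability
candidate_ads = rng.normal(size=(10, 3))
click_prob = model.predict_proba(candidate_ads)[:, 1]
print("best ad index:", int(np.argmax(click_prob)))
```

The same basic loop, learning from logged user behavior and then ranking content by predicted engagement, underlies the recommendation engines discussed later in the article, only at vastly larger scale.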


Although the conclusions of that report have been challenged, designing fair and unbiased AI is now seen by AI researchers as a central problem. Concerns arise whenever AI is deployed to make predictions about people across different demographic groups. And as AI is embedded in ever more decision-making processes, such as screening job resumes or evaluating apartment rental applications, fairness becomes even more important.

Over the past few years, the use of AI in social media apps has become another concern. Many of these apps use AI algorithms called recommendation engines, which work much like ad-serving algorithms to determine what content is shown to each user. Hundreds of families are now suing social media companies, alleging that their algorithm-driven apps direct toxic content to children and cause mental health problems. Seattle's public school district recently filed a similar lawsuit, claiming that social media products are addictive and exploitative. But determining an algorithm's true impact is no easy task: social media platforms release little of the user-activity data that independent researchers would need to make an assessment. "One of the complications with all of this technology is that there are always costs and benefits," says Stray, whose research focuses on recommendation systems. "The difficult position we're in right now is that we don't know what the actual adverse effects are."

The nature of AI's problems is also changing. Over the past two years, several "generative AI" products have been released that can produce text and images of remarkable quality. A growing number of AI researchers believe that powerful future systems could be built on these achievements and might one day pose global, catastrophic dangers that make today's problems pale in comparison.


What form might such future threats take? In a paper posted to the preprint repository arXiv.org in October, researchers at DeepMind (a subsidiary of Google's parent company Alphabet) describe one catastrophic scenario. They imagine engineers developing a code-generating AI, built on existing scientific principles, whose task is to get human coders to adopt its submissions into their coding projects. The idea is that as the AI makes more submissions and some are rejected, human feedback helps it learn to code better. But the researchers suggest that such an AI, driven by the single instruction to get its code adopted, could develop a tragically unsound strategy, such as achieving world domination and forcing the adoption of its code at the cost of overthrowing human civilization.

Some scientists argue that research on the concrete and numerous problems that already exist should take priority over research on hypothetical future disasters. "I think we have a much worse problem today," says computer scientist and AI researcher Cynthia Rudin of Duke University. The nonprofit human rights group Amnesty International, for example, claimed in a report published last September that an algorithm developed by Facebook's parent company Meta "contributed significantly to adverse human rights impacts" on the Rohingya, a Muslim minority group in Myanmar, by spreading content that incited violence. In response to a request for comment from Scientific American, Meta pointed to an earlier statement in which Rafael Frankel, the company's director of public policy for the Asia-Pacific region, acknowledged that Myanmar's military committed crimes against the Rohingya and said that Meta is now participating in intergovernmental investigative efforts led by the United Nations and other organizations.

Other researchers, among them Jan Leike, an AI researcher at OpenAI, say that preventing powerful future AI systems from causing global catastrophe is already a major concern. These hazards are so far purely speculative, but they have undoubtedly fueled the growth of a research community studying a variety of harm-reduction tactics.


In one approach, called value alignment, pioneered by Stuart Russell, an AI scientist at the University of California, Berkeley, researchers seek ways to train AI systems to learn human values and act in accordance with them. One advantage of this approach is that it could be developed today and applied to future systems before they pose catastrophic dangers. Critics say value alignment focuses too narrowly on human values when there are many other requirements for making AI safe. For example, just like humans, AI systems need a foundation of verified, factual knowledge to make good decisions. "The problem is not that AI has the wrong values," says Oren Etzioni, a researcher at the Allen Institute for AI. "The truth is that our actual choices are a function of both our values and our knowledge." With such criticisms in mind, other researchers are working toward a more general theory of AI alignment that seeks to ensure the safety of future systems without focusing narrowly on human values.
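As a rough illustration of what "learning human values" can look like in practice, the sketch below infers a reward function from simulated pairwise human preferences (a Bradley-Terry-style model, one common ingredient in alignment research). It is a toy example built on invented assumptions, not Russell's specific method or any production system.

```python
# Toy preference learning: recover a hidden "human values" vector from
# pairwise comparisons, then use it to choose actions. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)

true_values = np.array([2.0, -1.0, 0.5])   # hidden human preferences (unknown to the system)
options = rng.normal(size=(200, 3))        # feature vectors describing possible actions

# Simulate human feedback: for random pairs, the human prefers the higher-value option
pairs = rng.integers(0, len(options), size=(1000, 2))
pref_a = (options[pairs[:, 0]] @ true_values) > (options[pairs[:, 1]] @ true_values)

# Fit a reward model w by maximizing the Bradley-Terry log-likelihood with gradient ascent
w = np.zeros(3)
lr = 0.05
for _ in range(2000):
    diff = options[pairs[:, 0]] @ w - options[pairs[:, 1]] @ w   # predicted preference margin
    p_a = 1 / (1 + np.exp(-diff))                                # P(human prefers option A)
    grad = ((pref_a - p_a)[:, None] * (options[pairs[:, 0]] - options[pairs[:, 1]])).mean(axis=0)
    w += lr * grad

print("learned direction:", w / np.linalg.norm(w))
print("true direction:   ", true_values / np.linalg.norm(true_values))

# The system then acts according to the learned values: pick the highest-reward option
best = options[np.argmax(options @ w)]
print("chosen action features:", best)
```

Etzioni's criticism can be read directly off this sketch: even if the learned direction matches the true one, the system chooses well only among the options it knows about and the features it can see, which is where factual knowledge enters the picture.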

Some scientists have adopted approaches to AI alignment that they consider more practical and relevant to the present. Consider recent advances in text-generation technology: prominent examples such as DeepMind's Chinchilla, Google Research's PaLM, Meta AI's OPT, and OpenAI's ChatGPT are all capable of producing racially biased, illicit, or deceptive content, a challenge each of these companies acknowledges. Some of them, including OpenAI and DeepMind, view such failures as problems of poor alignment. They are now working to improve the alignment of text-generating AI and hope this will yield insights for aligning future systems.

Researchers admit that there is still no general theory of AI alignment. "We don't really have an answer for how to align systems that are much smarter than humans," one researcher concedes. But whether AI's worst problems lie in the past, the present, or the future, a lack of effort to solve them, at least, is no longer the biggest obstacle.


