“Can do both”: experts pursue “good AI” while avoiding the bad



According to one artificial intelligence expert, humanity has arrived at a crossroads that can be summed up simply: AI will either get better or get worse.

“I think there are two futures here,” the author and professor Gary Marcus said on Friday at the United Nations’ AI for Good Global Summit.

In the brighter version, AI revolutionizes healthcare, helps tackle the climate emergency, and provides compassionate care for the elderly. But we may be standing on the brink of a darker alternative, in which AI fuels uncontrollable cybercrime, catastrophic conflict, and a slide into anarchy. “I’m not saying what will happen, I’m saying we need to understand what we’re doing,” Marcus told the summit.

During a week-long event ostensibly focused on the positive, attendees heard a wide range of examples of AI being used for the benefit of humanity. A cast of robot ambassadors, whose roving gazes can feel unnerving in person, demonstrated how AI might help older people maintain their independence for longer, or help children with autism learn about the world without becoming overwhelmed.

Google DeepMind chief operating officer Lila Ibrahim explained how the company’s protein-folding breakthrough could transform medicine. Amazon chief technology officer Werner Vogels described a machine-vision system that tracks 100,000 salmon kept in pens in order to detect disease. AI-driven fish farming may not be the most heartwarming image, but, he argued, it has the potential to radically reduce the carbon footprint of global food production. In what may have been a nod to those who see “AI for good” primarily as a PR effort, Vogels said that cutting-edge technology “does not just enable AI for good; it can be AI for profit at the same time”.

But behind the scenes, roundtable discussions between diplomats and invited delegates focused on the pressing question of how to avoid “bad AI”, rather than on “AI for good”.

“It’s not enough for Google to develop artificial intelligence at scale and make it profitable, nor is it enough for them simply not to be evil,” said Professor Joanna Bryson. “Good and evil may be opposites, but doing good and doing evil are not. You can do both.”

Some say this risk applies even to seemingly positive applications of AI. A robot tasked with fetching coffee, for example, might mow down everything and everyone in its path in pursuit of that narrow goal. ChatGPT is remarkably adept with language, but it seems unable to stop making things up.

“If a human acted like that, you could say they had a kind of psychosis,” said Stuart Russell, an AI pioneer at the University of California, Berkeley. Yet no one fully understands the inner workings of ChatGPT, and it cannot simply be programmed to tell the truth. “There is no place to put that rule,” Russell said.

“We know how to make AI that people want, but we don’t know how to make AI that people can trust,” Marcus said.

The problem of how to imbue AI with human values, sometimes called the “alignment problem”, is not a well-defined computational puzzle that can simply be solved and written into law. It means that the question of how to regulate AI involves not only weighty commercial, social and political interests, but also a large, open scientific problem that has yet to be addressed.

Scientists and some tech companies are grappling with these questions, but at times it is a game of catch-up with technology already in use. Marcus used his own presentation to launch the Center for Trusted AI Advancement, which he hopes will function as an international, philanthropically funded organization devoted to the subject, akin to Cern.


Maja Matarić, a professor at the University of Southern California, described new research, published on arXiv, analyzing the nature of large language models and how they can be shaped pro-socially to “keep them safe”. “We don’t want them to have strange personalities,” she said. “Well-designed systems are beneficial to humankind.”

Others would like to see more emphasis on the AI already in widespread use, rather than on far-off scenarios of superhuman intelligence that may never materialize.

“Group discrimination, black-box problems, data-protection breaches, mass unemployment, environmental destruction: these are the real existential risks,” said Professor Sandra Wachter of the University of Oxford, one of the summit’s speakers. “We need to focus on these issues now and not be distracted by hypothetical risks.”

In any case, there is a rapidly growing consensus among tech companies and governments that governance is necessary. “It should happen fairly quickly … in half a year, or less than a year,” said Dr Reinhard Scholl of the United Nations’ International Telecommunication Union, a co-founder of the AI for Good summit. “People agree that if you had to wait years, that would not be good.”


