Geoffrey Hinton, AI, and Google’s Ethics Issues



Debates about the dangers of artificial intelligence, real and imagined, are heating up, much of the heat generated by the growth of generative chatbots. When scrutinizing the critics, it pays to consider their motives. What do they get out of taking a particular stance? In Geoffrey Hinton’s case, the man widely considered the “godfather of AI,” the scrutiny is greater than for most. And it should be.

Hinton comes from the “connectionist” school of AI, a once discredited approach that takes the human brain as its model and, more broadly, builds neural networks that mimic its behavior. This view is at odds with that of the “symbolists,” who see AI as machines manipulating symbols according to defined rules.

As John Thornhill noted in the Financial Times, Hinton’s rise, along with that of other members of the connectionist tribe, meant they “could no longer be ignored by the mainstream AI community.”

Before long, deep learning systems were all the rage, and the big tech world clamored for names like Hinton. He and his colleagues came to command exorbitant salaries at the top of Google, Facebook, Amazon and Microsoft. At Google, Hinton served as vice president and engineering fellow.

Hinton’s departure from Google, where he headed the Google Brain team, has sparked speculation. One school of thought held that he left in order to criticize the very company that had supported his work over the years. Given Hinton’s own role in advancing generative AI, that would be a little rich. In 2012, he and his students developed a self-training neural network that could identify common objects in photographs with considerable accuracy.

The timing is also interesting. Just over a month earlier, the Future of Life Institute had published an open letter warning of the implications of AI developed beyond the powers of OpenAI’s GPT-4 and allied systems. A number of questions were posed. “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

The letter, which called for a six-month moratorium on developing such large-scale AI projects, featured a number of names that somewhat devalued the warning. After all, many of the signatories have played no negligible role in promoting the very automation, obsolescence and “loss of control of our civilization” they now caution against. When Elon Musk, Steve Wozniak and others add their signatures to a call for a moratorium on technological development, bullshit detectors the world over should start twitching.

The same principle should apply to Hinton. He is obviously looking for greener pastures and, in doing so, is engaging in a good deal of self-promotion. This takes the form of a light rebuke of what he was responsible for creating. “The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. […] Obviously, I no longer think that.” You would think he, of all people, should have known better.

Hinton’s Twitter feed, if it is anything to go by, suggests that he neither left Google out of bitterness nor has any intention of disowning the Google enterprise. “In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.”

This rather odd form of reasoning suggests that criticism of AI can somehow exist independently of the very companies that profit from developing such projects, and that developers like Hinton can thereby avoid charges of collusion. The fact that he apparently felt incapable of developing a critique of AI, or proposing a regulatory framework, from within Google undermines the sincerity of the move.

In response to the departure of his longtime colleague, Jeff Dean, chief scientist and head of Google DeepMind, did his best to suggest that the waters would remain calm, much to everybody’s relief. “Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions at Google. […] As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

Many in the AI community felt something else was afoot. Computer scientist Roman Yampolskiy, responding to Hinton’s remarks, correctly observed that concerns about AI safety are not, and should not be, mutually exclusive with research within an organization: “We should normalize being concerned about AI safety without having to quit our jobs as AI researchers.”

To be sure, Google has what might be called an ethics issue with AI development. The company has been quite diligent in muzzling internal discussions on the matter. Margaret Mitchell, who co-founded Google’s ethical AI team in 2017, was herself fired following an internal investigation sparked by the sacking of fellow team lead Timnit Gebru.

Gebru was scalped in December 2020 after co-authoring a paper on the dangers posed by AI systems trained on vast amounts of data. Both Gebru and Mitchell had been critical of the striking lack of diversity in the field, with Mitchell describing it as “a sea of dudes.”

As for Hinton’s own philosophical dilemmas, they are far from sophisticated ones. Whatever his role as a Frankenstein in creating the very monster he now warns about, it is unlikely to disturb his sleep. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times. “It is hard to see how you can prevent the bad actors from using it for bad things.”

Dr. Binoy Kampmark was a Commonwealth Scholar at Selwyn College, Cambridge. He currently lectures at RMIT University. Email: bkampmark@gmail.com

© Scoop Media

