Top Google boffin Hinton quits, warns of AI dangers, regrets some of his life's work



Machine learning pioneer Geoffrey Hinton, best known for his work on neural networks, has resigned from Google to speak candidly about the dangers of AI.

Hinton, 75, is a professor of computer science at the University of Toronto and a former top Google researcher who began working with neural networks long before they became popular. Over the decades, he has developed new artificial intelligence algorithms and architectures and created techniques to train models and process data. His research paved the way for the current machine learning boom.

In 2018, he won the prestigious Turing Award for his work on deep learning, along with Yoshua Bengio, professor of computer science at the University of Montreal, and Yann LeCun, chief AI scientist at Meta.

Hinton said he resigned from Google last month, and that part of him now regrets a lifetime of work in the field. "I console myself with the usual excuse: if I hadn't done it, someone else would have," he told Cade Metz, a former Register vulture who now writes for The New York Times.

If I hadn't done it, someone else would have

Hinton said he became increasingly concerned about the risks of AI, especially after Google built and deployed Bard, its own web-search chatbot interface, to compete with Microsoft's machine-learning-boosted Bing.

As we see it, Microsoft grabbed the chatbot ball from OpenAI, ran with it, and rolled the tech out across its software empire to impress netizens, leaving Google to reluctantly play catch-up. Google invented the Transformer architecture underpinning today's chat interfaces and uses machine learning extensively behind the scenes, and though it seems uncertain about the positive impact this technology will have on the world, the search king is keen not to appear left behind.

As Hinton sees it, commercial interests fueling competition between corporations will make it difficult to stop the technology from advancing, being pushed further into everyday life, and reshaping society.

"Look at how it was five years ago and how it is now," he said. "Take the difference and propagate it forwards. That's scary."

Hinton added that generative AI tools will make it easy for anyone to create fake images, text, video, and audio, making it impossible to know whether anything on the internet is true.

These types of models can also be instructed to write code, and in the future may be able to run their own programs autonomously, he suggested. Left unchecked, he believes the technology could one day produce software and machines that harm humans: so-called killer robots.

"The idea that this stuff could actually get smarter than people, a few people believed that," he said. "But most people thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

He opined that today's ML tools have been adopted across industries, mostly affecting white-collar jobs. Analysts predict such models will boost employee and company productivity, replacing some jobs while creating new ones. Hinton said AI "takes away the drudgery" but warned that labor disruption "could take away more than that."

Hinton said he left Google so he could talk about the dangers of AI without upsetting his employer, and thought the Silicon Valley giant had "acted very responsibly." ®





