“A lot of the headlines say you should stop now, but I never said that,” he says. “First of all, I don’t think it’s possible. I think we should keep developing it because we can do great things. But we should put equal effort into mitigating or preventing the possible bad consequences.”
Hinton says he did not leave Google in order to protest its handling of this new form of AI. In fact, he says, the company has moved relatively cautiously despite its lead in the field. Researchers at Google invented the type of neural network known as a transformer, which has been essential to the development of models such as PaLM and GPT-4.
In the 1980s, Hinton, a professor at the University of Toronto, was among a handful of researchers trying to make computers smarter by training artificial neural networks with data rather than programming them in the traditional way. Such a network digests pixels as input and, as it sees more examples, adjusts the values connecting its crudely simulated neurons until the system can recognize the contents of an image. The approach showed flashes of promise over the years, but its true power and potential became apparent only about a decade ago.
In 2018, Hinton received the Turing Award, the most prestigious prize in computer science, for his work on neural networks. He received it alongside two fellow pioneers, Yann LeCun, chief AI scientist at Meta, and Yoshua Bengio, a professor at the University of Montreal.
Around that time, a new generation of multilayered artificial neural networks, fed large amounts of training data and run on powerful computer chips, had become far better than any existing program at labeling the contents of photos.
This technology, known as deep learning, ushered in a renaissance in artificial intelligence. Big tech companies rushed to recruit AI experts to build ever more powerful deep learning algorithms and apply them to products such as facial recognition, translation, and speech recognition.
Google hired Hinton in 2013 when it acquired his company DNNResearch, founded to commercialize his university lab's deep learning ideas. Two years later, Ilya Sutskever, one of Hinton's graduate students who had also joined Google, left the search company to cofound OpenAI, a nonprofit created as a counterweight to the power big tech companies were amassing in AI.
From the beginning, OpenAI focused on scaling up the size of its neural networks, the amount of data they consume, and the computing power they require. In 2019, the company reorganized as a for-profit corporation with outside investors, and it has since taken $10 billion from Microsoft. OpenAI has developed a series of strikingly fluent text-generation systems; the latest, GPT-4, which powers the premium version of ChatGPT, has startled people with its ability to perform tasks that seem to require reasoning and common sense.
Hinton believes the technology we already have is disruptive and destabilizing. He points to the risk that more sophisticated language algorithms will enable more sophisticated misinformation campaigns and interference in elections, a danger others have raised as well.
