Hinton made headlines in May, shortly after the release of ChatGPT captured the world’s imagination, by announcing that he had left Google after a decade of employment to talk more freely about the dangers of AI.
The highly respected AI scientist, based at the University of Toronto, was speaking to a packed audience at the Collision technology conference in Canada.
The conference brought together more than 30,000 startup founders, investors and tech insiders, most of whom came hoping to learn how to ride the AI wave, not to hear lessons about its dangers.
“Before AI gets smarter than us, I think the people building it should be encouraged to put a lot of work into understanding how it might try to take control,” Hinton said.
“Right now there are 99 very smart people trying to make AI better, and one very smart person trying to figure out how to stop it taking over. Maybe you want to be more balanced,” he said.
Hinton urged that the risks of AI be taken seriously, despite critics who argue that such warnings are overblown.
“I think it’s important for people to understand that this isn’t science fiction, and it’s not just scaremongering,” he insisted. “It is a real risk that we must think about, and we need to figure out in advance how to deal with it.”
Hinton also expressed concern that AI could deepen inequality, as the significant productivity gains from its adoption would flow to the wealthy rather than to workers.
“The wealth isn’t going to go to the people doing the work. It is going to make the rich richer, not the poor, and that’s very bad for society,” he added.
He also pointed to the dangers of fake news created by ChatGPT-style bots, saying he hoped AI-generated content could be marked in some way, much as central banks watermark cash.
“It’s very important to try, for example, to mark everything that is fake as fake. Whether we can do that technically, I don’t know,” he said.
The European Union is considering such a technique in its AI Act, the legislation that will set the rules for AI in Europe and is currently being negotiated by lawmakers.
“Mars Overpopulation”
Hinton’s list of AI dangers contrasted with the discussion elsewhere at the conference, which was less about safety and threats than about seizing the opportunities opened up by ChatGPT.
Quoting another AI guru, Andrew Ng, venture capitalist Sarah Guo said it was too early to talk about AI as an existential threat, likening it to talking about “overpopulation on Mars.”
She also warned against “regulatory capture,” in which government intervention ends up protecting incumbents before AI has had the chance to benefit fields such as health, education and science.
Opinions were divided on whether the current generative AI giants, primarily Microsoft-backed OpenAI and Google, will remain unrivalled, or whether new players will broaden the field with their own models and innovations.
“Five years from now, I still imagine that if you want the best, most accurate, state-of-the-art general-purpose model, you’ll probably have to go to one of the few companies with the capital to build one,” said Leigh Marie Braswell of venture capital firm Kleiner Perkins.
Zachary Bratun-Glennon of Gradient Ventures said he foresees a future in which “there will be millions of models on the network, much like the networks of websites we have today.”
