The "Godfather of AI" says there is an important difference between OpenAI and Google

When it comes to winning the AI race, the "Godfather of AI" believes there is an advantage in having nothing to lose.

In an episode of "The Diary of a CEO" podcast that aired on June 16, Geoffrey Hinton laid out what he considers an important difference between OpenAI and his former employer, Google, in how each has treated AI safety.

“When they had these big chatbots, they didn't release them because they were worried about their reputation,” Hinton said of Google. “They had a very good reputation and didn't want to hurt it.”

Google released its AI chatbot, Bard, in March 2023; it was later folded into the company's broader suite of large language models, now called Gemini. Google has been playing catch-up ever since OpenAI released ChatGPT at the end of 2022.

Hinton, who earned his nickname for pioneering work on neural networks, explained on the podcast that the key difference was that OpenAI could afford to move faster.

Speaking at an all-hands meeting shortly after ChatGPT's announcement, Google's then-head of AI said the company wasn't planning to release chatbots anytime soon due to "reputational risk," adding that it needed to be "more conservative than a small startup."

Demis Hassabis, the CEO of Google DeepMind and the company's AI chief, said in February that AI could pose long-term risks, such as agentic systems getting "out of control." He has advocated for a governing body to regulate AI projects.

Gemini has made several high-profile mistakes since its launch, including bias in its written responses and image-generation capabilities. Google CEO Sundar Pichai addressed the controversy in a note to staff last year, saying the company had gotten it "wrong" and vowing changes.

Hinton watched Google's decision-making on its early chatbots from the inside. He spent more than a decade at the company before leaving in 2023 to speak more freely about what he describes as the dangers of AI. On Monday's podcast episode, however, Hinton said he never faced internal pressure to stay silent.

"Google encouraged me to stay and work on AI safety, and told me I could do whatever I liked on AI safety," he said. "You censor yourself. If you work for a big company, you don't feel it's right to say things that will damage the big company."

Overall, Hinton said he thinks Google "actually acted very responsibly."

Hinton was less certain about OpenAI, a company he never worked for. When asked earlier in the episode whether its CEO, Sam Altman, has a "moral compass," he said, "We'll see." He added that he doesn't know Altman personally and declined to comment further.

OpenAI has faced criticism in recent months over shifts in its approach to safety. In a recent blog post, the company said it would only change its safety requirements after confirming that doing so "does not significantly increase the overall risk of serious harm." Its safety focus areas include cybersecurity, chemical threats, and AI's ability to improve itself.

In an interview at TED2025 in April, Altman defended OpenAI's approach to safety, saying the company's preparedness framework outlines "where I think there are the most important moments of danger." Altman also acknowledged in the interview that OpenAI had relaxed some restrictions on model behavior in response to user feedback about censorship.

The earlier race between OpenAI and Google to release their first chatbots was fierce, and the competition for AI talent is now intensifying. A document reviewed by Business Insider shows that Google relied on ChatGPT in 2023.

Representatives for Google and OpenAI did not respond to BI's requests for comment.
