It’s hard not to worry when Geoffrey Hinton, the godfather of artificial intelligence, leaves Google and says he regrets his life’s work.
Hinton, who made an important contribution to AI research in the 1970s with his work on neural networks, told several news outlets this week that big tech companies were moving too quickly to deploy AI to the public. Part of the problem was that AI was achieving human-like capabilities sooner than experts predicted. “It’s terrifying,” he told the New York Times.
Hinton’s concerns are certainly valid, but they would have been more effective had they come a few years earlier, when other researchers — who didn’t have retirement to fall back on — were sounding the same alarm bells.
Notably, Hinton sought in a tweet to clarify how the New York Times had characterized his motives, worried that the article suggested he left Google in order to criticize it. “Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google,” he wrote. “Google has acted very responsibly.”
Hinton’s prominence in the field may have insulated him from blowback, but the episode highlights a chronic problem in AI research: big tech companies have such a grip on AI research that many of their scientists are afraid to voice their concerns for fear of hurting their career prospects.
I can understand why. Meredith Whittaker, a former Google research manager, said she had to spend thousands of dollars on lawyers after she helped organize a 2018 walkout of 20,000 Google employees over the company’s contract with the U.S. Department of Defense. “Going up against Google is really, really scary,” she told me. Whittaker, now the president of the encrypted messaging app Signal, eventually resigned from the search giant after publicly warning about the company’s direction.
Two years later, Google AI researchers Timnit Gebru and Margaret Mitchell were fired from the tech giant after publishing a research paper highlighting the risks of large language models — the technology now at the center of concerns about chatbots and generative AI. They pointed to issues such as racial and gender bias, opacity, and environmental costs.
Whittaker resents the fact that Hinton is now the subject of impassioned portraits of his contributions to AI, after others took far greater risks to stand up for their beliefs while still working at Google. “People with much less power and marginalized positions were taking real personal risks to name the problems of AI and the companies controlling AI,” she says.
Why didn’t Hinton speak up sooner? The scientist declined to answer the question. But he appears to have been concerned about AI for some time, including during the years when his colleagues were calling for a more measured approach to the technology. A 2015 New Yorker article described him talking to another AI researcher at a conference about how politicians could use AI to terrorize people. Asked why he was still doing the research, Hinton cited the “technically sweet” appeal of the work — paraphrasing J. Robert Oppenheimer’s famous remark about the atomic bomb.
Hinton said Google has acted “very responsibly” in deploying AI, noting that the company limited the capabilities of Bard, its rival to ChatGPT.
But being responsible also means being transparent and accountable, and Google’s history of suppressing internal concerns about its technology does not inspire confidence.
We hope Hinton’s resignation and warning will inspire other researchers at big tech companies to speak out about their concerns.
Technology conglomerates are swallowing up the brightest minds in academia with the lure of high salaries, generous benefits, and the enormous computing power needed to train and experiment with powerful AI models.
Still, there are signs that some researchers are at least considering speaking up. “Consider when you would quit [your AI lab],” Catherine Olsson, a technical staffer at the AI safety firm Anthropic, tweeted Monday in response to Hinton’s comments. “I already know this move will affect me.”
Many AI researchers seem to fatalistically accept that now that generative AI has been unleashed upon the world, little can be done to stem the tide. As Anthropic co-founder Jared Kaplan told me in an interview published Tuesday, “The cat is out of the bag.”
But if today’s researchers spoke up at critical moments, rather than on the eve of retirement, all of us would likely benefit.
This column does not necessarily reflect the opinions of the editorial board or Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous.”