- Character.ai CEO Noam Shazeer, a former Googler who worked in AI, spoke on the "No Priors" podcast.
- He said Google held back from launching a chatbot out of fear it might say something wrong.
- Shazeer left the company to start Character.AI, which builds chatbots that can mimic celebrities.
Noam Shazeer, a former Google Brain engineer and a key figure in the development of the company's large language model technology, said Google held back from releasing chatbots for years out of fear of repercussions if they said something wrong.
Shazeer, now the CEO of Character.ai, recently appeared on the "No Priors" podcast to talk about his startup, one of the hottest companies in generative AI. The startup, which has raised nearly $200 million in funding, lets users converse with AI-powered virtual "characters" that mimic different personalities, such as a psychologist, Elon Musk, or a life coach.
Like ChatGPT, Character.ai's technology draws its knowledge from large amounts of text collected from the web. OpenAI's launch of ChatGPT late last year set the internet on fire and sparked renewed interest in generative AI. Microsoft invested billions in OpenAI and began integrating the technology into Bing, letting users ask questions and get detailed answers directly within search. Google quickly responded with Bard.
The search giant didn't need to put itself in this defensive position, Shazeer explained in the podcast interview, because Google had much of the technology in place years ago. Shazeer was the lead author of the Transformer paper at Google, widely cited as the key to today's chatbots. He co-founded Character.ai with startup president Daniel De Freitas, also formerly of Google Brain.
De Freitas has a "lifelong mission" to make intelligent chatbots a reality, and, Shazeer said, joined Google in 2016 after reading the company's research on language technology. De Freitas saw the potential of building a chatbot on top of that extensive linguistic research.
"He didn't get a lot of people. He started this thing as a 20% project," Shazeer said, referring to Google's program that lets employees spend part of their time on side projects. "He then recruited an army of 20% helpers, who simply assisted him on this system, ignoring their day-to-day work."
Ultimately, De Freitas created Meena, a chatbot that was publicly demonstrated in 2020 and later renamed LaMDA.
"He built really cool stuff that actually worked, while other people were building systems that just failed," Shazeer said.
Despite De Freitas’ enthusiasm and support from other staff, Shazeer said Google didn’t think chatbots would gain enough traction to justify the reputational risk.
When asked why Google didn't release a chatbot, Shazeer said his own view was that you should launch as soon as you can.
LaMDA was the subject of controversy last year after engineer Blake Lemoine claimed the bot was sentient and therefore deserving of human rights; he was eventually fired from the company. Google had also faced internal backlash from its AI researchers, including Timnit Gebru, who warned against publishing anything that could cause harm. Google spent considerable time training Bard to provide acceptable answers.
Concerns about chatbots are not unfounded. They may give wrong or biased answers to questions. Publishers and other copyright owners fear that Google and Microsoft will use proprietary data to steer traffic away from their websites by returning information directly within search results. Consumers have also used chatbots to conduct conversations of a sexual nature, which Character.ai expressly prohibits.
Google has reportedly set aside many of its ethical concerns this year out of fear that the OpenAI-Microsoft partnership could take search market share. Samsung is considering making Bing the default search engine on its smartphones, The New York Times reports.