Asking for forgiveness rather than permission is a favorite Silicon Valley business model, from Uber's early days when it expanded into cities without seeking approval from local authorities, to social networking companies' lax handling of user data.
As the AI market booms, the cycle of forgiveness is starting to pick up pace again.
Take Google's latest AI furore: The company published a lengthy blog post on Thursday explaining why its new AI search feature, which was automatically enabled for all U.S. users this month and can't be opted out of, has been telling users to put glue on their pizza or eat rocks.
Google admitted that its AI search feature isn't smart enough to recognize the satirical and trollish content that exists on the web, especially in online discussion forums like Reddit, and said it is now limiting how much of that content it includes in its AI-generated search results.
Google vice president Liz Reid wrote that the incident “highlights specific areas where we need to improve.”
Also this week, we finally heard from Helen Toner, a former OpenAI board member who left the board in the aftermath of last year's Sam Altman crisis (as you'll recall, the board briefly fired Altman for not being "consistently candid" in his role as CEO).
According to Toner, one of the reasons the board lost confidence in Altman stems from the November 2022 launch of OpenAI's most popular product, ChatGPT. Toner claims the board was not informed of the launch in advance and only found out about it after the fact when people were discussing it on Twitter.
None of these cases are catastrophic — hopefully no one was stupid enough to add glue to their pepperoni pizza — but they highlight ingrained habits in Silicon Valley that shouldn't be taken lightly at a time when we're trying to determine how much regulation we should impose on the AI industry, and how much we should allow the industry to self-regulate.
There are signs that tech companies are acting more responsibly: In recent weeks, OpenAI has signed multimillion-dollar deals with publishers including Vox Media, The Atlantic, and News Corp. The deals allow OpenAI to train its large language models on those publishers' content, rather than simply scraping it all from the web for free.
Of course, OpenAI is currently being sued by The New York Times, which alleges the company did just that. Would these content deals have happened if OpenAI hadn't already been sued for its actions?
Alexei Oreskovic
If you have any comments or suggestions for Data Sheet, please send them here.
Today's edition of Data Sheet was edited by David Meyer.
Newsworthy
A U.S. version of TikTok. According to Reuters, TikTok is building a clone of its recommendation algorithm for its U.S. users that could operate independently of the Chinese-owned parent company's main code. Parent company ByteDance appears to have ordered the effort before the U.S. passed legislation requiring it to either sell TikTok or see the app banned in the U.S., but the report suggests the code-splitting could make such a sale easier. The Chinese government opposes a sale and, as things stand, could block the export of TikTok's recommendation algorithm.
OpenAI reports on disinformation. For the first time, OpenAI has detailed how influence operations have been using its AI. In a blog post, the company said it had disrupted five covert influence operations run by groups in Russia, China, Iran, and Israel. The groups had used its technology to generate text for social media, including long articles and fake replies, but appeared to elicit little actual engagement. As NPR notes, OpenAI's move comes shortly after Meta published its own report on a similar issue.
Siri to control apps. Bloomberg reports that Apple's soon-to-be-overhauled Siri will be able to control in-app features on users' behalf, likely starting with Apple's own apps. The outlet also reports that Apple has "inked a deal with OpenAI" to build ChatGPT into iOS (another report from earlier this week said the deal had been finalized), but that Apple is also in talks with Alphabet about using its Gemini AI in the future.
Losing market share. Tencent's WeChat Pay is, alongside Alipay, one of China's two dominant mobile payment services. According to Nikkei, Chinese regulators have asked Tencent to reduce WeChat Pay's market share in China, though "it is unclear whether Tencent has been given precise numerical targets to achieve."
Our Feed
"People are saying [Y Combinator] fired Sam Altman. That's not true."
—Y Combinator co-founder Paul Graham, refuting claims, reported by The Washington Post, that the startup accelerator fired Altman as its president for prioritizing his own interests. Graham, responding to allegations that first surfaced six months ago, now says YC simply gave Altman the choice between running YC or OpenAI, which had launched a for-profit subsidiary with Altman as CEO. Altman's departure from YC has recently been cited as an example of him being accused of untrustworthiness by an employer.
In case you missed it
Trump media company shares soar after fraud verdict: AP
Tesla hits back at proxy adviser criticizing Elon Musk's stock options: giving the Tesla CEO $45 billion is "ethical," by Marco Quiroz-Gutierrez
Surprisingly, some TikTok creators support a ban on the app to protect national security and help people "live in the moment again," by Alexandra Sternlicht
AI companies are competing for talent on more than just salary packages and GPU access: Their positions on AI safety may start to matter, too, by Sage Lazzaro
Meta AI can answer search queries and recommend cocktail bars, but you can't turn it off even if you want to, reports the Associated Press.
Before you go
Spotify's Car Thing. Spotify's Car Thing streaming device is set to stop working later this year, a move that prompted outraged buyers to file a class-action lawsuit against the company this week. Now, TechCrunch reports that Spotify has agreed to refund anyone who bought the in-car gadget during its brief sales window in 2022, provided they have proof of purchase.