- Meta announced it would not launch its AI assistant in Europe after facing backlash from regulators.
- Regulators have raised privacy concerns about Meta's plans to collect user data to train its AI.
- But Meta says the data is essential to delivering a useful product.
Meta is taking a "my way or the highway" approach with its generative AI products.
Meta said on Friday that it won't release its AI features in Europe because the EU's strict privacy regulations would make them a "second-rate experience."
In a press release, Meta said it was “disappointed” that the Irish Data Protection Commission (DPC) had asked it to “delay the training of large language models (LLMs) using public content shared by adults on Facebook and Instagram,” calling it a “setback for innovation in Europe.”
“We are committed to bringing Meta AI and the model that powers it to more people around the world, including in Europe. But simply put, without including local information, we can only offer people a second-rate experience, which means we are not able to launch Meta AI in Europe at this time,” the company said.
European regulators weren't exactly upset by the decision. In fact, they welcomed the announcement.
“The DPC welcomes Meta's decision to suspend its plans to train large-scale language models using public content shared by adults on Facebook and Instagram across the EU/EEA,” the commission said in a press release on Friday, referring to the European Union (EU) and the European Economic Area (EEA).
Meta has faced increased scrutiny in Europe over its plans to use public content from its apps to train AI. Earlier this month, European advocacy groups called for an end to Meta's data scraping plans, saying the company would use “years of personal posts, private images or online tracking data” to train its AI.