Meta AI can violate the privacy of people who are not even on its social media platforms. Credit: Shutterstock
Meta chief executive Mark Zuckerberg has called it "the most intelligent AI assistant you can freely use." But Barry Smethurst, a 41-year-old record shop worker from Saddleworth, encountered a far less intelligent episode, one he described as "terrifying."
Stuck on a platform waiting for a morning train to Manchester Piccadilly, Smethurst turned to Meta's WhatsApp AI assistant for help. Instead, he ended up with a personal phone number he had no business receiving.
When the chatbot confidently delivered what it claimed was a TransPennine Express customer service mobile number, Smethurst dialed it expecting helpful rail staff. Instead, he reached a confused private individual in Oxfordshire, 170 miles away, whose number was neither publicly listed nor associated with the transport operator. The embarrassment ran both ways.
By the time Smethurst realized he had called someone with no connection to his journey, it was too late: a stranger in Oxfordshire had become the unwitting target of a frustrated traveler seeking train updates.
Smethurst tried again, but the chatbot failed again, connecting him with yet another private number belonging to someone who does not use WhatsApp's AI features. As the Guardian reported, the number belonged to James Gray, a real estate executive in Oxfordshire.
Even WhatsApp violates people's privacy
Meta's WhatsApp AI assistant had now revealed a private phone number for the second time, showing that even the most confident chatbots can be horribly wrong.
The blunder has rekindled debate about artificial intelligence's reliability, privacy protection, and corporate responsibility. Used by millions, Zuckerberg's AI is positioned as a public good: an intelligent helper accessible to all WhatsApp users.
Asked about the claim that Zuckerberg's AI is the "most intelligent," Gray said: "That has been cast into doubt in this case."
For Smethurst, the claim proved hollow. "That's terrifying," he said after filing a complaint with Meta. "If they'd made up a number, that would be more acceptable, but the overreach of pulling an incorrect number from a database it has access to is particularly worrying."
Stronger controls are needed
Meta defends the assistant as a safe and practical innovation, but internal documents show some of the limitations baked into the model. AI systems often draw on data scraped from the web and from internal databases, and assistants are meant to filter out sensitive personal data. In this case, the filtering failed and personal information was disclosed to a stranger.
Meta could have avoided the incident by introducing stronger controls on personal data, reviewing answers before they are delivered, and flagging uncertain answers rather than presenting them with confidence.
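The kind of safeguard described above can be sketched as a simple post-processing step. The function and pattern below are a hypothetical illustration, not Meta's actual pipeline; a real system would use far more robust personal-data detection than a single regular expression. The idea is simply to redact anything resembling a UK mobile number from an assistant's reply before it reaches the user.

```python
import re

# Hypothetical sketch: redact UK-mobile-style numbers (07xxx xxx xxx)
# from a chatbot reply before it is shown to the user. A production
# PII filter would cover far more formats than this single pattern.
UK_MOBILE = re.compile(r"\b07\d{3}\s?\d{3}\s?\d{3}\b")

def redact_phone_numbers(reply: str) -> str:
    """Replace phone-number-like strings with a placeholder."""
    return UK_MOBILE.sub("[number redacted]", reply)

print(redact_phone_numbers("Try the helpline on 07700 900123."))
# → Try the helpline on [number redacted].
```

A filter like this runs after the model generates its answer and before delivery, so even a confidently wrong response cannot hand a stranger's number to the user.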
This is not the first time Meta's AI has stumbled. From chatbots inventing quotes to suggesting unauthorized behavior, the company faces a fundamental challenge of trust. Training a model to say "I don't know" rather than invent details, and properly anonymizing personal information, are just as important as building the model in the first place.
Meta's AI helpers operate in a high-stakes context. WhatsApp supports billions of people in their daily lives, helping them stay in touch with friends and family, organize in their communities, and, for businesses, serve their customers.
The last thing people want is an AI assistant quietly sharing their personal phone numbers or other sensitive information. The company is now under pressure to strengthen its filters, audit its data access points, and treat user safety as non-negotiable.
Increased regulatory pressure
Meta and other makers of AI language models, including GPT-3, ChatGPT and others, face increasing regulatory pressure. Across Europe and the UK, lawmakers are debating frameworks such as the Digital Services Act and the AI Act.
In the US, too, privacy advocates are pushing for clearer rules on data collection and accountability, while in the blockchain space, Web3 natives are pushing for data decentralization. They advocate returning control of data to users, allowing it to be stored, protected, and shared as needed, and monetized if users wish.
When private numbers find their way into public conversations, erring AI assistants will only add to the pressure on companies to act, as consumer advocates push for rules that protect end users.
Meanwhile, Smethurst simply wanted a train update. His morning commute became a cautionary tale of AI hubris and privacy failure.
