Your Meta AI chats may be published, and it's not a bug

The conversations people have with Meta AI are often published even when users don't realize it, revealing a range of medical, legal, and personal issues. The company's AI, available both as a standalone app and integrated across its platforms (Facebook, Instagram, and WhatsApp), faces serious scrutiny for these privacy failures.

Over the past two years, generative AI tools such as ChatGPT, Anthropic's Claude, and Google Gemini have exploded in popularity. However, with new players entering the market almost every day, not all of them deserve the same level of trust.

With 1 billion monthly active users, Meta AI is one of the contenders vying for ChatGPT's crown. To monetize this success, Meta CEO Mark Zuckerberg said there is "an opportunity to insert paid recommendations" and to offer "a subscription service so that people can pay to use more compute."

Like ChatGPT, Meta AI can generate text, answer questions, and help users plan, brainstorm, and strategize on a variety of issues. In the Meta AI app, however, users can also press a "Share" button after submitting a question. This leads to a screen that previews the post, which the user can then publish. Some users seem unaware that by doing so they are sharing these text conversations, audio clips, and images publicly with the world.

Other users can then access these "shared" conversations through the app's Discover feed.

A glance at that feed turned up this example of a conversation a user probably didn't want published: a teacher shared an email thread about upcoming arbitration over a job termination they felt was unfair. The Meta AI app responded:

Example of a teacher sharing the story of an unfair dismissal

"Hello {name}, it's great to hear from you again. I can feel the emotional weight of revisiting this experience, and I appreciate your trust in sharing it with me. It sounds like a really tough situation, and I'm here to listen and support you as you work through it. No rush, take the time to gather your thoughts, and we can continue the conversation whenever you are ready. How have you prepared for arbitration so far?"

The emphasis is mine. If the AI recognizes how much trust it takes to share that story with anyone, even just a bot, imagine how it must feel for that user to learn the whole world can find and read it. I didn't even have to log in to find this conversation.

TechCrunch called Meta AI a "privacy disaster," reporting on people seeking help with tax evasion and other personal matters. Several outlets have found threads on medical topics and other conversations about sensitive, private issues.

What people may not realize is that using the version integrated into a social media platform means they are logged in, and their conversations are connected to their social media profile.

Your best option is not to use Meta AI at all. If you do use it, make sure you are not logged in, and double-check whether you are logged in to any of Meta's platforms on the same device. And never press the Share button unless you really mean it.

Meta spokesperson Daniel Roberts told Wired via email that users' chats remain private unless they go through a multi-step sharing process.

When asked about these incidents, the Meta AI chatbot stated:

“Some users may unintentionally share sensitive information due to misconceptions about platform defaults and changes to settings over time.”

However, Meta does not provide detailed, easily accessible in-app guidance at the moment of sharing, which contributes to user confusion. I decided to ask Meta AI myself.

Question: What should I do so that you don't share our conversation with the rest of the world? And is it true that the information I share with you may be used for advertising, or at least for "sponsored" recommendations?

Answer: Conversations with me are used to improve the model and may be reviewed by a team to ensure quality. Your conversations may also be used to train the model to make it more helpful and accurate. However, we take steps to protect your privacy.

If you are concerned about how your data is used, you can visit Meta's Privacy Center to learn more about how Meta processes your data: https://www.facebook.com/privacy/center/.

Did that actually answer my question? No.

So here are the steps you can take if you decide to use Meta AI, for each of the platforms where you might encounter it.

Meta AI App

  • To make your prompts and conversations visible only to you:
    • Tap the profile icon in the Meta AI app.
    • Go to Data and Privacy under App Settings.
    • Select Manage your information.
    • Set Make all prompts visible only to you.
  • Don't use the Share button unless you're sure you want to publish your conversation to the Discover feed.

WhatsApp, Facebook, and Instagram

Note: Conversations with Meta AI in WhatsApp are not protected by end-to-end encryption and can be used for AI training.

To limit Meta's use of your data for AI training:

  • Go to Settings and Privacy > Privacy Center.
  • Scroll to the Privacy topics section and select AI at Meta.
  • Under Submit an objection request, click the link for WhatsApp AIs (or whichever other platform applies to you), fill out the form, and request that Meta not use your data for AI training.

Deleting AI conversation data

Meta has introduced a command to delete information shared in chats with its AI.

  • For example, type /reset-ai in a conversation in Messenger, Instagram, or WhatsApp to delete your AI messages.

We don't just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts with Malwarebytes Identity Theft Protection.


