Meta AI Users Widely Share Seemingly Sensitive Chatbot Conversations
Meta AI users are publicly posting what's typically private information on the app, raising questions about whether all users understand they're sharing their AI queries with the world. Users on X highlighted the trend this week with many examples.
A brief scan of the Meta AI app by Privacy Daily also found multiple examples of seemingly private conversations between users and AI appearing for anyone to read or hear. Two consumer advocates raised concerns Thursday.
In one text conversation that we saw publicly posted on the app, a woman tells Meta AI, “I would like to chat about my current medical condition.” She describes an unplanned surgery and concerns about what it will mean for her ability to return to work.
Another chat we saw had a man disclosing his phone number. We also observed a conversation from what appears to be a child talking to the AI about his girlfriend, teacher and favorite school subject. Some posts on the Meta AI feed include audio of the user interacting with AI by voice.
“It creates a really sad situation for users that aren’t employing constant vigilance, and leaves people that are using the AI system to exchange about sensitive things extra vulnerable,” Ben Winters, director of AI and data privacy at the Consumer Federation of America (CFA), emailed us Thursday. “In an effort to force AI into every part of their massive tech platform, they have a feature of having your interactions with AI being made fully public (not just to your Meta connections).”
“It’s not only bad content on there, but actively harmful,” Winters added. “It’s very manipulative and deceptive to make a conversation-type platform where with one touch your interactions are made public, and that they can also use all that data to target you with ads.”
Grace Gedye, Consumer Reports policy analyst, said, "If many consumers are posting seemingly private interactions and disclosing sensitive information, Meta should make the warning label larger and more obvious."
Chatbot conversations can be quickly shared publicly in the Meta AI app. After the user starts a conversation in the Android phone app, a gray “Share” button appears at the top right of the screen and remains for the rest of the chat. When tapped, the app gives users the option to add a title and then click a “Post” button to make the conversation public.
A blue button at the bottom-right of the screen reads “Talk” and is used to begin a voice-based conversation in the app. If users start a text conversation, the blue “Talk” button changes to an up arrow that users press to send a new message.
Meta didn’t comment Thursday. We also asked the FTC and privacy regulators in multiple states with comprehensive privacy laws to comment on potential issues with what's happening on Meta AI. The FTC and the California Privacy Protection Agency declined to comment, while the attorneys general of Colorado, Connecticut and Texas didn’t respond by our deadline.
Earlier this week, CFA and other civil society groups asked the FTC, all 50 states and the District of Columbia to investigate Meta AI and Character.AI for allegedly allowing “‘therapy bots’ to falsely assert specific licensure, experience, and confidentiality to users with inadequate controls and disclosures.”