Texas AG Opens Privacy Probe Into AI Chatbots Giving Emotional Support
Amid rising regulatory scrutiny over AI-based therapy, Texas Attorney General Ken Paxton (R) opened a probe into Meta, Character.AI and other chatbot platforms “for potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools,” the AG’s office said Monday.
Paxton issued civil investigative demands to companies about possible violations of state consumer protection laws that prohibit fraudulent claims, privacy misrepresentations and concealment of material data usage, his office said. The AI platforms in question may be giving mental health advice without appropriate medical credentials and oversight, it said. “While AI chatbots assert confidentiality, their terms of service reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations, data abuse, and false advertising.”
“By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care,” said Paxton in the news release. “In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.”
Character.AI doesn't "comment on the specifics of any particular legal matter," but the company has "taken robust steps to make it clear that the user-created Characters on our site are fictional and are intended for entertainment," a spokesperson said in an email. "Also, when users create Characters with the words 'psychologist,' 'therapist,' 'doctor,' or other similar terms in their names, we add language making it clear that users should not rely on these Characters for any type of professional advice."
A Meta spokesperson said, "We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI -- not people. These AIs aren't licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate."
A recent Duke University study found people are increasingly using general-purpose AI chatbots for emotional and mental health support, with many not realizing that privacy regulations like the Health Insurance Portability and Accountability Act (HIPAA) fail to cover these sensitive conversations (see 2508070022).
Additionally, some Meta AI users have publicly posted what is typically private information on the app, raising questions about whether they realize when they're sharing AI queries with the world (see 2506120082).
Also, Fast Company reported recently that Google indexed many ChatGPT conversations containing personal details, including mental health struggles.
Regulators appear to be circling potential issues with AI chatbots providing mental-health therapy. Earlier this month, Illinois Gov. JB Pritzker (D) signed a law prohibiting AI therapy. And the California legislature is weighing two bills involving AI companions and kids' safety: AB-1064 by Assembly Privacy Committee Chair Rebecca Bauer-Kahan (D) and SB-243 by Sen. Josh Becker (D).
Sen. Josh Hawley, R-Mo., on Friday launched a congressional investigation examining “whether Meta’s generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards.” Hawley said the Senate Crime Subcommittee, which he chairs, wants to understand “who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward.” The lawmaker asked Meta to respond with documents and answers to his query by Sept. 19.
The National Center on Sexual Exploitation on Friday said it’s supportive of a congressional investigation. The organization urged Congress to pass “common sense” safety regulations for AI technology. “Without them, profit-first AI will chase engagement over safety and lead to bad practices like Meta’s sex chats with kids and X’s sexualized chatbot, instead of real scientific or productivity breakthroughs,” said Executive Director Haley McNamara.
Paxton previously investigated Character.AI and other companies for possible violations of the state’s child online safety and comprehensive data privacy laws. The Texas AG’s office has been busy, reporting last month that it had investigated the data practices of more than 200 companies during the past year (see 2507210028).