'Chatbot Is Not Judgmental'

Consumers Seek Mental Health Help From LLMs That HIPAA Doesn't Cover

People are increasingly using general-purpose AI chatbots like ChatGPT for emotional and mental health support, but many don’t realize that regulations like the Health Insurance Portability and Accountability Act (HIPAA) fail to cover these sensitive conversations, a Duke University paper published last month found. Industry self-regulation seems unlikely to solve the issue, which may disproportionately affect vulnerable populations, said Pardis Emami-Naeini, a computer science professor at Duke and one of the report’s authors.


"Participants conflated the human-like empathy exhibited by LLMs [large language models] with human-like accountability and mistakenly believed that their interactions with these chatbots were safeguarded by the same regulations (e.g., HIPAA) as disclosures with a licensed therapist,” the report said. “Participants often viewed LLM chatbots as more accessible and less intimidating than professional care, yet they underestimated or misunderstood the data handling risks. Many users expressed a desire for stronger safeguards but felt ill-equipped or insufficiently informed to protect themselves” due to “regulatory gaps and usability challenges.”

Meta AI users have been posting typically private information for everyone to see on the app, raising questions about whether they know when they’re sharing AI queries with the world (see 2506120082). Meanwhile, Fast Company recently reported that Google had indexed many ChatGPT conversations containing personal details, including mental health struggles. Also, last week, Illinois Gov. JB Pritzker (D) signed a law prohibiting AI therapy.

The Duke researchers conducted 21 semi-structured interviews with U.S. participants about their use of general-purpose LLMs that aren’t specifically marketed for mental health purposes. The researchers tried to achieve sociodemographic diversity through factors like gender, race and level of education, Emami-Naeini said in an interview with Privacy Daily. They stopped at 21 participants because they had reached a “saturation point” where they were no longer seeing new themes or findings “coming by just having another participant,” she said.

“We had participants who were using this whenever they were stressed” or “they were struggling with depression,” Emami-Naeini said, adding that they liked that the AI confirmed their emotions and struggles. When individuals are “struggling with something” and in a “vulnerable situation,” they “really want something to tell [them] that [they] haven't done anything wrong.”

Roughly five female participants “were going through domestic violence, and they were using this chatbot to basically just share their feelings because they didn't have anyone else to talk to,” Emami-Naeini said. The women feared “talking to a human therapist because they didn't want to … raise a red flag in front of their abusive partner,” and didn’t want to use a chatbot designed for mental health because their partner might see it on their phone, she noted.

Some participants who identified as LGBTQ+ said they used the chatbots heavily because “they were saying that there is a stigma around this topic in” their families, and the AI chats were the only way they could get support. “When you look into more minoritized, marginalized population[s], probably you would see a higher … subset of them using this chatbot because they were saying that it's not judgmental,” said Emami-Naeini. “People can be judgmental. [A] chatbot is not judgmental.”

Users Tried to Protect Themselves

Participants did say they worried about oversharing with the chatbots, but at the same time, they struggled with knowing how much they would have to share to get help, said Emami-Naeini. “So they ended up [oversharing] anyway because they wanted the help.”

Some more highly educated participants raised more privacy and security concerns than others, but they still used the chatbots, she said. One or two participants said they tried to read the privacy policy of the LLM they were using, “but we all know that privacy policies are not readable.” Most participants didn’t read the policies, which she said also makes sense because people who need mental health help right away are probably less likely to scan such legal terms before diving in.

A few participants attempted to protect their privacy by using third-person narratives when talking to the chatbot, acting as if they were asking questions for a friend, said Emami-Naeini. Others said they tried avoiding including personally identifying information in interactions.

However, it’s “a bit hard to make sure that you're really [protecting yourself], because whatever you're sharing, there's a really good chance that something very sensitive can be inferred … especially when you are combining this interaction with all the other interactions that you've been having with the chatbot,” said Emami-Naeini. Plus, a person’s chatbot account already contains identifying details, yet many participants didn’t realize that they had already shared some personal information with the technology, she added.

What’s happening on Meta AI, with users seeming to be comfortable sharing sensitive information, aligns with the Duke study, Emami-Naeini said. The chatbot is “making them feel [that] they are being protected” and “that no one else is here.”

Existing Laws, Disclaimers Insufficient

A majority of participants in the Duke study “thought that the conversation is being protected by the same regulation that would protect mental health conversations with a health therapist,” said Emami-Naeini. “They thought it was going to be protected by HIPAA, and they were surprised when we told them” differently.

She noted that the Food and Drug Administration has approved only “a couple” of chatbots designed for mental health therapy. Unless a doctor or health insurance company recommends the bot, HIPAA won’t protect the conversations, she added. Regardless, the study participants were all using general-purpose LLMs like ChatGPT and Google Gemini that HIPAA doesn't cover.

The report noted that while the California Consumer Privacy Act (CCPA) allows opting out of some data sales, “it presumes individuals possess the awareness and motivation to protect themselves. Such self-management tacitly permits extensive data use unless users proactively intervene."

While LLM providers, including OpenAI and Microsoft, say upfront that their tools aren't meant to replace mental health professionals, these disclaimers aren’t always sufficient, the paper added. "Even if disclaimers distance general-purpose LLMs and their developers from formal ‘provider’ status, the ignoring of this inevitable mental health usage risks invitation of reputational damage and regulatory scrutiny."

At the very least, said Emami-Naeini, there should be better transparency with consumers about how chatbots function and what risks they create. Former President Joe Biden’s executive order requiring labels on consumer IoT devices could be a good model for how to increase transparency, she said. “I don’t have a lot of hope for industry self-regulation.”

The paper suggested that AI chatbots adopt a “harm-reduction framework” that would include providing “contextual nudges and just-in-time warnings.”

For example, when LLM systems detect "language indicative of significant emotional distress or mental health discourse, it should prompt users with a gentle warning: 'We are not a licensed therapist; for confidential crisis support, click here.'" Some platforms do this, the study noted, but it may not happen 100% of the time, and it’s not standard practice across all LLMs.
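
As a rough illustration only, the sketch below shows one way such a just-in-time warning could be wired into a chat pipeline. The phrase list, function names and warning text are hypothetical placeholders rather than any vendor's actual safeguard, and a production system would likely rely on a trained classifier instead of keyword matching.

```python
# Hypothetical sketch of a "just-in-time warning" of the kind the Duke paper
# describes. The distress markers, names and warning text are illustrative
# placeholders, not any platform's real safeguard logic.

DISTRESS_MARKERS = (
    "i can't cope",
    "i feel hopeless",
    "panic attack",
    "i want to hurt myself",
)

WARNING = (
    "We are not a licensed therapist; for confidential crisis support, "
    "contact a local crisis line."
)


def maybe_add_warning(user_message: str, model_reply: str) -> str:
    """Prepend a gentle disclaimer when the user's message suggests distress."""
    text = user_message.lower()
    if any(marker in text for marker in DISTRESS_MARKERS):
        return f"{WARNING}\n\n{model_reply}"
    return model_reply


if __name__ == "__main__":
    print(maybe_add_warning("I feel hopeless and can't sleep.", "Here are some ideas..."))
```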

The Duke researchers additionally recommended setting privacy protections high by default and storing chat logs only briefly. The latter practice “would especially benefit developers by reducing liability tied to large-scale retention of sensitive data,” they noted. “Users could opt in to longer-term storage for convenience, but privacy-by-default avoids placing undue friction on individuals in distress.”
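
To make that recommendation concrete, here is a minimal, hypothetical sketch of what privacy-protective defaults with brief log retention might look like as configuration. The field names, retention periods and opt-in handling are illustrative assumptions, not details drawn from the paper.

```python
# Hypothetical "privacy-by-default" retention settings: brief log storage and
# no training use unless the user explicitly opts in. All values illustrative.

from dataclasses import dataclass
from datetime import timedelta


@dataclass(frozen=True)
class RetentionPolicy:
    store_chat_logs: bool = True
    retention_period: timedelta = timedelta(days=1)  # brief by default
    use_for_training: bool = False                   # off unless opted in


def policy_for(user_opted_in_to_history: bool) -> RetentionPolicy:
    """Return privacy-favoring defaults; extend retention only on opt-in."""
    if user_opted_in_to_history:
        return RetentionPolicy(retention_period=timedelta(days=30))
    return RetentionPolicy()
```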

“Adopting these harm-reduction strategies not only protects users but can also serve the long-term interests of AI providers,” the paper said. “Although disclaimers may reduce immediate liability, they will not shield companies from reputational damage or legal action if leaked or mishandled mental health disclosures spark public outrage or lead to demonstrable harms."