Privacy Daily is a service of Warren Communications News.
'Core Competencies' First

Details Still Matter as Privacy and AI Regulation Space Expands, Experts Say

As laws and enforcement actions multiply in the privacy and AI space, companies must pay attention to detail, panelists said Tuesday during a webinar hosted by vendor TrustArc.


“There's a little bit of fragmentation and convergence happening at the same time, which is a really weird phenomenon,” said Val Ilchenko, general counsel and chief privacy officer at TrustArc. “There are more laws than ever ... with different components and one-upism ... yet at the same time, you're seeing thematic things happening, like universal opt-out mechanisms.”

As a result, companies must consider “core competencies first,” he added. “It's really easy to get distracted by all the new laws, all the webinars, all the logs ... but there are common privacy tasks that everybody needs to do.” These include disclosure, consent management and mapping data.

Mintz's Scott Lashway agreed. “We have been evangelizing privacy for a long time, and our business colleagues, at times, are exhausted by the dialog and the amount of attention that's required,” he said. But “every level of detail is needed, especially when talking about the collection of website data in the use of website technology.” The recent enforcement actions in California against Honda (see 2503120037), Healthline (see 2507030026) and Todd Snyder (see 2505060043) highlight this, he said.

Beatrice Botti, chief privacy officer at DoubleVerify, a vendor, said there have also been many new laws on AI. Some simply say that existing measures apply to AI, as in Utah, while others are entirely new AI acts, like Colorado's (see 2502030040), she said. Botti also said that local regulators and attorneys general are working to incorporate AI oversight into consumer protection and civil rights work, like privacy rights and combating discrimination.

“Privacy rights, we all know, with AI are some of the most difficult things to do, because in your data goes and then … good luck enforcing data subject rights,” she said. “There's a real question about not only how do you do it, but is it even viable to try to enforce them as a whole from a regulatory perspective.”

Are you “going to force a company to delete their entire model, because your data at some point went into it, and it's not even really there?” Botti added.

Lashway said it would be helpful if people spent more time understanding the technology and less "kind of mongering" about "what it could become or what it could do.” He and Botti noted that generative AI, as we know it now, isn't the first kind of AI invented, and said people shouldn't get caught up in the dangers of a technology whose scope isn't yet fully understood.

“There is a superficial level of transparency, similar to a superficial level of privacy, and a lot of the conversation is at that level,” Lashway said. “A really deep dive into ... [how] the technology works and how the data is being used” is needed.

Litigation is the risk if the technology is misunderstood or details are not adhered to, he said. “The plaintiffs’ bar [are] skilled lawyers, they're clever, and they're going to test the limits, and the courts in some states will allow them to,” Lashway said.