‘Just Nuances’

Privacy Engineers Aim to Push Through Legal Uncertainty as Congress Weighs AI Moratorium

SANTA CLARA, Calif. -- Privacy engineers should put their heads down and forge ahead with AI governance initiatives regardless of what’s happening in Congress, at the state level or elsewhere, said panelists Tuesday at the USENIX Privacy Engineering Practice and Respect (PEPR) conference. Legal uncertainty may just be a fact of life for the privacy practitioner, they said.

There will be uncertainty about AI regulation regardless of what happens with the proposed 10-year moratorium on states regulating AI, said Debra Farber, head of privacy engineering at Lumin Digital. “We should still be putting programs in place that are risk-based and … trying to protect people from the harms that are known.”

“Who the regulator is going to be -- whether it’s state or federal -- really is just nuances, right?” Farber added. “That might make the legal team's heads swirl, because they don't know how … to anchor those requirements in our legal system. But ... we've been doing this for a long time. If you've been in privacy operations and privacy engineering … just apply first principles” and “it’s going to get you like 95% of the way there.”

In general, regulators want to see that a company is "proactive" in applying protections, aware of the risks and technically competent, Farber added.

Masooda Bashir, a University of Illinois professor, advised that, during this uncertain period, perhaps it's best to build "a lot of what-if scenarios, and ... lots of community and open-source tools that can help with some of these things.”

Bashir added, “Because no matter what, we are going to have some kind of compliance or some kind of guidance down the road … from different parts of the world or within the U.S. -- maybe delayed a little bit.” Regardless, "We're still going to need tools. We're still going to need strategies, and this is a good time for us to test some of those ideas out.”

Uncertainty around AI regulation is reminiscent of what happened with privacy, said Hoang Bao, Axon data privacy officer. When the EU’s General Data Protection Regulation rolled out, “there was a lot of confusion” about what it would mean, when it would take effect and how businesses would comply, he noted.

“I see that same wave of confusion, frankly, for a lot of AI governance practitioners,” who are working in a world where Europe has just enacted its AI Act and Congress is weighing whether to block states from enforcing their AI regulations, said Bao. “Imagine if you're an engineer working in that environment. How are you going to get clear requirements?”

AI, Privacy Overlap

It's sensible to involve privacy engineers in managing AI risk, said Sarah Lewis Cortes, team lead of the privacy workforce working group at the National Institute of Standards and Technology.

For example, it’s common to use AI in customer support, where speech-to-text technology often feeds into conversational analytics, she said. “But speech to text has a lot of notorious privacy problems,” and may even violate some states’ wiretap laws, said Cortes. “And so … using a privacy engineer who already has that background … can be critical in ensuring that the entire workflow works.”

Akhilesh Srivastava, privacy technology and program leader at the Institute of Operational Privacy Design, agreed. “Privacy and AI risk could be considered in a single track.” A recent TrustArc study found that professionals who are competent in privacy are also the most likely to handle AI well (see 2506100059).

AI carries privacy risks because it requires “a large amount of individual-level training data,” said Laura Book, Snap privacy engineer, who spoke later at PEPR. Memorization is a concern, she added -- does the AI model “somehow contain the training data in ways that are disallowed by our regulatory requirements?” Another issue is that the model could be used to reidentify users.

Privacy-related incidents of harm are probably underreported due to reporting bias, said Megan Li, a student at Carnegie Mellon University, in another talk. For example, "sensational" incidents, "such as deepfake impersonations … may be overreported, while incidents dealing with the design or development of Generative AI," such as those involving "data acquisition or the implementation of safety guardrails, may be underreported.”

Li and Wendy Bickersteth, another Carnegie Mellon student, presented their recent paper analyzing nearly 500 reported incidents of AI harm, collected from the AI, Algorithmic and Automation Incidents and Controversies (AIAAIC) repository and the AI Incident Database.

The Carnegie Mellon researchers found that the most common type of harm overall involved people's autonomy, "followed by political and economic harms, and the single most prevalent harm was impersonation or identity theft," said Li. Privacy harms, though further down the list, were "generally incidents of exclusion or disclosure," she said. Those were cases where people were "excluded from decisions considering their own data, or when personal data is improperly revealed or shared by Generative AI."