Privacy Daily is a service of Warren Communications News.
‘Layer Cake’ or ‘Onion’?

Augmenting Existing Frameworks Can Help Manage AI Privacy Issues, Experts Say

Managing AI privacy concerns in an organization requires expanding existing frameworks but also increasing collaboration across the business in acknowledgment of AI's wide potential to touch many areas, panelists said during an IAPP webinar on Tuesday.

Privacy professionals have “built robust frameworks ... on [the] principles of data minimization, purpose limitation and data subject rights,” said Gail Krutov, senior privacy counsel at BigID. While these frameworks provide “a solid foundation,” they weren't built for AI's “unique challenges,” she said.

Aaron Weller, leader of HP's Global Privacy Engineering Center of Excellence, agreed. With AI “it takes a village, and it's not just something that any one team is going to be able to cover,” he said. Organizations must consider the optimal use of their governance teams. “For example, we work closely with our data science teams, our ethics teams [and] cybersecurity." HP has found it useful to add AI governance responsibility to existing roles, "rather than creating all new governance structures,” he said.

Krutov agreed. “It's not just one person who can fill the role” of overseeing AI privacy issues, she said. “It's a collaborative, where you're leaning on the experience of many, many folks across the organization.”

Since there are so many applications of AI, “there is really no one-size-fits-all” approach to privacy oversight and risk management, Weller said. But taking a risk-based approach can help organizations focus on managing threats effectively.

While frameworks differ around the world, elements like transparency and explainability, human oversight and the goal of eliminating bias are commonalities across organizations, Weller said.

Still, he seemed skeptical of the U.S. Congress' proposal to institute a 10-year moratorium on enforcement of states' AI laws so that the federal government can study the issue and craft a national statute (see 2506230050). “Given the rate of advancements in AI technologies, 10 years is really a lifetime,” he said.

Turning to another issue, Weller said one of the biggest challenges with maintaining AI privacy comes from data silos. As such, a good first step is to identify all stakeholders that need to be involved in decisions about AI use cases. This will help get alignment on questions like: “What does a review of an AI tool mean?” and “What does approval of an AI tool mean?”

The issue is that AI use doesn't just carry the risk of a bad outcome, Weller said; there is also an opportunity cost of “getting caught up in a convoluted AI review process and never getting out the door.” Organizations need to balance “traditional risk-management concerns” against the cost of a needlessly long review.

Still, panelists acknowledged that managing AI and privacy is far from easy. For example, Krutov compared AI privacy risks to an onion, in that you must peel back an issue layer by layer. “It's critical to understand that AI systems are dynamic, and so [a] one-and-done assessment simply won't cut it,” she said. “Risks can emerge or change as AI tools evolve, and their usage expands.”

Weller preferred the example of a layer cake. “You don't necessarily need to look at everything every time,” he said. But you can have tools built on top of one another, and “all of these different layers of a layer cake can have different controls.”

“Ideally, you're trying to put in place controls that are as broad as possible, that apply to as many use cases, so you don't need to” custom-build tools and processes “at an individual use case level,” Weller continued.

To avoid rubber-stamping assessments, he said, it’s important to “identif[y] the things we really don't want to happen.” Then, companies can employ “clear guidance on how to get to a yes” that explains their policies and goals.