Caught in AI Riptide, Privacy Pros Weigh Risks and Opportunities
AI is playing a growing role in privacy compliance jobs, practitioners said Thursday at Privado’s Bridge Summit web conference. Data protection officers (DPOs) are weighing the ethics of AI usage across the business, while privacy engineers are mulling how they themselves might use large language models (LLMs) to automate compliance processes, they said.
Many in the field are assuming responsibility for AI ethics, said Steve Wright, CEO of PICCASO, the special interest group for privacy professionals. “It feels like we’re being dragged -- not unwillingly … into that tide.”
Meanwhile, the legal landscape is “changing literally every day, every week,” so companies must “be flexible enough to account for those changes,” said LinkedIn Senior Director-Legal Jon Adams. That’s challenging, he said: A privacy team may tell engineers what they must do as they build thousands of AI models, “and then the new law comes along, and we have five months to change all of these things incrementally, that's hard to do.”
It can be easier for a privacy practitioner to handle AI governance than it would be for others, Adams said, responding to our question in the webinar’s Q&A. “Privacy professionals are trained to cover a decent portion of the AI governance responsibilities,” including pedagogical issues like educating the organization and operational ones like data management and understanding the data processing flow, he wrote. “But there are areas where privacy professionals may be under-equipped,” he said, such as intellectual property risks, environmental impacts and the broader legal landscape.
For London-based automotive insurance company Zego, the “biggest challenge has been … the rapid growth around AI,” said DPO Sara Rudge. AI tools are “easily accessible” and some Zego employees “really want to innovate, so they want to try out these new tools,” she said. “But equally, we need to make sure that we know what's going on with our data, and we need to make sure it's protected.”
Kadir Ider, privacy technology lead for Trendyol, a Turkish e-commerce platform, advised, “Focus on AI [and] machine learning in stages, understand what the business is doing with it, and then try to integrate that into your data protection services.”
Vasudha Hegde, DoorDash senior privacy program manager, stressed the importance of maintaining data hygiene. Companies should set expectations for how data will be used and give users control of their data, she said. Once the data is fed into AI, it’s too late: “It’s not easy to just remove the data and say … 'We’ve unlearned from the data.' It doesn’t work.”
Maintaining “transparency by default” is key to making trustworthy use of AI, said ZoomInfo DPO George Jones. The company-search provider is clear with customers that it doesn’t use their data to train AI, for example, he said.
There’s general agreement that trustworthy AI is “fair, transparent, secure [and] accountable,” said BSI privacy consultant Conor Hogan. However, expectations may differ depending on the region or jurisdiction, he said. “Trust isn’t a nice-to-have,” added Hogan. “It’s key to unlocking AI’s full potential.”
Another panel covered how AI could assist privacy teams with compliance. “Automation is the way forward,” said Ios Kotsogiannis, Snap privacy engineering manager. Responding to our question in the webinar’s Q&A, Kotsogiannis wrote, “As we leverage automation for the repetitive tasks,” humans on privacy teams “can focus on the most gnarly issues we face,” such as “policy decisions for new regulations [and] building more sophisticated” privacy-enhancing technologies.
“It’s a losing game if you’re not using AI,” said Ugur Koc, Privado senior research and development engineer: There will “always be more data collected” and “more AI in our lives.” He continued, “With the scale of AI, this is going to be a huge privacy problem if you are not using automated tools to prevent privacy incidents.”