Experts Tout Risk Assessment as Tool for Preventing AI Privacy Abuses
Risk assessments and other preemptive analyses of AI and privacy systems are the best way to prevent harms before they arise, panelists said Monday during an Electronic Privacy Information Center (EPIC) event about California’s proposed AI and privacy regulations.
“I think we need to do a bit more work on the definition of risk, because right now, in privacy and AI regulation, risk has been defined as a very high-level thing,” said Gemma Galdon-Clavell, CEO of Eticas.ai, a vendor.
In other fields, risk is “completely different,” she said. For example, in aerospace, risk isn’t a plane falling from the sky; it’s “that screw 2,015 is put in properly by the right person with the right credentials.”
Galdon-Clavell advocated risk assessments that are “way more technical ... the ins and outs of what goes into the data, how we define algorithms, how we control those algorithms, what kind of agency people have, and specifically where.”
For Swati Chintala, research manager for TechEquity’s Labor Program, risk assessments can be a way for systems designers “to be obligated to ... think systematically about what the potential harms could be and address them before” implementation. Risk assessments also “can create some element of accountability.”
Mayu Tobin-Miyaji, an EPIC law fellow, wants to see additional transparency. “If we could see what’s in [the assessments], there are researchers that can assess how the risk assessments are done, how the privacy risks are being identified, whether it’s sufficient or insufficient, what benchmarks could be created,” she said.
Galdon-Clavell noted the lack of dialogue between regulators and the AI industry. “In the past, regulators have found industries that were much more willing to collaborate and contribute” to discussions about trust and safety, she said. “One very abnormal thing about the AI industry is that they’re not willing to do that.”
Ben Winters, director of AI and privacy at the Consumer Federation of America, agreed and said there is also a strong lobbying effort to weaken existing regulation.
Even in instances where reasonable consensus bills pass, such as Colorado’s AI Act (see 2412160042), “there are massive million-dollar campaigns from venture capitalists and tech companies to say ‘We shouldn’t even have to comply with that,’” Winters added. “We’re dealing with an industry that has shown pretty open hostility to consumers and people,” he said.
Despite this, risk assessments could be a step in the right direction, at least for “entities that are well-meaning and wanting to do [things in] a better way,” Winters said.
The California Consumer Privacy Act (CCPA) also has the potential to protect the public and even the playing field, Chintala said. “The CCPA’s original mandate is to ensure that consumers and workers and other categories of people have the information necessary to exercise meaningful control of businesses’ use of their data,” she said.
But Galdon-Clavell said consumer protection should be a last resort. “Consumer protection should only intervene when everything else goes wrong, so the fact that we’re putting all these efforts into this already shows how many steps in the process of building accountability in those systems we have missed, or we have failed at acting efficiently,” she said. “If things work well, as a consumer, you are protected by default.”