AI Could Drive Future Privacy Rules, Experts Say
While many believe that privacy laws can serve as a basis for AI regulation, AI may actually help determine privacy rules instead, said privacy professionals on a webinar hosted by George Washington University Law School Professor Daniel Solove on Wednesday.
“A lot of the problems with AI and privacy are problems that the existing privacy laws should be dealing with, but are actually very ill-equipped for dealing with,” Solove said. “A lot of these problems play on a lot of the weaknesses of the existing model of privacy laws, and that model has been to try to give individuals various rights in their data, and try to somehow give individuals control over their privacy… it's too complicated for individuals to ... control, and ... AI really drives this point home, because AI is so complicated.”
Alexei Klestoff, attorney at ZWillGen, agreed. “It's not that AI doesn't fit in privacy laws,” he said. “Privacy law doesn't fit in AI.” Data scraping is one example, he said: The practice has been around for a while, but there wasn't much regulatory guidance or discussion about its privacy implications until AI developers began using scraped data to train models.
That said, understanding existing laws and regulations, such as those governing data privacy, is important for deciding how to move forward, said Leila Golchehreh, co-founder of Relyance AI. “2025 needs to be the year of observability,” she said. “We are facing a moment in AI that is showing that things are going to change rapidly; that it is going to be very difficult for us to keep up from the regulatory and legal front.” It’s critical to “understand how we can build and measure an observable program that's going to allow us to act no matter what the changes are in technology, and no matter what the changes might be in the regulatory landscape.”
Solove said that concepts like transparency, notice and consent have been relied on as touchstones in privacy law, “but they don't solve the problem.”
He added, “AI really shows that we need not just stronger privacy laws, but a pretty radical new direction, a new dimension to privacy laws that is going to be a very different kind of thinking about it than we typically do." An issue with AI is that people "can't really assess what the risks are, [they] don't know what data could be inferred," so their "consent is not going to be meaningful" if "they don't really understand what they're getting themselves into." This “blows up a lot of the existing frameworks that privacy law is built on.”

Klestoff said that in the wake of a deregulatory era at the federal level, “states are [going to] step up.” He anticipates “it's going to be the year of the state AG.” For example, “We've seen California and the [California Privacy Protection Agency] be really active in privacy enforcement.” Texas, too, said the attorney: This duo is “about as red and blue as you can get.”