Lawyers Look to Privacy Laws as Building Blocks for AI Regulation
Privacy laws in the EU and at the state level in the U.S. serve as a basis for building an AI regulation regime, Morrison Foerster’s Marian Waldmann Agarwal and Marijn Storm said during a webinar about the intersection of privacy and AI last week. The partners discussed recent AI regulatory developments and their intersection with privacy obligations.
“When handling personal information in connection with AI, there are other requirements that may be triggered based on the state's privacy laws,” said Waldmann Agarwal. Just because something isn't explicitly mentioned under AI guidelines or governance doesn't mean it isn't regulated under an existing privacy law, she added. The most important thing is to “consider [privacy and AI laws] in conjunction," Storm said.
Storm gave an example in a joint interview Monday with Waldmann Agarwal. “The rule in European law, normally, is that the specific law takes precedence over the general law," he said. "For example, if you're within the ambit of e-Privacy, and you make sure you don't go outside of it, you only need to worry about e-Privacy, and you can forget about GDPR. And the moment you go outside the bounds of e-Privacy, you go back into the general law of the GDPR.”
While AI regulation on the whole uses language crafted for its specific purposes, Storm said it occasionally borrows terms from privacy laws. “For example, the GDPR calls sensitive data ‘special categories of personal data,’” he said. “Then the AI Act works on top of that, and says, ‘well, actually, that prohibition of the GDPR that you cannot use sensitive data doesn't apply when you're using sensitive data to debias an algorithm.’ And then they make sure to both refer to special categories of personal data so you know they are referring to the same thing.”
On the state level, even though specifics vary by state, privacy laws are applicable to personal information, Waldmann Agarwal said on the same call. “If the AI is using the personal information, you have to consider how the privacy law is going to impact it.”
“Various regulators have said ‘we have existing laws in place,’" she said. Some of the laws are "state consumer privacy laws, some ... are not necessarily privacy laws, they're consumer protection laws, like the FTC Act, or other protections, like the wiretap laws. They're not geared toward personal information necessarily, they're geared toward protecting some other harm." However, "if you're using AI, that doesn't exempt you from those laws... . People have to understand that you still have to comply with all of this existing regulation, and AI is not necessarily seen as something new and separate, it's layered into that.”
When AI regulation does specifically reference data protection and privacy, at least in the U.S., “it happens because the things that the AI laws are intended to protect against are harms to individuals,” Waldmann Agarwal said. “At that point it becomes a privacy concern.”
Looking to the future, “we're definitely [going to] see more state laws," she said. "It'll be interesting to see whether or not Colorado's AI act influences the direction that the state laws go in, if they become a little bit more conscious of AI as part of the privacy side of the law." Waldmann Agarwal continued, "Right now, for the state privacy laws, there's this concept of profiling, which is an automated processing [and] how it pulls in AI currently, but we'll have to see if maybe the Colorado [law] will influence these laws to be a little bit more explicit about AI and what can be done.”