Experts: Asking Basic Questions Can Help Build AI Regulations That Respect Privacy
Studying the fundamentals surrounding AI and privacy before imposing regulation will help create a future where generative AI's benefits can coexist with privacy rights, panelists said Monday during a webinar of the Federalist Society Regulatory Transparency Project.
Basic guardrails for AI haven't been built, said Kevin Frazier, AI innovation and law fellow at the University of Texas School of Law. “That's particularly true when it comes to privacy law. All the things we know about how AI works and how AI works well … conflict with ... best practices when it comes to thinking about privacy law.”
For example, best practices include limiting data collection and sharing, specifying use cases before collecting data, and deleting data as soon as possible, he said.
But AI has different needs. “For AI to do well ... we need as much data as possible,” Frazier said. “We need to hold on to that data for as long as possible, and we need to continue to learn from that data on an ongoing basis.”
Pam Dixon, founder and executive director of the World Privacy Forum, and Jennifer Huddleston, senior fellow of technology policy at the Cato Institute, said it’s the newer, more advanced versions of AI where regulatory infrastructure is lacking, and where there's concern.
Huddleston said, “There are [existing] laws that may be able to provide guidance [on AI-related privacy]," but others "could be disruptive ... [such as] data-privacy laws that can't adapt to new technology."
To preserve privacy, Dixon advocated focusing on deep, technical, structural protocols that allow AI systems to talk to each other. Protocols are “a really important thing to think about, because this is actually where privacy is going to live,” she said. “Laws and regulations may follow, but it's the protocols that really are going to create a lot of the new practices that will follow from how we learn about AI and how AI ends up interacting with systems and data."
After that, it's important to return to foundational questions about privacy to determine why consumers value this right so much, Huddleston said.
Frazier said newer technologies require foundational analysis, too. Ask "what we want from AI," he said. For instance, "Do we want it to really disrupt systems? Do we want it to really be used in its most transformative fashion?”
Dixon said that such questions must be asked in every situation, since data governance, protection and privacy are nuanced, and there is no one-size-fits-all answer. “We have quickly moved past the point where we could just launch a silver-bullet piece of legislation into the stratosphere and say, ‘Okay, we just fixed AI with this,’” she said. “AI is too complex. It's too suffused into the deeper infrastructures and … technical protocols. So we're going to have to really look at contexts, really get the use cases, and work from that kind of basis.”
More basic AI research will help, Frazier said. Once you figure out how AI systems work, you can figure out how to craft more responsive privacy laws, he added.
Dixon agreed. “It's really important again to go back to the technical research, the scientific research, the policy research, looking at what user experiences are, and really basing whatever happens ... [on ground-trusted] reality, not just opinions,” she said. “It's just super important that we get this right.”
Huddleston said having a patchwork of state AI regulations, like the state privacy law patchwork, can be risky and confusing. "While I often think there are a lot of things states can do to really be ... laboratories of democracy, to provide a good soil for innovation to really grow and flourish," she said, "when it comes to things like data privacy ... we're really going to need a federal framework so that there's some degree of certainty for this innovation to flourish."
She said a federal framework will help, particularly when it comes to how U.S. laws interact with other global measures. A 10-year moratorium on enforcing AI laws is being considered in the U.S. Congress (see 2505120067). Though the House has approved the moratorium, some say the Senate will not (see 2506030068).
Dixon said whatever happens will take time. "I think it'd be really tough to just wave a wand and say, 'Let's pass this really giant AI regulation,'" she said. "I think we're in a really important transitional time, and I don't think that everything that ... we want to be settled is going to be settled quickly. It's going to take time and experimentation."
Creating a public data bank where people can choose to “donate” personal data for the public good is another way of balancing AI's benefits with protecting personal information, said Frazier. “We've just all gotten pretty dang apathetic when it comes to our data-sharing norms and practices,” he said. “We give so much information to private stakeholders who don't necessarily" provide "a direct benefit" to us "or [have] a direct intention of redirecting that data toward something that is going to definitively and clearly help us.”
Instead, he said, “we should start to consider what it would look like for a data-donor model to develop where you're seeing the sharing of information in a way that directly benefits you, your family members and your community.”