Privacy Daily is a service of Warren Communications News.
The ‘Cost of Inaction’

Cato Panelists See Mostly Positive Outcomes of Avoiding State AI Patchwork

Enacting a federally imposed 10-year moratorium on enforcing state-level AI laws (see 2506120083) wouldn't necessarily end the debate about the benefits and problems of regulation, panelists indicated during a Cato Institute event Thursday. While they agreed a bevy of state laws would blunt AI innovation and prompt legal challenges, their views varied on how best to protect citizens from potential AI harms, including privacy risks, while still stimulating innovation. In the end, even a congressional moratorium wouldn't end confusion over AI regulation, one panelist said.


Joshua Levine, research fellow at the Foundation for American Innovation, said his recent analysis shows that "the opportunity costs of over-regulating, or misregulating, AI broadly ... are all the benefits and future products and services that we will forego and not have access to as Americans.” As such, "that potentially means" less innovation and the loss of leadership "on the technological frontier.”

Another issue is that training and utilizing AI systems occurs across state lines, meaning that regulations varying from state to state can cause confusion and frustration, he said. For example, some states -- like Colorado, which passed a landmark AI bill last year (see 2412160042) -- take an approach that requires auditing, algorithmic assessments and testing AI tools before they’re available for use, he said. On the flip side, states like Utah give AI innovators more flexibility.

Tatiana Rice, director of U.S. AI legislation at the Future of Privacy Forum, said she would like an analysis pitting the benefits of regulation against the cost of inaction. "What is the cost of using existing regulatory structures" to address AI governance?

While Rice acknowledged that "most people agree that things like tort law, contract law, consumer protection, civil rights, data privacy" apply to AI, it's "wildly unclear ... what that actually means in practice and implementation." This lack of certainty carries "financial and economic risks" for the business community, she said.

Matt Mittelsteadt, a technology policy research fellow at Cato, said the major issue with misguided regulation of AI is the cost of not using the technology to handle tasks humans don't always perform safely, such as driving.

There's a safety cost to forgoing the technology if an "excessively regulated state" slows or halts the development of driverless cars, which could replace distracted or tired drivers, Mittelsteadt said.

Rice said she's concerned about the legal system should AI lack clear regulation. "The courts are not good tech-policy venues," she said. "In the absence of explicit legislative standards ... and risk-mitigation measures ... the courts will just make it up. And that isn't, oftentimes, the best mechanism."

“I wonder if, even if we don't have a patchwork of state laws, we will still end up having a patchwork of case law, a patchwork of different types of tort law and data privacy law,” she continued. However, there are ways "to address both of these in a more uniform way.”

As such, Levine suggested states could regulate less if they used education "to ensure our kids are not only able to use and deploy" AI "to serve them so it empowers them ... but also that we have people who are able to build the next generation of these technologies.”

Another example Rice mentioned was state-level engagement in workforce development. One of the best ways for state lawmakers to counter the expected loss of jobs to AI is "to create programs and incentives and investments" to "train this next generation, not only to capitalize on AI and the way that it's really going to transform our society, but also making sure that they have a good idea of” how to “make an AI system that ... follow[s] the data privacy law that we already have in our state.”

The panelists' views on the 10-year moratorium itself varied. "There is certainly an opportunity cost that we also create by going to the federal level," Levine said, citing concerns about what happens when lawmakers compromise. Still, "a federal standard here would be much more preferable than having a patchwork of state laws."

Mittelsteadt agreed about avoiding a patchwork, but also said “success is going to be a function of having carve-outs, like still enabling us to enforce laws on criminal activity.”

Rice said the moratorium might not be a cure-all, citing her earlier worry about a patchwork of court decisions. “No matter your [party] affiliation," a moratorium would create a "backlash.” She said state policymakers are already moving quickly to ensure their laws are enacted, with a plan to “fight it out in the courts later … so [a moratorium] doesn't [necessarily] solve the problem of creating some level of consistency.”