California Frontier AI Report Seeks 'Targeted Interventions'

A long-awaited California Report on Frontier AI Policy calls for “targeted interventions” that “balance the technology’s benefits and material risks.” Gov. Gavin Newsom (D) announced the report's release Tuesday. A group of academics and other experts convened by Newsom wrote it after he vetoed a controversial AI frontier-model bill by Sen. Scott Wiener (D) last year (see 2409300011).

The California report doesn’t “argue for or against any particular piece of legislation or regulation. Instead, it examines the best available research on foundation models and outlines policy principles grounded in this research that state officials could consider in crafting new laws and regulations that govern the development and deployment of frontier AI in California.”

“Frontier AI breakthroughs … could yield transformative benefits across a range of practical applications in fields including, but not limited to, agriculture, biotechnology, clean technology, education, finance, medicine and public health, and transportation,” it said. “Rapidly accelerating science and technological innovation will require foresight for policymakers to imagine how societies can optimize these benefits. Without proper safeguards, however, powerful AI could induce severe and, in some cases, potentially irreversible harms.”

Ground AI policymaking in “empirical research,” using “a broad spectrum of evidence,” it recommended. “The early technological design and governance choices of policymakers can create enduring path dependencies that shape the evolution of critical systems. … Proactively conducting risk assessments and developing appropriate risk mitigation strategies can help integrate safety considerations into early design choices.”

Start by requiring industry to provide information about its AI systems, the report said. “Greater transparency, given current information deficits, can advance accountability, competition, and public trust as part of a trust-but-verify approach.” Protecting whistleblowers and third-party evaluators could increase transparency, it added.

Also, the California AI group recommended reporting systems for adverse outcomes of AI. “Existing regulatory authorities could offer clear pathways to address risks uncovered by an adverse event reporting system, which may not necessarily require AI-specific regulatory authority.” Policymakers should set -- and update over time -- thresholds for when interventions like disclosure, third-party assessment or adverse event reporting are required, it said.

California released the report as Congress considers a 10-year moratorium on enforcement of all state-based AI laws (see 2506130027 and 2506120083).

“As [President] Donald Trump chooses to take our nation back to the past by dismantling laws protecting public safety, California will continue to lead the way with smart and effective policymaking,” said Newsom.