New York Will File Frontier AI Model Legislation Similar to California's
New York State Assemblymember Alex Bores (D) plans to file legislation later this month regulating frontier AI models, with concepts similar to a bill vetoed in California last year.
State legislators around the country are preparing bills like California’s SB-1047, but the New York legislation will differ in “pretty substantial ways,” Bores told us in December. His office said Wednesday it's “fine-tuning” bill language, and he expects to file later this month.
“There’s some role for the government to think about the trade-off” between the risks and benefits of large frontier AI models, said Bores. SB-1047 included several legislative “choices” New York should avoid, he said.
SB-1047, written by Sen. Scott Wiener (D), included civil penalty authority for the state attorney general. It would have required large AI developers to establish explicit protections for preventing critical AI-related harms.
In addition, it would have created a Board of Frontier Models to periodically update the regulations. The nine-member board would include an AI industry official and a cybersecurity expert. Companies would need to show they have established “reasonable administrative, technical, and physical cybersecurity protections to prevent unauthorized access to, misuse of, or unsafe post-training modifications of” covered models and derivative models.
Gov. Gavin Newsom (D) vetoed the measure in September, saying it would give residents a “false sense of security” because it focused on the most expensive and large-scale models (see 2409300011). Newsom previously warned state lawmakers against overregulating AI technology (see 2405300064). Wiener called the veto a “setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.” He later said that California legislators will continue pushing to regulate the technology, despite the result in 2024.
Meanwhile, Bores on Tuesday introduced A-768, a bill with state-enforced civil penalties for companies engaging in AI-related discrimination (see 2501070076). Sen. Kristen Gonzalez (D) introduced AI discrimination legislation in the state Senate Wednesday. New York's legislation has similarities with comprehensive AI measures approved in Connecticut and Colorado.
Bores and representatives from the NewDEAL Forum's AI Task Force joined a conference call with Privacy Daily in December. Bores is a task force co-chair, along with Albany’s Chief City Auditor Dorcey Applyrs and San Jose Mayor Matt Mahan, whose jurisdiction includes Silicon Valley. The coalition is focused broadly on AI regulation, exploring everything from municipal use of the technology and election integrity to police applications and constituent services. Bores, Applyrs and San Jose’s Deputy Chief Information Officer Smita Bhattacharjee discussed 2025 priorities during the call. They said states and cities will continue leading in regulating AI technology in the absence of congressional action.
“We’re not always waiting for the federal government to come up with these decisions before we start acting,” said Bhattacharjee. She noted San Jose is grappling with questions about data privacy and the use of AI technology in city vehicles to detect blight, obstructions and other resident issues.
Applyrs said city law enforcement can use AI to detect gunshots for more immediate police response. Municipalities are finding “tangible” applications that are not largely discussed at the federal level, she added.
Bores, Applyrs and Bhattacharjee agreed the federal government can help by offering standard definitions for AI-related legislation. Policymakers are looking to the National Institute of Standards and Technology in particular, said Bores. NIST in July issued four publications on the safety and trustworthiness of AI systems in support of President Joe Biden’s executive order on AI. One of the reports lays out a “deliberately broad” plan for global engagement on promoting and developing AI standards.