Tech Group Sounds Alarm About 'Unworkable' New York Bill Regulating Frontier AI
A software industry group called New York state legislation (S-6953/A-6453), which would regulate the training and use of frontier AI models, “premature.”
“While we appreciate and understand the goal of ensuring the safe and equitable distribution of artificial intelligence (AI) innovations,” the Software and Information Industry Association (SIIA) wrote Friday, the proposed law “would create substantial obstacles to innovation, impose unworkable burdens on technology developers, and negatively impact the broader technology ecosystem, both within New York and nationally.”
SIIA sent the letter to the bills’ Democratic sponsors, Assemblymember Alex Bores and Sen. Andrew Gounardes, plus two legislative committee chairs.
Among other issues, SIIA raised concerns with how the legislation defines several terms. “The definition for critical harm has an underlying assumption that developers will be able to anticipate every potential use for a frontier model,” it said. Additionally, “the expansive definition of ‘safety incident’ turns what is styled as a framework to protect against critical harm into a sweeping regulation of frontier models.”
“We are also concerned that requiring developers to obtain affirmative approval before introducing new models will hinder AI development, with a disproportionate impact on small developers who lack resources to navigate complex, and ambiguous, regulations,” SIIA said.