Software Companies Urge New York State to Tap Brakes on AI Bills
New York's Senate rushed through AI legislation without taking stakeholder feedback -- favorable or unfavorable -- into account, the Business Software Alliance (BSA) said Friday. Meanwhile, the Software & Information Industry Association (SIIA) said it’s dissatisfied with recent changes to one of the bills.
Privacy Daily provides accurate coverage of newsworthy developments in data protection legislation, regulation, litigation, and enforcement for privacy professionals responsible for ensuring effective organizational data privacy compliance.
On Thursday, senators passed the New York AI Act (S-1169), which aims to regulate high-risk AI with enforcement by the attorney general and through a private right of action (see 2506120087). The same day, the legislature passed the Responsible AI Safety and Education (RAISE) Act. That bill (S-6953/A-6453), which SIIA previously opposed as "premature" (see 2505300042), seeks to regulate the training and use of frontier AI models.
The RAISE Act will go next to Gov. Kathy Hochul (D). The New York AI Act still needs approval from the Assembly, whose version of the bill (A-8884) is pending in the Ways and Means Committee.
S-1169 “would go far beyond what has been enacted in California or the European Union [and] is not ready for serious consideration by the Assembly,” said BSA. “It conflates the roles of different actors along the AI value chain, holding companies legally responsible for actions that may be taken by others. It also establishes an extensive and unworkable third-party audit regime and fragmented enforcement through private lawsuits.”
However, the Electronic Privacy Information Center applauded Senate passage of the New York AI Act. EPIC said the bill "prioritizes transparency and accountability in AI development and use and gives people meaningful rights when AI is used to make high-stakes decisions about them."
Meanwhile, BSA said it planned to share its concerns about the RAISE Act with Hochul. While “intended to address worthwhile considerations around AI safety for large frontier models, it relies on a vague and unworkable incident reporting scheme,” the trade group said. “The bill would also undermine safety protections for frontier models by requiring developers of those models to publish their safety protocols -- creating a roadmap for bad actors to exploit.”
Recent edits to the RAISE Act “are improvements but do nothing to address the fundamental concerns we have raised,” said SIIA. The way the bill defines “critical harm” and “safety incident” would make it “literally impossible” for AI developers to implement, the software industry group said. “This is because AI models of the type covered in this bill are intended to be used and modified downstream; in fact, that adaptability is one of the fundamental reasons for open source AI models."
“Developers can convey their intentions about how models should be used, and they can build-in safeguards, and US-based developers are doing that,” SIIA said. “But it is not possible to track, assess, and evaluate all downstream uses and to prevent any downstream uses that may be unintended or may circumvent safeguards.”