FPF Contrasts AI Laws in New York and California
Though New York’s Responsible AI Safety and Education (Raise) Act is largely identical to California’s AI safety and transparency law, SB-53, there are key distinctions between them, said Justine Gluck, a policy analyst at the Future of Privacy Forum (FPF), in a blog post.
The two laws provide "a useful lens into how states are approaching frontier AI safety and transparency and where policymaking may be headed in 2026," she added.
One area where the laws diverge is scope. For instance, the Raise Act includes carveouts for universities engaging in research and applies only to AI models developed or operated in New York. Gluck predicted this would insulate the law against a Dormant Commerce Clause constitutional challenge, should one arise.
While SB-53 contains employee whistleblower protections, the Raise Act lacks them. The New York law, however, establishes a “frontier developer disclosure program,” which requires developers to provide information such as ownership structure -- a requirement SB-53 doesn't include.
Gluck noted that numerical elements of the laws differ as well. For instance, California allows a 15-day window for safety incident reporting, while New York's is 72 hours and uses stricter qualifiers. The Raise Act also carries higher penalties -- up to $1 million for a first-time violation and $3 million for subsequent violations -- whereas SB-53 caps penalties at $1 million per violation.
SB-53 took effect Jan. 1, and “will be the first real test of how a frontier AI statute operates in practice, with New York following shortly thereafter,” Gluck said. The laws take effect at a time of “renewed uncertainty over the balance between state and federal AI policymaking,” she added, citing the recent White House executive order on AI (see 2512110069 and 2512240013).
Both laws “could serve as reference points for future federal legislation,” the FPF policy analyst said, with 2026 a test to see whether they "function as models for broader adoption or face legal challenge.”