
New York Proposes Comprehensive AI Bill With AG Enforcement

A comprehensive New York bill to regulate AI surfaced ahead of the state’s legislative session that opens Wednesday. The Assembly referred A-768 by Assemblymember Alex Bores (D) to the Consumer Affairs and Protection Committee.


The New York bill aims to prevent businesses from using AI “algorithms to discriminate against protected classes.” The state AG would have exclusive authority to enforce the bill, which has no private right of action. For one year after the proposed Jan. 1, 2027, effective date, the bill would give businesses a 60-day right to cure.

The bill defines algorithmic discrimination as “any condition in which the use of an artificial intelligence decision system results in any unlawful differential treatment or impact that disfavors any individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, English language proficiency, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected pursuant to state or federal law.”

Starting Jan. 1, 2027, deployers of high-risk AI decision systems “shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination,” the bill says. In any complaint brought by the state attorney general, a deployer would have a rebuttable presumption that it used reasonable care if it complied with the bill’s risk management section and an independent third party completed bias and governance audits of the AI system.

Also, deployers would have to notify consumers before using a high-risk AI system to make a consequential decision and provide a statement disclosing the system's purpose and the nature of the decision. If the decision is adverse, the company would have to explain the reasons for it, how much the AI system contributed to the decision, and the type and source of data that the AI processed to make the decision. In that case, the consumer would be able to correct any incorrect data that was used and appeal the decision for human review.

In addition, the Bores bill would require a deployer to put a clear statement on its website about what AI decision systems it uses, “how such deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise” from deploying each system and, “in detail, the nature, source and extent of the information collected and used by such deployer.”

Colorado passed the first comprehensive state AI bill last year and is expected to consider amendments this session. Connecticut Senate leaders said last month that they would try for a second year to pass a sweeping AI bill. Also, Texas Rep. Giovanni Capriglione (R) prefiled a broad AI bill (HB-1709) Dec. 23.