Groups See Need for Clearer Definitions in State AI Proposals
Legislators in states like Texas, Connecticut, New York and Massachusetts can set the tone for privacy-related AI laws in 2025, stakeholders told the Multistate AI Policymaker Working Group during a public feedback session Monday.
Connecticut Sen. James Maroney (D) and Maryland Sen. Katie Hester (D), members of the working group steering committee, heard feedback from the AFL-CIO, Workday, TechNet, Software & Information Industry Association (SIIA), Palo Alto Networks, Consumer Federation of America and Abundance Institute. Google, Microsoft, Amazon, Center for Democracy and Technology and Consumer Reports were also scheduled to testify during a session that the Future of Privacy Forum co-hosted.
AI and data-driven technologies are rapidly transforming workplaces and can hurt workers in many ways, including through surveillance, privacy abuse, discrimination and erosion of labor rights, said Amanda Ballantyne, AFL-CIO director-Technology Institute.
Ballantyne supported including a private right of action (PRA) in bills to incentivize responsible technology development, as well as requiring mandatory consultation with workers through collective bargaining or advisory committees. She noted New York’s S-1169 and Massachusetts’ SD-838 have PRAs and urged Maroney and Texas Rep. Giovanni Capriglione (R), another steering committee member, to add those and additional worker rights to their AI bills.
State AI legislation should require risk-based guardrails, impact assessments and clearly delineated company roles for the AI marketplace, said Evangelos Razis, Workday senior manager-public policy. He noted the Colorado AI Act (CAIA) and the proposals in Texas and Connecticut have incorporated many of these elements. Colorado Sen. Robert Rodriguez (D), a steering committee member, introduced the CAIA. Razis credited Virginia for including straightforward definitions on consequential AI decisions in HB-2094, which Virginia Del. Michelle Maldonado (D), also a steering committee member, introduced. A Virginia House committee advanced HB-2094 on Monday (see 2501270030).
TechNet and SIIA argued for clearer definitions in AI legislation. It’s of “utmost importance” that states move forward in a consistent and predictable manner on comprehensive AI legislation, said Christopher Gilrein, TechNet executive director-northeast region. He urged that lawmakers focus on “known risks.” TechNet is concerned about the inclusion of general purpose AI in some proposals. He told lawmakers to avoid overly general terminology and heavily burdensome disclosure of proprietary information.
The proposal in Connecticut, compared to the legislation in Texas, has greater precision on definitions for “algorithmic discrimination” and “consequential decisions,” said Bethany Abbate, SIIA manager-AI policy. SIIA has concerns about Connecticut’s disclosure requirements related to technical documentation, where companies would potentially share trade secrets with the state attorney general and third parties.
A PRA is the “gold standard” for enforcement, said Ben Winters, Consumer Federation director-AI and privacy. He also urged state attorneys general to create divisions focused solely on privacy and AI, as Texas has done under AG Ken Paxton (R). Though Texas lacks a PRA in its privacy law, it's demonstrating “good privacy enforcement,” said Winters: Paxton has a privacy division with “resources and money and attorneys” who enforce. “That’s easier said than done when a lot of states are strapped” for funding. He noted that the Texas AI proposal and the EU AI Act each include a list of prohibited AI practices; both measures ban AI-related social scoring and AI assessments of people based on social behavior or personal traits.