Privacy Daily is a service of Warren Communications News.
'On Record'

Rodriguez’s Update to Colorado AI Act Clears First Committee

Colorado's Senate Business Committee on Thursday voted 4-3 to advance a proposal from Senate Majority Leader Robert Rodriguez (D) to update the state's AI law. The bill advanced with four amendments to the Senate Appropriations Committee.

Rodriguez said during Thursday's hearing that he borrowed bill language from the federal AI moratorium proposal by Sen. Ted Cruz, R-Texas.

The committee kicked off the August special session by considering SB-4, the Colorado AI Sunshine Act, introduced by Rodriguez and Rep. Brianna Titone (D). Several industry groups testified that the bill, which would update the state's comprehensive AI law written by the same authors, needs additional revisions, particularly to broad definitions and terms for AI deployers and developers.

The House Business Committee was scheduled to hear testimony later Thursday on alternative AI bills by Rep. Ron Weinberg (R) and a bipartisan group including Rep. William Lindstedt (D). In addition, the Senate State Committee was scheduled later the same day to consider a proposed repeal of the Colorado AI Act by Sen. Mark Baisley (R). Rodriguez and Titone rallied for the bill at a press conference Wednesday (see 2508200025).

Rodriguez told the hearing that his office borrowed the definition of automated decision-making that Cruz included in his moratorium proposal. “It’s kind of weird when we don’t like this broad definition, but we would follow federally,” he said. “I just want to get that on record.”

The majority leader said he remains open to tweaking the legislation's language to narrow the law's scope. He asked whether developers should be held liable when a deployer uses their AI technology and a consumer is harmed. Harmed parties, he noted, often want to hold the deployer responsible even though the developer created the technology.

Colorado Technology Association Director Adams Price responded that the developer has no relationship with the consumer and therefore no visibility into the interaction. In that instance, the developer shouldn't have liability, argued Price. “Someone should have liability, but I don’t think both should have liability.”

CTA, TechNet and Colorado Chamber Foundation were among the groups that opposed SB-4 at the hearing.

Andrew Wood, TechNet executive director for Colorado, said SB-4 “needs substantial amendments to be workable.” The definition of algorithmic decision system is “overly broad” because it includes anything that uses data to create an outcome, and that definition and others need work, he said. The definition of deployer should be limited to entities that “design or materially modify algorithmic decision systems and not those that merely sell or host them.”

The American Civil Liberties Union-Colorado, Electronic Frontier Foundation, Consumer Reports and TechEquity testified in support of SB-4.

Consumer Reports policy analyst Grace Gedye said entities using AI systems should be held responsible: If a landlord, for example, can’t explain how the technology affects algorithmic decisions, they can’t possibly know whether they’re violating anti-discrimination laws. “This bill is reasonable and simple,” she said. “It’s the only bill this body will consider that actually addresses the issues surrounding AI decision-making.” Other consumer groups noted that SB-4 doesn’t include a private right of action, which business groups frequently oppose.

The focus of the AI law should be privacy and disclosure about the impacts of AI technology, said Rodriguez at the start of the hearing. The law’s original provision allowing a consumer to appeal, which he said caused a lot of friction, has been altered to give consumers a right to correct bad information. “That’s the core value of the policy, is disclosure and trust in the decisions that are being made,” he said. Rodriguez conceded some definitions still need to be tweaked and “whittled down.”