Privacy Daily is a service of Warren Communications News.
A 2025 Priority

Connecticut AI Bill Dovetails with Privacy Law, Says State Senator

A reintroduced Connecticut AI bill aims to build on the state’s 2022 comprehensive privacy law, state Sen. James Maroney (D), the privacy law’s author, said in an interview. Maroney’s second attempt at establishing AI requirements will be a priority bill for majority Democrats in the Connecticut Senate next year, Maroney, Senate President Pro Tempore Martin Looney and Majority Leader Bob Duff said in a joint announcement last month.


Maroney filed SB-2 with placeholder text on Tuesday. The lawmaker told us last month that he planned to publicize a draft AI bill for feedback in early January. With the session starting Wednesday, the senator expects a vote to draft the formal bill by Jan. 17. And he predicted a hearing on the bill in late January or early February. Maroney is preparing a separate bill that updates the state's comprehensive privacy law (see 2412300043).

The AI bill will focus on transparency, accountability, workforce training and criminalization of nonconsensual intimate deepfake images, the three Democrats said last month. “Without regulation, AI poses risks such as bias, privacy violations, and unforeseen societal impacts,” said Duff. “We must be proactive so AI does not negatively impact us before it is too late.”

The Connecticut Senate approved a similar Maroney AI bill (SB-2) last year, though it stalled in the House. In the meantime, Colorado became the first state to enact a broad AI bill. Looney said Connecticut should be next. “Connecticut needs to require guidelines to ensure decisions are made fairly, accurately, and transparently,” he said. “Without these regulations, the technology could outpace our ability to manage its risks, creating unintended consequences for our state.”

Regulation is needed to “mitigate the downside” of AI, Maroney told us last month. He cited recent surveys by Heartland Forward and Pew Research Center showing that many people are concerned about AI and want government regulation. “It’s important that we put in some guardrails and safeguards now so that people can feel safe and companies know clearly what they can do.” It’s states’ responsibility “because we’ve seen ... that the federal government hasn’t acted” on issues such as privacy. In addition, social media “got away from us” due to a lack of rules. “We don’t want to make the same mistake” with AI.

“The foundation of AI regulation should be strong data privacy regulations,” he said. One important way the AI bill will interplay with the state privacy law “is if a decision is being made about you, and they’re using data to make that decision, we already [gave] you the right to access data that was collected about you in our data privacy law,” he said. “We just want to go a step further [to say] that if you’re making a decision,” a business should “let people know what the decision was based on.” The policies also dovetail because the privacy law gave consumers the right to correct data that could lead to an incorrect AI-based decision, he said. “These are already existing rights, so just tying them into these decisions is important.”

Maroney said he's trying to figure out how the AI bill should deal with the privacy law's right to delete data collected on consumers. “That is interesting … because if data has been collected about you [and] been used to train a system … it can’t unsee that data unless you completely retrain the system.”

The 2025 AI bill will be “very similar” to the 2024 measure, though it omits a section on election deepfakes. Also, it narrows some terms and will cover integrators, not just deployers and developers as before, he said. Maroney is “hopeful” that the bill fares better this year, given that passing it won’t make Connecticut the first state with an AI law. Roughly a dozen other states are likely to have similar bills, too, Maroney said. He added that Connecticut has now had three years of stakeholder talks.

Meanwhile, in California, state Sen. Scott Wiener (D) introduced placeholder text for a bill (SB-53) that appears to be a reintroduction of a controversial 2024 AI frontiers bill that Gov. Gavin Newsom (D) vetoed (see 2409300011 and 2409060039). Last year's bill would have required large AI developers and those providing the computing power behind AI model training to implement protections preventing critical harms. Also, New York state introduced a comprehensive AI bill earlier this week (see 2501070076).