Privacy Daily is a service of Warren Communications News.
Too Soon?

Connecticut 'Can't Wait for the Feds' on AI Regulation, Say Senate Leaders

The Connecticut Senate will vote on an AI bill by Sen. James Maroney (D) this year, as it did last year, declared President Pro Tempore Martin Looney (D) at a press conference ahead of a Wednesday hearing on SB-2. While Looney said passage of the bill is urgent, Connecticut's chief innovation officer told a hearing the state risks regulating too soon and getting it wrong.


Like having a public health code, “this is one of those times when we can’t wait for the feds,” Looney said at the news conference. Majority Leader Bob Duff (D) agreed with Looney that the federal government is unlikely to act. Apparently referring to Elon Musk, Duff added that Americans are scared because a “tech baron who is now co-president of the United States … is probably looking to scrape the data of the federal government for his own purposes.”

“To say you want a federal solution is to say you want no solution,” said Maroney, author of Connecticut’s comprehensive privacy law. The last major federal privacy bill was the 1998 Children's Online Privacy Protection Act, which predated social media by about a decade, he noted. The state's proposed AI law is intended to dovetail with the Connecticut Data Privacy Act, Maroney told Privacy Daily last month (see 2412200047).

Maroney’s previous AI bill passed the Senate last year, but Gov. Ned Lamont (D) raised concerns and the House never voted on it. Lamont, who this year recommended another Connecticut AI bill (SB-1249), has signaled he wants to work with the legislature on protections against deepfakes and discrimination, said Maroney.

SB-2 would specifically address concerns about using automated decision-making technology (ADMT) in hiring, housing and other important decisions, Maroney said. The state applied similar requirements to government usage of ADMT in 2023, he noted.

SB-2 seeks guardrails on ADMT, including transparency requirements so that people “know if AI is being used to make an important decision about your life,” he said. It would also require impact assessments to make sure AI is fair and accurate, Maroney added. Moreover, the bill would criminalize deepfake revenge porn and promote workforce training.

Compared with the AI bill that failed last year, SB-2 has adjusted definitions and a narrower scope, said Maroney, who received input from Colorado, which enacted the nation’s first AI law. SB-2 has minor differences from the Virginia AI bill that the legislature passed this month (see 2502200003) and that still needs gubernatorial approval, he said. Legislatures in Rhode Island, New York and Massachusetts also have similar bills awaiting votes, Maroney noted. Duff said that similarity among the various states’ AI bills means there won’t be a patchwork.

But afterward at a General Law Committee hearing, Connecticut Chief Innovation Officer Daniel O’Keefe suggested scrapping SB-2’s “regulatory components” while keeping language promoting economic development. Applying “new and complex statutory requirements” to AI creates risk and may chill private sector investment due to higher compliance costs, said O’Keefe, who prefers Lamont’s approach in SB-1249. “Rather than being a state that welcomes and supports innovation, we become the only state in the region that resists it.”

Besides, Connecticut already has broader non-discrimination rules that should cover AI, said O’Keefe, who is also commissioner at the state’s Department of Economic and Community Development. “AI does not make the illegal, legal.” O’Keefe said he’s not “anti-regulation,” but fears government is intervening too soon. “My concern right now is we’re attempting to regulate fear of the unknown.”

“We're actually looking at regulating known risks,” replied Maroney, the committee’s Senate chair. Legislators mustn’t abdicate their responsibility, he added.

Later in the hearing, House Chair Roland Lemar (D) asked Looney to respond to O’Keefe’s argument that lawmakers shouldn’t regulate too soon.

“You have to look at how important is the entity you are considering regulating, and does it have potential dangers as well as potential benefits?” responded the Senate president pro tempore. “And I would argue that the potential dangers are so high here, it argues in favor of as early [a] regulation as we can.”

Lemar agreed. “Regulation can be used as a boogeyman ... to encourage state governments to do nothing.”

In other testimony at the hearing, TechNet Executive Director-Northeast Chris Gilrein urged that legislators focus on “known risks ... and not get into hypothetical scenarios with terms like ‘reasonably foreseeable.’” He also urged lawmakers to avoid imposing burdensome requirements on a “nascent industry” or overlapping with rights consumers already have under the state’s comprehensive privacy law.

Later, Maroney pressed State Privacy &amp; Security Coalition attorney William Martinez about industry’s concern that requiring annual AI impact assessments would be too frequent. Martinez said companies should have to redo impact assessments only when they have substantially modified AI models. But Maroney said he’s not convinced that’s often enough, since AI models change as they “ingest more data” and learn.

“This is not importing a European model,” said Maroney in response to a claim to the contrary by Adam Thierer, R Street senior fellow. Thierer said he agreed with O’Keefe’s concerns about regulating out of fear. But Maroney stressed “it’s not about fears of the unknown. We are looking at actual harms that have happened.”

The AI bill has several important parts, including rights to receive a pre-use notice and post-decision explanation, to correct personal data used in a decision and to appeal the decision, said Grace Gedye, Consumer Reports (CR) policy analyst. “It also has some provisions that must be strengthened, and loopholes in liability shields that we worry will allow companies to escape responsibility.”

Privacy Law Updates

The committee also took testimony on another Maroney bill (SB-1356) to update the state’s comprehensive privacy law, including by tightening exemptions and adding data minimization rules (see 2412300043).

Electronic Privacy Information Center Deputy Director Caitriona Fitzgerald supported the proposed changes. “There have been some lessons learned and some new developments since the law was originally enacted,” Fitzgerald said. “The provision that is most obviously in need of a fix is the limit on data collection.” She supported updating it to the level of Maryland’s data minimization standard.

SB-1356 makes “several commonsense changes to raise the standard of consumer privacy protection in the state,” addressing issues that consumer privacy advocates and the attorney general’s office raised, testified Matt Schwartz, CR policy analyst. Like Fitzgerald, Schwartz especially praised the bill’s proposed data minimization standard. However, Connecticut should go further than what’s been proposed by banning sensitive data sale outright, he said.

In addition to data minimization, the bill would improve the current privacy law by broadening whom it covers and what counts as sensitive data, said Eric Null, a co-director at the Center for Democracy and Technology.

However, State Privacy &amp; Security Coalition counsel Andrew Kingman objected to the proposed changes on data minimization and other provisions he said would misalign Connecticut with other states. “We do not see a reason to depart from the current data minimization framework,” which is also used in Europe, California and all other U.S. states except Maryland, he said. “Moving to the current proposed language could restrict access to updates for current products, patches for security” and “may result in more click-throughs for consumers.”