'So Much at Stake'

Labor and Industry Battle Over Mass. Bills Regulating AI

Regulating AI should center on limiting the technology's potential risks, labor representatives and other advocates told the Massachusetts Joint Committee on Advanced Information Technology, the Internet and Cybersecurity at a hearing Thursday. Their goal, they said, includes protecting state residents from AI's possible harms while also letting them reap its benefits.

This balance is “essential,” which is why “all voices” should be heard on these questions, said Rep. Tricia Farley-Bouvier (D), one of the committee chairs. “We have so much at stake.”

During the session, lawmakers heard support for S-35, cross-filed as H-77 and also known as the Fostering Artificial Intelligence Responsibility (FAIR) Act. Sen. Dylan Fernandes (D) sponsored S-35; Farley-Bouvier sponsored H-77.

Massachusetts AFL-CIO President Chrissy Lynch said, “We need a framework now that protects workers and Massachusetts residents from having their privacy, their data and their jobs compromised by this rapidly evolving and expanding technology,” and the FAIR Act starts this conversation. It “includes strong worker-data privacy provisions” that limit “employer data collection to only essential job functions,” and establishes notice requirements, she said.

Responding to questions about whether AI regulations will slow innovation, Lynch said that “before the [1935] National Labor Relations Act,” opponents said child safety and overtime laws and the weekend “would crush business.” She added, “I don't buy that.”

“There are ways that we can lean in on the good parts of this technology and make them accessible and equitable for all people, but only if we limit or mitigate the potential risk, which is vast and … impossible to even really wrap our heads around,” Lynch said.

Other labor executives who testified in favor of the FAIR Act represented 1,500 workers from six employers along the North Shore of Boston. Companies “have rapidly, and without notification, implemented AI systems to cut costs and pad profits in shortsighted ways that hurt workers, customers and communities,” said Adam Kaszynski, president of IUE-CWA Local 201.

“We need the state to step in, intervene and regulate these companies” to ensure AI does positive things “and not these things, like denying our claims, constantly keeping us out of work, and keeping us from getting wage replacement when we need it,” he added.

Crystal Weise, policy and program director for the AFL-CIO Technology Institute, said that “innovation should never be promoted at the expense of worker safety or worker well-being,” despite the tech industry’s push for AI-first policies. The Center for Democracy and Technology’s Matt Scherer, who leads the workers’ rights project at the nonprofit, also supported the FAIR Act's worker protections.

The committee also received testimony on H-94, which aims to promote accountability and transparency in AI. “We are, in some way, the deployers [of AI], as lawmakers,” said Rep. Francisco Paulino (D), who sponsored the bill. “These guidelines of social policy and accountability … give us the information we need to make decisions about how these developers should put guidelines” around AI so that we “don't sacrifice what we call … the human experience.”

John Weaver, an AI attorney with McLane Middleton, encouraged lawmakers to add a private right of action to H-94. “Private rights of action allow governments to crowdsource enforcement,” he said. “A law that requires government action for enforcement has a limited number of police,” while “a law with a private right of action has far greater policing power.”

A parent who lost a child to social media harms spoke in support of S-51, which, like H-94, emphasizes algorithmic accountability and transparency. “Massachusetts has the opportunity to lead the nation with this legislation by establishing clear reporting requirements, independent audits and strong enforcement mechanisms,” the parent said. It can also create “a model that other states and even Congress can follow.”

“Most importantly, it puts the health and well-being of children before the profit motives of big tech companies,” she added.

Nancy Costello, a law professor at Michigan State University, also supported the measure. S-51 “has been crafted very narrowly to withstand First Amendment challenges” and challenges under Section 230 of the Communications Decency Act, she said. “The Act does not shut down speech or require social media companies to change their content,” but “measures and reports the harm caused by the engagement-based algorithms.”

However, Christopher Stark, executive director of the Massachusetts Insurance Federation, raised concerns about all the bills. He said the definition of AI in H-94 is too broad, as is the definition of covered platform in S-51. For the FAIR Act, he asked that “risk mitigation tools” be exempt.

Chamber of Progress is generally wary of bills like those presented during the hearing, said Brianna January, its northeast state & local government relations director. “We respectfully urge the committee to not buy into narratives of science fiction and the Terminator as referenced earlier and overregulate to the point of hindering innovation.”

“We support any effort to prevent bias, whether it's based on human decision-making or algorithms,” she added. “That said, it’s more than likely … that Massachusetts' existing anti-discrimination laws would cover, in practice, algorithms making decisions.” Where algorithms do fall through loopholes in those laws, a sector-by-sector approach is needed, she said.