
ACLU: Racial Justice Must Be Central Concern for AI Regulatory Framework

Racial justice must be at the center of AI-related regulations to prevent discrimination and the potential creation of a surveillance state, American Civil Liberties Union (ACLU) officials said Thursday during a panel.


ReNika Moore, director of the ACLU’s Racial Justice Program, said AI is particularly prominent in the early pipeline stages of the hiring process. For example, it's used to target potential candidates, matching them with jobs and ranking or rejecting them during initial screening, she said. “We know from prior research and from reporting from clients ... that they are facing discrimination [and/or] ... disadvantage because of what's happening in the pipeline,” she added.

While people sometimes know they are communicating with AI during the application process, “what people often don't know ... is how those answers are being transferred to a human later in the process,” she said. The information is often summarized and “potentially analyzed, and recommendations may be given based on” attributes like facial expressions or the pitch or tone of a voice.

But “when [job applicants] know that a system is being used, it allows them to better assert their rights” and “potentially ask ... what's legally available to them in the form of ... a reasonable accommodation,” Moore continued.

Marissa Gerchick, data science manager and algorithmic justice specialist with ACLU Technology, said AI must be used carefully in areas where discrimination is present, such as law enforcement. Nathan Freed Wessler, deputy director of the ACLU Speech, Privacy & Technology Project, said the widespread use of facial recognition technology in law enforcement is an example of that.

“We often say that this is a technology that's dangerous when it works and dangerous when it doesn't,” said Wessler. “Even a perfectly functioning biometric surveillance tool raises a lot of really scary questions and can really just turn into a tool of oppression.”

"There are vast demographic disparities in the false-match rates that a lot of research from the federal government, from independent researchers, has reinforced.” Accordingly, it's not a surprise to see "wrongful arrest of Black people after police have relied on what turned out to be incorrect results of this technology.”

Wessler said “the real nightmare scenario” is tracking on live or recorded video, which erases “our ability to go about our lives without being pervasively identified and tracked by the government.”

Cody Venzke, ACLU senior policy counsel for surveillance, privacy and technology, said his team's “ultimate goal ... is passing legislation at the federal level that provides a civil rights and civil liberties baseline for all people ... and then, of course, permitting states to build on that.” Such a framework would apply to privacy and AI, “whether that be decision-making in protected areas of life ... or in certain surveillance technologies like facial recognition technology.”

Venzke noted that states recognize “AI is being used to make decisions about our access to economic opportunity, housing, education, employment, credit and more.” Colorado’s first-in-the-nation AI discrimination law, enacted in 2024 (see 2505300046), is an example of this, he added.

Gerchick said “legislation in this area can be a battleground of what we're even talking about when we use the term AI, and how important it is to be really clear about what that means and what it includes and doesn't include.” She also said it's important to distinguish newer generative AI, like ChatGPT, from the less complex AI systems that were already in use in high-stakes areas such as surveillance and hiring.

Gerchick said that “one of the really important points ... is the idea of how much people know about how ... data is being collected about their lives, how they're included in ... datasets, and how they're excluded from these datasets” that train AI models.

That's why, Venzke said, transparency and notice are key aspects of an effective policy. “Often it can be hard to know exactly how that discrimination has taken place, or that discrimination has potentially taken place, without some sort of notice, without transparency, and that leads you then to sort of guess and file a lawsuit and hope you discover something during the course of discovery.”

This panel occurred the same day that the Massachusetts attorney general announced a settlement with student loan lender Earnest Operations for consumer protection violations, including the use of AI in decisions that led to unfair and discriminatory outcomes (see 2507100041).