Privacy Daily is a service of Warren Communications News.

Civil Rights Group Releases Guide for Fair and Trusted AI Use

The Leadership Conference’s Center for Civil Rights and Technology on Thursday released a guide to help companies ensure the AI systems within their organizations are fair, safe and trusted, while protecting and promoting civil rights, especially those of marginalized groups.

The conference said its Innovation Framework is a vision for the development and use of AI products, tools and services, built through collaboration with various stakeholders.

The guide includes four foundational values for managing decisions and business strategy: (1) civil and human rights by design; (2) AI as a tool, not a solution; (3) sustainable innovation; and (4) humans are integral to AI. These values rest on 10 pillars spanning the AI life cycle, including identifying appropriate AI use cases, assessing for bias and discrimination, and protecting sensitive data, the guide said.

"AI can help or harm real people," said Maya Wiley, the group's president, in a press release. "When companies refuse to ensure that their AI innovates rather than discriminates, it can mean more expensive and worse health care, more denials of home loans to working families who deserve them, and more qualified candidates getting turned away from good jobs." Wiley said companies "can build trustworthy and better AI if they commit to a framework that truly innovates for all."

"Private industry doesn’t have to wait on Congress or the White House to catch up," added Kostubh Bagchi, vice president of the group's Center for Civil Rights and Technology.