Colo. AI Task Force Seeks More Time on Thorny Issues
Following months of discussion about revising Colorado’s first-in-the-nation AI discrimination law, a legislative task force recommended more meetings in a report released Monday.
The Artificial Intelligence Impact Task Force “recommends that discussions among policymakers and stakeholders continue in the coming weeks and months to achieve consensus and agreement where possible on changes to the law,” the report said.
“It is the task force’s hope that such continued engagement will lead not only to agreement on issues that appear ripe for consensus and compromise but also on some of the most contentious issues.” The group has met six times since last August.
Colorado’s landmark AI bill passed last year and takes effect in 2026, but a legislative task force was created to recommend tweaks by early February. Colorado Attorney General Phil Weiser (D) told us last month that his office would hold a rulemaking after lawmakers make any changes to the AI law this year (see 2412160042).
At a Dec. 20 hearing, tech industry groups asked that the AI Impact Task Force make it easier for companies to comply with the 2024 AI law. At the same time, multiple consumer privacy advocates said they wanted to modify the law by adding civil rights protections and a private right of action, increasing transparency and cutting exemptions.
The task force said its discussions “identified a number of potential areas where the law could be clarified, refined, and otherwise improved.” Also, “while there are distinct differences on some issues among stakeholders, particularly between representatives of industry groups and public interest groups, there are also many issues on which consensus or mutually acceptable compromise is achievable.”
The task force listed several examples of issues where it said finding consensus seemed possible given more time: (1) “More specifically defining the types of decisions that qualify as ‘consequential decisions’ under the law to ensure greater clarity for those who may be subject to the law’s requirements without excluding applications that significantly impact consumers and workers.” (2) “Reworking the lists of exemptions to the definition of covered decision systems and from the obligations that the law imposes on developers and deployers to address both industry and public interest concerns.” (3) Proposed changes to the provisions governing the information and documentation that developers provide deployers. (4) “Proposed changes to the timing of and triggering events for impact assessments.”
However, it could be more difficult to find consensus where changes would require editing in multiple interconnected sections of the law, the task force said. One example is changing the definition of “algorithmic discrimination,” a topic “where industry and public interest groups have strong disagreement on what obligations developers and deployers should have in preventing algorithmic discrimination and how those obligations are enforced.”
“Creativity” may be needed to overcome “firm disagreements” on other even harder issues, the report said. Examples include: (1) the “definition of ‘substantial factor,’ which helps define the scope of AI technologies that will be subject to the law” and (2) the “definition and mechanics of the ‘duty of care’ for developers and deployers, or whether even to continue to include the concept of a duty of care or to replace it with more or less stringent obligations.”
Other thorny issues include whether “to retain, minimize, or expand the small business exemption that currently exempts deployers with fewer than 50 employees from certain requirements in the law”; whether to include a right to cure possible violations; and possible changes to trade-secret exemptions, the consumer right to appeal, the scope of AG rulemaking and the law’s effective date.