On March 17, Colorado Governor Jared Polis released a draft bill that would substantially overhaul the Colorado AI Act, replacing its core requirements with a narrower regime focused on disclosure, recordkeeping, and consumer notice requirements for “automated decision-making technology” (“ADMT”).  The proposal, which is still in draft form and not yet introduced in the General Assembly, marks the latest stage in a multi-year effort to reform the Colorado AI Act ahead of its June 30, 2026, effective date.

As enacted in May 2024, the current Colorado AI Act imposes separate, overlapping requirements for developers and deployers of “high-risk AI systems” that are used to make “consequential decisions” about consumers, including a duty of reasonable care for developers and deployers to protect consumers from “algorithmic discrimination.”  The draft bill would abandon that framework, repealing the duty of care and eliminating the incident reporting, impact assessment, and risk management requirements.  Instead, the draft bill favors a lighter-touch approach focused on transparency and consumer privacy, with narrower disclosure obligations for consumer-facing entities.  If adopted, the proposal would more closely align with the approach taken by the California Privacy Protection Agency in its California Consumer Privacy Act (CCPA) ADMT Regulations.

Covered ADMT.  The draft bill would apply solely to “covered ADMT,” or ADMT “used to materially influence a consequential decision.”  While the draft bill’s definitions are arguably broader than their Colorado AI Act equivalents, the draft bill also contains new exceptions that would narrow its scope.

  • ADMT.  Similar to the CCPA ADMT Regulations, the draft bill defines “ADMT” broadly as “any technology that processes personal information and uses computation” to make or assist in decisions about individuals.  Unlike the Colorado AI Act’s “AI system” definition, however, the draft bill’s “ADMT” definition would exempt certain summarization tools, chatbots subject to acceptable use policies, and other technologies that do not rely on machine learning, regardless of whether the technologies are used to make consequential decisions “when deployed.”
  • Consequential Decision.  The draft bill expands this term to refer to a “decision, determination, or action for a consumer, employee, or applicant,” including a decision or action that “materially affects” the provision of, or “materially influences” the terms of, a covered opportunity or service.  The draft bill also would provide several exceptions to this term not found in the Colorado AI Act, including exceptions for “low-stakes or routine decisions, actions, and business processes” and “advertising, marketing, differentiated product recommendations, search, or content moderation.”
  • Material Influence.  The draft bill defines “materially influences” as an “ADMT output” that (1) is a “non-de minimis factor that is used in making a consequential decision” and (2) “affects the outcome of the decision.”  Unlike the Colorado AI Act’s “substantial factor” test, the draft bill’s “materially influences” definition would exclude “incidental, trivial or clerical uses” of ADMT.

Covered Entities.  The draft bill would define “developer” as any person that (1) “develops, offers, sells, leases, licenses, or otherwise makes commercially available a covered ADMT”; (2) “develops a component” for use as part of a covered ADMT in consequential decisions; or (3) “intentionally and substantially modifies an ADMT such that it becomes a covered ADMT.”  At the same time, the draft bill adds exemptions not found in the Colorado AI Act, including for developers of ADMT solely for internal purposes, and generally limits developer liability for downstream uses of ADMT “not intended, documented, marketed, advertised, configured, or contracted for by the developer.”  The draft bill also expands the Colorado AI Act’s definition of “consumer” to expressly include employees and job applicants.

Developer Obligations.  The draft bill would significantly narrow developer obligations compared to the current Colorado AI Act, applying only where a developer markets, provides, or intends for ADMT to be used in consequential decisions.  Covered developers would be required to (1) provide deployers with information regarding intended uses, limitations, and training data; (2) notify deployers of material updates, substantial modifications, and changes to that information; and (3) retain records demonstrating compliance for at least three years.

Deployer Obligations.  Covered deployers would be subject to a similar three-year record retention requirement, starting from the “date of the consequential decision.”  The draft bill’s other deployer obligations focus solely on consumer-facing notices and disclosures, while removing the Colorado AI Act’s deployer impact assessment and incident reporting requirements:

  • Point-of-Interaction Notice or Public Posting.  Covered ADMT deployers would be required to provide clear and conspicuous notice to consumers that the deployer “uses covered ADMTs for consequential decisions” and how to obtain additional information.  Alternatively, deployers may satisfy this requirement by maintaining a “prominent public notice” at points of consumer interaction.
  • Post-Adverse Decision Disclosure. If a deployer’s use of ADMT for a consequential decision results in an “adverse outcome” for a consumer, the deployer would be required to provide the consumer with information about the consequential decision, the role of ADMT, and how the consumer can request additional information, request and correct their personal data, and request meaningful human review or reconsideration.

Enforcement.  As under the Colorado AI Act, the proposal would grant the Colorado Attorney General exclusive enforcement authority pursuant to Colorado’s consumer protection statute.  Unlike the existing law, however, the draft bill includes a 90-day cure period and allocates liability between developers and deployers “based on their relative fault for the violation.”  It also narrows two existing exemptions: the exemption for creditors would apply only to entities that provide an equivalent notice under federal financial regulations, and the exemption for HIPAA covered entities would apply only to covered entities that notify patients of the use of ADMT.  Finally, the draft bill would prohibit indemnification provisions between developers and deployers related to uses of ADMT that violate Colorado’s antidiscrimination laws.

The draft bill’s future is far from certain.  Upon signing the Colorado AI Act in May 2024, Governor Polis called on the General Assembly to amend the Act; in June 2024, Governor Polis and other state officials announced a “process to revise” the Act to harmonize its definitions with federal and state frameworks and reduce compliance burdens.  Despite these efforts, Colorado lawmakers failed to pass significant amendments to the Act in the 2025 legislative session, instead extending the Act’s effective date to June 30, 2026, to consider further amendments this year.  The ongoing effort to preempt “onerous” state AI laws under President Trump’s AI Preemption Executive Order, which expressly criticized the Colorado AI Act for requiring “ideological bias within models,” could lead Colorado lawmakers to push the draft bill, or other substantive amendments to the Act, over the finish line in 2026.

Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics, developing strategies to guide businesses facing complex legislative, regulatory, and investigative matters. Matt draws on more than 15 years of experience across Capitol Hill, private practice, state government, and political campaigns to advise clients on leading-edge policy issues involving artificial intelligence, semiconductors, connected and autonomous vehicles, and other critical and emerging technologies.

Matt works with clients to develop and execute complex public policy initiatives that involve legal, political, and reputational risks. He regularly assists clients to:

Develop public policy strategies
Draft federal and state legislation and regulations
Analyze legislation, regulations, and other government initiatives
Craft testimony, regulatory comments, fact sheets, letters and other advocacy materials
Prepare company executives and other witnesses to testify before Congress, state legislatures, and regulatory bodies
Represent clients before Congress, the White House, federal agencies, state legislatures, and state regulatory agencies
Build and manage policy advocacy coalitions

He advises clients across multiple policy areas, including matters involving regulation of critical and emerging technologies like artificial intelligence, connected and autonomous vehicles, and semiconductors; national security; intellectual property; antitrust; financial services technologies (“fintech”); food and beverage regulation; COVID-19 pandemic response and recovery; and election administration and campaign finance.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee. Most significantly, Matt staffed the Committee in passing the Electoral Count Reform Act, a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections, and in the Committee’s bipartisan joint investigation (with the Homeland Security Committee) into the security planning and response to the January 6, 2021 attack on the Capitol.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory matters and managing state-level advocacy efforts.

In addition to his policy work, as a member of Covington’s nationally recognized (Chambers Band 1) Election and Political Law Practice Group, Matt advises and represents clients on the full range of political law compliance and enforcement matters, including:

Federal election, campaign finance, lobbying, and government ethics laws
The Securities and Exchange Commission’s “Pay-to-Play” rule
Election and political laws of states and municipalities across the country

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA), where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

Vanessa Lauber

Vanessa Lauber is an associate in the firm’s New York office and a member of the Data Privacy and Cybersecurity Practice Group, counseling clients on data privacy and emerging technologies, including artificial intelligence.

Vanessa’s practice includes partnering with clients on compliance with federal and state privacy laws and FTC and consumer protection laws and guidance. Additionally, Vanessa routinely counsels clients on drafting and developing privacy notices and policies. Vanessa also advises clients on trends in artificial intelligence regulations and helps design governance programs for the development and deployment of artificial intelligence technologies across a number of industries.

August Gweon

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.

Irene Kim

Irene Kim is an associate in the firm’s Washington, DC office, where she is a member of the Privacy and Cybersecurity and Advertising and Consumer Protection Investigations practice groups. She advises clients on a broad range of issues, including U.S. state and federal AI legislation, comprehensive state privacy laws, and regulatory compliance matters.