In the absence of congressional action on comprehensive artificial intelligence (AI) legislation, state legislatures are forging ahead with groundbreaking bills to regulate the rapidly advancing technology.  On May 8, the Colorado House of Representatives passed SB 205, a far-reaching and comprehensive AI bill, on a 41-22-2 vote.  The final vote comes just days after the state Senate’s passage of the bill on May 3, making Colorado the first state in the nation to send comprehensive AI legislation to its governor for signing.  While Governor Jared Polis (D) has not indicated whether he will sign or veto the bill, if SB 205 becomes law, it would establish a broad regulatory regime for developers and deployers of “high-risk” AI systems. 

High-risk AI systems, as defined by the bill, are AI systems that make, or play a substantial part in making, consequential decisions that affect consumers.  SB 205’s duties and requirements would aim to minimize risks of algorithmic discrimination, or differential treatment or impacts that disfavor individuals or groups based on protected classifications, resulting from the use of high-risk AI systems.

Algorithmic Discrimination Duty of Care.  SB 205 would impose a duty of reasonable care on developers and deployers of high-risk AI to protect consumers from algorithmic discrimination.  The bill, which would be exclusively enforced by the Colorado Attorney General, would also establish a rebuttable presumption that high-risk AI developers and deployers meet this duty to use reasonable care if they comply with the bill’s requirements.

AI Interaction Notices & Public Disclosures.  SB 205 would require entities that deploy, sell, or otherwise make available an AI system that is “intended to interact with consumers” to disclose to consumers that they are interacting with an AI system, unless obvious to a reasonable person.  The bill would also require all AI developers and deployers to issue public statements disclosing the types of high-risk AI systems they develop, modify, or deploy and how they manage algorithmic discrimination risks, with updates within 90 days after modifying any high-risk AI. 

High-Risk AI Developer Requirements.  High-risk AI developers would be required to disclose to deployers information related to harmful or inappropriate uses, training data and data governance measures, performance evaluations, algorithmic discrimination safeguards, and other aspects of high-risk AI systems, along with any other information required to conduct impact assessments or monitor a high-risk AI system’s performance for risks of algorithmic discrimination.  High-risk AI developers would also be required to disclose to the Colorado Attorney General and all known deployers and developers of a high-risk AI system, within 90 days after discovery, any known or foreseeable risk of algorithmic discrimination arising from the high-risk AI system’s intended uses.

High-Risk AI Deployer Requirements.  SB 205 would require high-risk AI deployers to implement a “risk management policy and program” for mitigating algorithmic discrimination, which must be regularly updated over a high-risk AI system’s life cycle and must be reasonable considering the National Institute of Standards and Technology (NIST)’s AI Risk Management Framework or equivalent risk management frameworks.  High-risk AI deployers would also be required to conduct algorithmic discrimination impact assessments for each high-risk AI system in deployment and within 90 days after such AI systems are substantially modified. 

Additionally, high-risk AI deployers would be required to notify consumers of the use of high-risk AI for consequential decisions that affect them, provide consumers with statements disclosing the high-risk AI system’s purposes, data, and components, and provide information regarding consumers’ rights to opt out of profiling for decisions with legal or similarly significant effects under the Colorado Privacy Act.  High-risk AI deployers would also be required to provide consumers with opportunities to (1) correct any incorrect personal data processed by the high-risk AI system and (2) appeal adverse consequential decisions arising from the use of a high-risk AI system, which must allow for human review if technically feasible.  Finally, high-risk AI deployers would also be obligated to disclose incidents of algorithmic discrimination to the Colorado Attorney General within 90 days of discovering the incident.

Comprehensive AI Bills in Perspective.  Colorado’s passage of SB 205 coincides with votes to advance comprehensive AI bills in two separate California legislative committees.  On April 23, the California Assembly Judiciary Committee voted 9-2 to pass AB 2930, a comprehensive AI bill that would regulate the use of automated decision tools.  Mirroring SB 205’s requirements for high-risk AI systems, AB 2930 would impose impact assessment, notice, and disclosure requirements on developers and deployers to mitigate algorithmic discrimination risks.  Also on April 23, the California Senate Government Organization Committee voted 11-0 to pass the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), followed by the Senate Appropriations Committee’s 7-2 vote in favor of that bill on May 6.  While Colorado’s SB 205 and California’s AB 2930 would regulate AI systems based on their use in consequential decision making and address risks of algorithmic discrimination, SB 1047 would regulate AI systems based on their technical capabilities and address risks to public safety.

We are closely monitoring these and related state AI developments as they unfold.  A more detailed summary of California SB 1047 is available here, a summary of key themes in other recent state AI bills is available here, and our overview of recent state synthetic media and generative AI legislation is available here. Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.

Matthew Shapanka

Matthew Shapanka is a strategic policy and regulatory attorney who helps technology companies and other businesses navigate complex, high-stakes legislative, regulatory, and enforcement matters at the intersection of law and politics. Drawing on 15+ years of experience across private practice, the U.S. Senate, state government, and political campaigns, Matt develops comprehensive policy strategies that identify regulatory risks and position clients to shape policy outcomes.

Public Policy and Regulatory Strategy

Matt serves as a strategic advisor to Fortune 200 companies on emerging technology policy, including artificial intelligence regulation, connected and autonomous vehicles, semiconductors, IoT, and national security matters. He translates complex legal and technical issues into actionable legislative and regulatory strategy, building the policy frameworks and advocacy infrastructure that enable clients to influence policy. He develops policy collateral for federal, state, and international advocacy, coordinates multi-stakeholder coalitions, and represents clients before Congress, federal agencies, and state legislative and regulatory bodies.

His technology policy experience includes securing unprecedented Presidential intervention in the $118 billion Qualcomm-Broadcom transaction (for which Covington was recognized as The American Lawyer 2019 “Dealmakers of the Year”), advising Fortune 200 companies on Bureau of Industry and Security connected vehicle rules, and counseling major internet platforms on autonomous vehicle policy across dozens of states.

Matt leads Covington’s state public policy practice, managing complex multistate legislative and regulatory advocacy campaigns. His state-level work includes securing a last-minute amendment to California’s 2023 money transmitter legislation on behalf of a fintech client and representing major technology companies on state AI, autonomous vehicle, and political advertising compliance matters across dozens of jurisdictions.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration under Chairwoman Amy Klobuchar (D-MN), where he negotiated the landmark bipartisan Electoral Count Reform Act – legislation that updated presidential election certification procedures for the first time in nearly 140 years. He also oversaw the Committee’s bipartisan January 6th investigation, developing protocols that resulted in unanimous passage of new Capitol security legislation.

Both in Congress and at Covington, Matt has prepared dozens of corporate executives, nonprofit leaders, academics, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter and strategist who has composed dozens of bills and amendments introduced in Congress and state legislatures, including many that have been enacted into law.

Election and Political Law Compliance and Enforcement

As a member of Covington’s Chambers-ranked (Band 1) Election and Political Law practice, Matt advises businesses, nonprofits, political committees, candidates, and donors on the full range of federal and state political law compliance matters, including:

Election and campaign finance laws
Lobbying disclosure
Government ethics rules
The SEC Pay-to-Play Rule

He also conducts political law due diligence for M&A transactions, counsels major political funders and donors in compliance and enforcement matters, and represents candidates, ballot measure committees, and donors in election disputes and recounts.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA), where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.