With most state legislative sessions across the country adjourned or winding down without enacting significant artificial intelligence legislation, Colorado and California continue their steady drive to adopt comprehensive laws regulating the development and deployment of AI systems. 

Colorado

Although Colorado’s AI law (SB 205), which Governor Jared Polis (D) signed into law in May, does not take effect until February 1, 2026, lawmakers have already begun a process for refining the nation’s first comprehensive AI law.  As we described here, the new law will require developers and deployers of “high-risk” AI systems to comply with certain requirements in order to mitigate risks of algorithmic discrimination. 

On June 13, Governor Polis, Attorney General Phil Weiser (D), and Senate Majority Leader Robert Rodriguez (D) issued a public letter announcing a “process to revise” the new law before it even takes effect and “minimize unintended consequences associated with its implementation.”  The revision process will address concerns that the high cost of compliance will adversely affect “home grown businesses” in Colorado, including through “barriers to growth and product development, job losses, and a diminished capacity to raise capital.”

The letter proposes “a handful of specific areas” for revision, including:

  • Refining SB 205’s definition of AI systems to focus on “the most high-risk systems” in order to align with federal measures and frameworks in states with substantial technology sectors.  This goal aligns with the officials’ call for “harmony across any regulatory framework adopted by states” to “limit the burden associated with a multi-state compliance scheme that deters investment and hamstrings small technology firms.”  The officials add that they “remain open to delays in the implementation” of the new law “to ensure such harmonization.”  
  • Narrowing SB 205’s requirements to focus on developers of high-risk systems and avoid regulating “small companies that may deploy AI within third-party software that they use in the ordinary course of business.”  This goal addresses concerns of Colorado businesses that the new law could “inadvertently impose prohibitively high costs” on AI deployers.
  • Shifting from a “proactive disclosure regime” to a “traditional enforcement regime managed by the Attorney General investigating matters after the fact.”  This goal also focuses on protecting Colorado’s small businesses from prohibitively high costs that could deter investment and hamper Colorado’s technology sector.

The process is designed to “complement” the work of the AI Impact Task Force established by HB 1468, which was signed into law on June 6.  The Task Force is charged with recommending definitions, requirements, codes, benchmarks, and best practices related to algorithmic discrimination and AI systems.  The Task Force includes Attorney General Weiser, whose office is granted rulemaking authority under SB 205.

The letter also follows Governor Polis’s May 17 signing statement, which expressed concerns about the “impact this law may have on an industry that is fueling critical technological advancements” and encouraged Colorado lawmakers to “work closely with stakeholders” to “amend this bill to conform with evidence based findings and recommendations for the regulation of this industry.”

Although it is too early to forecast the outcome of the revision process for SB 205, the goals set out by policymakers could significantly scale back the law’s disclosure requirements for entities that deploy AI systems.  At the same time, Colorado officials have not signaled a willingness to ease requirements for AI developers, or to modify requirements that align with approaches taken by other states.  In their public letter, the Governor, Attorney General, and Senate Majority Leader committed to “continued robust stakeholder feedback” throughout the revision process, which should give industry additional opportunities to weigh in on Colorado’s AI regulatory framework before SB 205 takes effect.

California

California lawmakers continue to advance dozens of AI bills that address a range of issues, from deceptive election deepfakes to potential “hazardous capabilities” of the most powerful AI models. 

Automated Decision Tools and Algorithmic Discrimination.  On May 21, AB 2930 passed the California Assembly on a 50-14-16 vote and was ordered to the Senate.  Similar to Colorado’s AI law, AB 2930 would impose impact assessment, notice, and disclosure requirements on developers and deployers of “automated decision tools” used to make “consequential decisions” about consumers.  If enacted, the bill would take effect on January 1, 2026, one month before the effective date of Colorado’s SB 205. 

The Safe & Secure Innovation for Frontier AI Models Act.  On May 21, the Safe & Secure Innovation for Frontier AI Models Act (SB 1047) passed the California Senate on a 32-1-7 vote and was ordered to the Assembly.  The bill would impose safety testing and incident reporting requirements on AI models trained on a quantity of computing power greater than 10²⁶ floating-point operations, where the cost of that computing power exceeds $100,000,000.  SB 1047 would also require developers of covered models to implement various safeguards, including “kill switches” and cybersecurity protections.  We previously covered SB 1047 on our blog here.

Provenance, Authenticity, and Watermarking Standards.  On May 22, AB 3211 passed the California Assembly on a 62-0-18 vote and was ordered to the Senate.  The bill would require generative AI providers to ensure that synthetic content produced or significantly modified by their generative AI systems contains “imperceptible and maximally indelible” watermarks.  The bill would also require generative AI providers to (1) conduct red-team testing to ensure that watermarks cannot be easily removed, (2) make publicly available “watermark decoders” that allow individuals to assess the provenance of AI-generated content, and (3) report material vulnerabilities or failures in generative AI systems related to the inclusion or removal of watermarks.  AB 3211 would also require “large online platforms” to label whether content on their platforms is synthetic or nonsynthetic and to detect and label synthetic content that lacks a watermark.  We previously summarized other states’ approaches to regulating synthetic content and generative AI here.

The Defending Democracy from Deepfake Deception Act of 2024.  On May 22, the Defending Democracy from Deepfake Deception Act (AB 2655) passed the California Assembly on a 56-1-23 vote and was ordered to the Senate.  The bill would require large online platforms to use state-of-the-art tools to detect materially deceptive content, including deepfakes and chatbots, on their platforms.  Large online platforms would also be required to block and prevent the posting or sending of materially deceptive content about candidates between 120 days before an election and election day or, if the content depicts election officials, between 120 days before and 60 days after an election.  For materially deceptive content posted outside those time periods, or that appears within advertisements or election communications, platforms would be required to detect and label such content as inauthentic, fake, or false.

Given the recent progress on several AI bills, California lawmakers appear to be coalescing around core pillars of a potential comprehensive AI regulatory regime for developers, deployers, and online platforms.  Although it is not certain which bills, if any, will pass by the legislature’s scheduled adjournment on August 31, the breadth of pending AI legislation highlights potential key areas of focus for future legislative sessions: algorithmic discrimination, public safety, generative AI tools, and AI-generated election content online.  

*                      *                      *

Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.

Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics, advising clients on important legislative, regulatory and enforcement matters before Congress, state legislatures, and government agencies that present significant legal, political, and business opportunities and risks.

Drawing on more than 15 years of experience on Capitol Hill, private practice, state government, and political campaigns, Matt develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies. He regularly counsels businesses—especially technology companies—on matters involving intellectual property, national security, and regulation of critical and emerging technologies like artificial intelligence and autonomous vehicles.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee, particularly federal election and campaign finance law, Federal Election Commission nominations, and oversight of the legislative branch, including U.S. Capitol security after the January 6, 2021 attack and the rules and procedures governing the Senate. Most significantly, Matt led the Committee’s staff work on the Electoral Count Reform Act, a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections, and the Committee’s joint bipartisan investigation (with the Homeland Security Committee) into the security planning and response to the January 6th attack.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory policy matters and managing state advocacy efforts.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, and the election and political laws of states and municipalities across the country.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA) as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

Andrew Longhi

Andrew Longhi advises national and multinational companies across industries on a wide range of regulatory, compliance, and enforcement matters involving data privacy, telecommunications, and emerging technologies.

Andrew’s practice focuses on advising clients on how to navigate the rapidly evolving legal landscape of state, federal, and international data protection laws. He proactively counsels clients on the substantive requirements introduced by new laws and shifting enforcement priorities. In particular, Andrew routinely supports clients in their efforts to launch new products and services that implicate the laws governing the use of data, connected devices, biometrics, and telephone and email marketing.

Andrew assesses privacy and cybersecurity risk as a part of diligence in complex corporate transactions where personal data is a key asset or data processing issues are otherwise material. He also provides guidance on generative AI issues, including privacy, Section 230, age-gating, product liability, and litigation risk, and has drafted standards and guidelines for large language models to follow. Andrew focuses on providing risk-based guidance that can keep pace with evolving legal frameworks.