On August 29, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), marking yet another major development in states’ efforts to regulate AI.  The legislation, which draws on concepts from the White House’s 2023 AI Executive Order (“AI EO”), follows months of high-profile debate and amendments and would establish an expansive AI safety and security regime for developers of “covered models.”  Governor Gavin Newsom (D) has until September 30 to sign or veto the bill. 

If signed into law, SB 1047 would join Colorado’s SB 205—the landmark AI anti-discrimination law passed in May and covered here—as another de facto standard for AI legislation in the United States in the absence of congressional action.  In contrast to Colorado SB 205’s focus on algorithmic discrimination risks for consumers, however, SB 1047 would address AI models that are technically capable of causing or materially enabling “critical harms” to public safety. 

Covered Models.  SB 1047 establishes a two-part definition of “covered models” subject to its safety and security requirements.  First, prior to January 1, 2027, covered models are defined as AI models trained using a quantity of computing power that is both greater than 10²⁶ floating-point operations (“FLOPS”) and valued at more than $100 million.  This computing threshold mirrors the AI EO’s threshold for dual-use foundation models subject to red-team testing and reporting requirements; the financial valuation threshold is designed to exclude models developed by small companies.  Similar to the Commerce Department’s discretion to adjust the AI EO’s computing threshold, California’s Government Operations Agency (“GovOps”) may adjust SB 1047’s computing threshold after January 1, 2027.  By contrast, GovOps may not adjust the valuation threshold, which is indexed to inflation and must be “reasonably assessed” by the developer “using the average market prices of cloud compute at the start of training.”

SB 1047 also applies to “covered model derivatives,” defined as: (1) “fine-tuned” covered models; (2) modified and unmodified copies of covered models; and (3) copies of covered models combined with other software.  Prior to January 1, 2027, fine-tuned covered model derivatives must be fine-tuned using a quantity of computing power of at least 3 × 10²⁵ FLOPS that is valued at more than $10 million.  After January 1, 2027, GovOps may adjust the computing threshold.

Critical Harms & AI Safety Incidents.  SB 1047 would require AI developers to report “AI safety incidents,” or specific events that increase the risk of critical harms, to the California Attorney General within 72 hours after discovery.  Critical harms are defined as mass casualties or at least $500 million in damages caused or materially enabled by a covered model that: (1) creates or uses chemical, biological, radiological, or nuclear (“CBRN”) weapons; (2) conducts, or provides instructions for, cyberattacks on critical infrastructure; or (3) engages in unsupervised acts that would be criminal if done by a human.  Critical harms also include other grave harms to public safety and security of comparable severity.

“AI safety incidents” are defined as incidents that demonstrably increase the risk that critical harms will occur by means of the following: (1) a covered model autonomously engaging in behavior not requested by a user; (2) theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of a covered model’s model weights; (3) critical failures of technical or administrative controls; or (4) unauthorized uses of a covered model to cause or materially enable critical harms.

Pre-Training Developer Requirements.  SB 1047 would also impose requirements on developers prior to the start of training a covered model, including:

  • Administrative, Technical, & Physical Cybersecurity Protections.  Protections must be reasonably designed to prevent unauthorized access, misuse, or unsafe modifications.
  • Full Shutdown Capability.  Developers must implement the capability to promptly enact a full shutdown of each covered model.
  • Safety & Security Protocols.  Developers must implement protocols for managing risks across each covered model’s life cycle, including procedures for avoiding critical harms, compliance requirements that can be verified by third parties, testing for unreasonable risks of critical harms, and conditions for enacting a full shutdown, among other things.  Developers must designate senior personnel responsible for ensuring compliance and retain and disclose protocols to the public and the California Attorney General.

Pre-Deployment Developer Requirements.  SB 1047 would impose separate requirements for developers prior to using a covered model or making a covered model available for commercial or public use, including:

  • Critical Harm Assessments.  Developers must assess whether each covered model is reasonably capable of critical harms.  The tests and results in these assessments must be retained for as long as the model is available, plus five years.  If unreasonable risks of critical harm are found, developers may not use or provide the covered model.
  • Critical Harm Safeguards & Attribution.  Developers must take reasonable care to implement appropriate safeguards to prevent critical harms.  Additionally, developers must take reasonable care to ensure that each covered model’s actions and resulting critical harms can be accurately and reliably attributed to the covered model.

Ongoing Developer Requirements.  Finally, SB 1047 would require developers to annually reevaluate their policies, protections, and procedures, and impose other ongoing requirements:

  • Third-Party Audits.  Starting January 1, 2026, developers must annually retain third-party auditors to perform SB 1047 compliance audits.  Auditors must produce certified reports with compliance assessments, instances of noncompliance, recommendations to improve compliance, and assessments of developers’ internal controls.  Developers must retain, publish, and disclose these reports to the California Attorney General.
  • Compliance Statements.  Within 30 days after using or making a covered model available and annually thereafter, developers must submit statements of SB 1047 compliance to the California Attorney General, including assessments of potential critical harms and assessments of the sufficiency of safety and security protocols.
  • AI Incident Reporting.  As mentioned above, developers must report AI safety incidents affecting covered models to the California Attorney General within 72 hours after learning of the incident or developing a reasonable belief that an incident occurred.
  • Whistleblower Protections, Notice, & Reporting.  Developers are prohibited from retaliating against employees who disclose information to the California Attorney General indicating noncompliance or unreasonable critical harm risks.  Developers must notify employees of their rights and responsibilities under SB 1047 and provide internal processes for anonymously disclosing information on noncompliance to the developer.

Future Regulations and Guidance.  SB 1047 requires GovOps to issue, by January 1, 2027, new regulations on the computational thresholds for covered models and auditing requirements for third-party auditors, in addition to guidance for preventing unreasonable risks of critical harms.  The regulations and guidance must be approved by the “Board of Frontier Models,” a nine-member group of AI and safety experts established by SB 1047. 

SB 1047 is just one of over a dozen AI bills passed by the California legislature last month covering a range of AI-related topics, including election deepfakes, generative AI content and training data, and digital replicas.  The passage of SB 1047 also comes as Colorado lawmakers embark on a revision process for SB 205, as we have covered here.

*                      *                      *

Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.

Matthew Shapanka

Matthew Shapanka is a strategic policy and regulatory attorney who helps technology companies and other businesses navigate complex, high-stakes legislative, regulatory, and enforcement matters at the intersection of law and politics. Drawing on 15+ years of experience across private practice, the U.S. Senate, state government, and political campaigns, Matt develops comprehensive policy strategies that identify regulatory risks and position clients to shape policy outcomes.

Public Policy and Regulatory Strategy

Matt serves as a strategic advisor to Fortune 200 companies on emerging technology policy, including artificial intelligence regulation, connected and autonomous vehicles, semiconductors, IoT, and national security matters. He translates complex legal and technical issues into actionable legislative and regulatory strategy, building the policy frameworks and advocacy infrastructure that enable clients to influence policy. He develops policy collateral for federal, state, and international advocacy, coordinates multi-stakeholder coalitions, and represents clients before Congress, federal agencies, and state legislative and regulatory bodies.

His technology policy experience includes securing unprecedented Presidential intervention in the $118 billion Qualcomm-Broadcom transaction (for which Covington was recognized as The American Lawyer 2019 “Dealmakers of the Year”), advising Fortune 200 companies on Bureau of Industry and Security connected vehicle rules, and counseling major internet platforms on autonomous vehicle policy across dozens of states.

Matt leads Covington’s state public policy practice, managing complex multistate legislative and regulatory advocacy campaigns. His state-level work includes securing a last-minute amendment to California’s 2023 money transmitter legislation on behalf of a fintech client and representing major technology companies on state AI, autonomous vehicle, and political advertising compliance matters across dozens of jurisdictions.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration under Chairwoman Amy Klobuchar (D-MN), where he negotiated the landmark bipartisan Electoral Count Reform Act – legislation that updated presidential election certification procedures for the first time in nearly 140 years. He also oversaw the Committee’s bipartisan January 6th investigation, developing protocols that resulted in unanimous passage of new Capitol security legislation.

Both in Congress and at Covington, Matt has prepared dozens of corporate executives, nonprofit leaders, academics, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter and strategist who has composed dozens of bills and amendments introduced in Congress and state legislatures, including many that have been enacted into law.

Election and Political Law Compliance and Enforcement

As a member of Covington’s Chambers-ranked (Band 1) Election and Political Law practice, Matt advises businesses, nonprofits, political committees, candidates, and donors on the full range of federal and state political law compliance matters, including:

  • Election and campaign finance laws
  • Lobbying disclosure
  • Government ethics rules
  • The SEC Pay-to-Play Rule

He also conducts political law due diligence for M&A transactions, counsels major political funders and donors in compliance and enforcement matters, and represents candidates, ballot measure committees, and donors in election disputes and recounts.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA), where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.