On April 2, the California Senate Judiciary Committee held a hearing on the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) and favorably reported the bill in a 9-0 vote (with 2 members not voting).  The vote marks a major step toward comprehensive artificial intelligence (AI) regulation in a state that is home to both Silicon Valley and the nation’s first comprehensive privacy law.

This legislation would require developers of large AI models to implement certain safeguards before training and deploying those models, and to report safety incidents involving AI technologies.  The bill would give the California Attorney General civil enforcement authority over violations and establish a new “Frontier Model Division” within the Department of Technology to aid enforcement. 

At the hearing, witnesses—including Encode Justice, the Center for AI Safety, and Economic Security California—and legislators praised the bill’s goal of regulating large AI models while also expressing concerns about the feasibility of enforcement and potential effects on AI innovation.  The Chamber of Progress and California Chamber of Commerce (CalChamber) testified in opposition to the bill.  A coalition of advocacy and industry groups, led by CalChamber, has also signed a letter opposing the bill.

Covered Models.  Mirroring the White House’s 2023 Executive Order, SB 1047 would regulate developers of “covered models” trained on computers with processing power above certain thresholds, while also covering models of “similar or greater performance.”  Developers would also be prohibited from training or deploying a covered model that presents an unreasonable risk of “critical harm,” such as the creation or use of weapons of mass destruction, cybersecurity attacks causing catastrophic damages (greater than $500 million), activities undertaken by AI that cause mass casualties or catastrophic damages (greater than $500 million) and that would be criminal conduct if committed by humans, or other severe threats to public safety.

AI Developer Pre-Training Requirements.  SB 1047 would establish a set of requirements for developers of covered models that apply before a covered model is trained, including:  

  • Positive Safety Determinations.  Developers would be required to assess whether a model will have lower performance than covered models and will lack “hazardous capabilities.”  Models that receive a positive safety determination would be exempt from the bill’s requirements.
  • Protections & Safeguards.  Developers would be required to implement cybersecurity protections against misuse, ensure models can be fully shut down, and follow industry best practices and guidance from NIST and the Frontier Model Division.
  • Safety & Security Protocols.  Developers would be required to implement, for each covered model, a “safety and security protocol” with assurances of safeguards, the requirements that apply to the developer, and procedures to test the model’s safety.

AI Developer Pre-Deployment Requirements.  After training a covered model, SB 1047 would require developers to perform “capability testing” to assess whether a positive safety determination is warranted.  If not, developers would be required to implement safeguards that prevent harmful uses and ensure a model’s actions and “resulting critical harms can be accurately and reliably attributed” to the model and responsible users.

AI Developer Ongoing Requirements.  SB 1047 would also establish ongoing obligations for developers, including annual reviews of safety and security protocols, annual certifications of compliance to the Frontier Model Division, periodic reviews of procedures, policies, and safeguards, and reporting of “AI safety incidents” within 72 hours of learning of the incident.

Whistleblower Protections.  SB 1047 would prohibit developers from preventing employees from disclosing information to the California Attorney General indicating a developer’s noncompliance, or from retaliating against employees who do so. 

SB 1047 has a long way to go before becoming law.  Should it be enacted, however, it could—like California’s comprehensive privacy legislation before it—become the de facto standard for AI regulation in the United States, filling the void created in the absence of comprehensive federal AI legislation. 

We are closely monitoring these and related state AI developments as they unfold.  A summary of key themes in recent state AI bills is available here, along with our overview of recent state synthetic media and generative AI legislation here.  We will continue to provide updates on meaningful developments related to artificial intelligence and technology across our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs.

Holly Fechner

Holly Fechner advises clients on complex public policy matters that combine legal and political opportunities and risks. She leads teams that represent companies, entities, and organizations in significant policy and regulatory matters before Congress and the Executive Branch.

She is a co-chair of Covington’s Technology Industry Group and a member of the Covington Political Action Committee board of directors.

Holly works with clients to:

Develop compelling public policy strategies
Research law and draft legislation and policy
Draft testimony, comments, fact sheets, letters and other documents
Advocate before Congress and the Executive Branch
Form and manage coalitions
Develop communications strategies

She is the Executive Director of Invent Together and a visiting lecturer at the Harvard Kennedy School of Government. She serves on the board of directors of the American Constitution Society.

Holly served as Policy Director for Senator Edward M. Kennedy (D-MA) and Chief Labor and Pensions Counsel for the Senate Health, Education, Labor & Pensions Committee.

She received The American Lawyer, “Dealmaker of the Year” award in 2019. The Hill named her a “Top Lobbyist” from 2013 to the present, and she has been ranked by Chambers USA – America’s Leading Business Lawyers from 2012 to the present. One client noted to Chambers: “Holly is an exceptional attorney who excels in government relations and policy discussions. She has an incisive analytical skill set which gives her the capability of understanding extremely complex legal and institutional matters.” According to another client surveyed by Chambers, “Holly is incredibly intelligent, effective and responsive. She also leads the team in a way that brings out everyone’s best work.”

Matthew Shapanka

Matthew Shapanka is a strategic policy and regulatory attorney who helps technology companies and other businesses navigate complex, high-stakes legislative, regulatory, and enforcement matters at the intersection of law and politics. Drawing on 15+ years of experience across private practice, the U.S. Senate, state government, and political campaigns, Matt develops comprehensive policy strategies that identify regulatory risks and position clients to shape policy outcomes.

Public Policy and Regulatory Strategy

Matt serves as a strategic advisor to Fortune 200 companies on emerging technology policy, including artificial intelligence regulation, connected and autonomous vehicles, semiconductors, IoT, and national security matters. He translates complex legal and technical issues into actionable legislative and regulatory strategy, building the policy frameworks and advocacy infrastructure that enable clients to influence policy. He develops policy collateral for federal, state, and international advocacy, coordinates multi-stakeholder coalitions, and represents clients before Congress, federal agencies, and state legislative and regulatory bodies.

His technology policy experience includes securing unprecedented Presidential intervention in the $118 billion Qualcomm-Broadcom transaction (for which Covington was recognized as The American Lawyer 2019 “Dealmakers of the Year”), advising Fortune 200 companies on Bureau of Industry and Security connected vehicle rules, and counseling major internet platforms on autonomous vehicle policy across dozens of states.

Matt leads Covington’s state public policy practice, managing complex multistate legislative and regulatory advocacy campaigns. His state-level work includes securing a last-minute amendment to California’s 2023 money transmitter legislation on behalf of a fintech client and representing major technology companies on state AI, autonomous vehicle, and political advertising compliance matters across dozens of jurisdictions.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration under Chairwoman Amy Klobuchar (D-MN), where he negotiated the landmark bipartisan Electoral Count Reform Act – legislation that updated presidential election certification procedures for the first time in nearly 140 years. He also oversaw the Committee’s bipartisan January 6th investigation, developing protocols that resulted in unanimous passage of new Capitol security legislation.

Both in Congress and at Covington, Matt has prepared dozens of corporate executives, nonprofit leaders, academics, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter and strategist who has composed dozens of bills and amendments introduced in Congress and state legislatures, including many that have been enacted into law.

Election and Political Law Compliance and Enforcement

As a member of Covington’s Chambers-ranked (Band 1) Election and Political Law practice, Matt advises businesses, nonprofits, political committees, candidates, and donors on the full range of federal and state political law compliance matters, including:

Election and campaign finance laws
Lobbying disclosure
Government ethics rules
The SEC Pay-to-Play Rule

He also conducts political law due diligence for M&A transactions, counsels major political funders and donors in compliance and enforcement matters, and represents candidates, ballot measure committees, and donors in election disputes and recounts.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA), where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.