On June 17, the Joint California Policy Working Group on AI Frontier Models (“Working Group”) issued its final report on frontier AI policy, following public feedback on the draft version of the report released in March. The report describes “frontier models” as the “most capable” subset of foundation models, or a class of general-purpose technologies…
Continue Reading California Frontier AI Working Group Issues Final Report on Frontier Model Regulation
Jennifer Johnson
Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for more than twenty years. On technology issues, she collaborates with Covington's global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.
Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.
New York Legislature Passes Sweeping AI Safety Legislation
On June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that would establish safeguard, reporting, disclosure, and other requirements for large developers of frontier AI models. If signed into law by Governor Kathy Hochul (D), the RAISE Act would…
Continue Reading New York Legislature Passes Sweeping AI Safety Legislation
OECD Introduces AI Capability Indicators for Policymakers
On June 3, 2025, the OECD introduced a new framework called AI Capability Indicators that compares AI capabilities to human abilities. The framework is intended to help policymakers, including those without technical backgrounds, assess the progress of different AI capabilities and craft informed policy responses to new AI advancements. AI researchers, policymakers, and other stakeholder groups, including economists, psychologists, and education specialists, are invited to submit feedback on the current beta version of the framework.
Continue Reading OECD Introduces AI Capability Indicators for Policymakers
FCC Seeks Public Input on Adding Connected Vehicle Technology to the Covered List
On Friday, May 23, the Federal Communications Commission (the “FCC”) released a Public Notice requesting public input on whether certain CAV-related communications equipment and services with connections to Russia and the People’s Republic of China should be added to the “Covered List” – a list maintained by the FCC of communications equipment and services found…
Continue Reading FCC Seeks Public Input on Adding Connected Vehicle Technology to the Covered List
FCC Proposes Changes to Foreign Ownership Rules and Related Filings Processes
Updated June 24, 2025. Originally posted April 30, 2025.
In April, the Federal Communications Commission (“FCC”) adopted a Notice of Proposed Rulemaking (“NPRM”) that proposes to clarify existing definitions in the FCC’s foreign ownership rules and codify certain practices regarding the filing requirements for, and the agency’s processing of, foreign ownership petitions (Petitions…
Continue Reading FCC Proposes Changes to Foreign Ownership Rules and Related Filings Processes
U.S. Tech Legislative & Regulatory Update – First Quarter 2025
This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain.
I. Artificial Intelligence
A. Federal Legislative Developments
In the first quarter, members of Congress introduced several AI bills addressing national security, including bills that would encourage the use of AI for border security and drug enforcement purposes. Other AI legislative proposals focused on workforce skills, international investment in critical industries, U.S. AI supply chain resilience, and AI-enabled fraud. Notably, members of Congress from both parties advanced legislation to regulate AI deepfakes and codify the National AI Research Resource, as discussed below.
- Deepfake Regulation: In February, the Senate passed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (“TAKE IT DOWN”) Act (S. 146), following the Senate’s unanimous passage of an earlier version of the bill in 2024. The Act would prohibit the nonconsensual disclosure of AI-generated intimate imagery and require platforms to remove such content published on their platforms. The House version of the TAKE IT DOWN Act (H.R. 633) has been referred to the House Energy & Commerce Committee.
- CREATE AI Act: In March, Reps. Jay Obernolte (R-CA) and Don Beyer (D-VA) re-introduced the Creating Resources for Every American To Experiment with Artificial Intelligence (“CREATE AI”) Act (H.R. 2385), following its introduction and near passage in the Senate last year. The CREATE AI Act would codify the National AI Research Resource (“NAIRR”), with the goal of advancing AI development and innovation by offering AI computational resources, common datasets and repositories, educational tools and services, and AI testbeds to individuals, private entities, and federal agencies. The CREATE AI Act builds on the work of the NAIRR Task Force, established by the National AI Initiative Act of 2020, which issued a final report in January 2023 recommending the establishment of NAIRR.
Continue Reading U.S. Tech Legislative & Regulatory Update – First Quarter 2025
California Frontier AI Working Group Issues Report on Foundation Model Regulation
On March 18, the Joint California Policy Working Group on AI Frontier Models (the “Working Group”) released its draft report on the regulation of foundation models, with the aim of providing an “evidence-based foundation for AI policy decisions” in California that “ensure[s] these powerful technologies benefit society globally while reasonably managing emerging risks.” The Working Group was established by California Governor Gavin Newsom (D) in September 2024, following his veto of the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), authored by California State Senator Scott Wiener (D-San Francisco). The Working Group builds on California’s partnership with Stanford University and the University of California, Berkeley, established by Governor Newsom’s 2023 Executive Order on generative AI.
Noting that “foundation model capabilities have rapidly improved” since the veto of SB 1047 and that California’s “unique opportunity” to shape AI governance “may not remain open indefinitely,” the report assesses transparency, third-party risk assessment, and adverse event reporting requirements as key components for foundation model regulation.
Continue Reading California Frontier AI Working Group Issues Report on Foundation Model Regulation
State Legislatures Consider New Wave of 2025 AI Legislation
State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025. As described further in this blog post, many of these AI legislative proposals fall into several key categories: (1) comprehensive consumer protection legislation similar to the Colorado AI Act, (2) sector-specific legislation on automated decision-making, (3) chatbot regulation, (4) generative AI transparency requirements, (5) AI data center and energy usage requirements, and (6) frontier model public safety legislation. Although these categories represent just a subset of current AI legislative activity, they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.
Continue Reading State Legislatures Consider New Wave of 2025 AI Legislation
Trump Administration Asserts Presidential Authority Over Independent Agencies
Yesterday, the Trump Administration issued an Executive Order titled “Ensuring Accountability for All Agencies” (the EO). The EO asserts Presidential authority over independent agencies, including the Federal Trade Commission (FTC), Federal Communications Commission (FCC), and Securities and Exchange Commission (SEC). While the precise impacts remain to be seen, overall the EO will likely result in greater involvement by the White House in policymaking at independent agencies, both in substance and process.
OIRA Review of Agency Regulations. The EO amends the Clinton Administration-era Executive Order 12866, which established a review process for regulations promulgated by executive branch departments and agencies but excluded independent agencies from that process. The process includes requirements that departments and agencies submit “significant regulatory actions” to the Office of Information and Regulatory Affairs (OIRA) for review before publication in the Federal Register. Executive Order 12866 defines “significant regulatory action” to mean “any regulatory action that is likely to result in a rule that may:”
Continue Reading Trump Administration Asserts Presidential Authority Over Independent Agencies
Trump Administration Seeks Public Comment on AI Action Plan
On February 6, the White House Office of Science & Technology Policy (“OSTP”) and National Science Foundation (“NSF”) issued a Request for Information (“RFI”) seeking public input on the “Development of an Artificial Intelligence Action Plan.” The RFI marks a first step toward the implementation of the Trump Administration’s January 23 Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence” (the “EO”). Specifically, the EO directs Assistant to the President for Science & Technology (and OSTP Director nominee) Michael Kratsios, White House AI & Crypto Czar David Sacks, and National Security Advisor Michael Waltz to “develop and submit to the President an action plan” to achieve the EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance” to “promote human flourishing, economic competitiveness, and national security.”
Continue Reading Trump Administration Seeks Public Comment on AI Action Plan