On January 6, 2026, the Federal Communications Commission’s Public Safety and Homeland Security Bureau (the “Bureau”) announced the application window for a new Lead Administrator for the U.S. Cyber Trust Mark Program (the “Program”). The window will be open from January 7, 2026, through January 28, 2026. The previous Lead Administrator, UL LLC (“UL”)…
Continue Reading FCC Opens Application Window for New Cyber Trust Mark Program Lead Administrator
Jennifer Johnson
Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for more than twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies in navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet-connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.
Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.
Washington State AI Task Force Releases AI Policy Recommendations for 2026
On December 1, the Washington State AI Task Force (“Task Force”) released its Interim Report with AI policy recommendations to the Governor and legislature. Established by the legislature in 2024, the Task Force is responsible for evaluating current and potential uses of AI in Washington and recommending regulatory and legislative actions to “ensure responsible AI…
Continue Reading Washington State AI Task Force Releases AI Policy Recommendations for 2026

U.S. Tech Legislative & Regulatory Update – 2025 Mid-Year Update
This update highlights key mid-year legislative and regulatory developments and builds on our first quarter update, covering artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), the Internet of Things (“IoT”), and cryptocurrency and blockchain developments.
I. Federal AI Legislative Developments
In the first session of the 119th Congress, lawmakers rejected a proposed moratorium on state and local enforcement of AI laws and advanced several AI legislative proposals focused on deepfake-related harms. Specifically, on July 1, after weeks of negotiations, the Senate voted 99-1 to strike a proposed 10-year moratorium on state and local enforcement of AI laws from the budget reconciliation package, the One Big Beautiful Bill Act (H.R. 1), which President Trump signed into law. The vote to strike the moratorium follows the collapse of an agreement on revised language that would have shortened the moratorium to 5 years and allowed states to enforce “generally applicable laws,” including child online safety, digital replica, and CSAM laws, that do not have an “undue or disproportionate effect” on AI. Congress could technically still consider the moratorium during this session, but the chances of that happening are low based on both the political atmosphere and the lack of a must-pass legislative vehicle in which it could be included. See our blog post on this topic for more information.
Additionally, lawmakers continue to focus legislation on deepfakes and intimate imagery. For example, on May 19, President Trump signed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (“TAKE IT DOWN”) Act (H.R. 633 / S. 146) into law, which requires online platforms to establish a notice and takedown process for nonconsensual intimate visual depictions, including certain depictions created using AI. See our blog post on this topic for more information. Meanwhile, members of Congress continued to pursue additional legislation to address deepfake-related harms, such as the STOP CSAM Act of 2025 (S. 1829 / H.R. 3921) and the Disrupt Explicit Forged Images And Non-Consensual Edits (“DEFIANCE”) Act (H.R. 3562 / S. 1837).

Continue Reading U.S. Tech Legislative & Regulatory Update – 2025 Mid-Year Update
NIST Welcomes Comments for AI Standards Zero Drafts Project
On July 29, 2025, the National Institute of Standards & Technology (“NIST”) unveiled an outline for preliminary, stakeholder-driven standards, known as a “zero draft,” for AI testing, evaluation, verification, and validation (“TEVV”). This outline is part of NIST’s AI Standards Zero Drafts pilot project, which was announced on March 25, 2025, as we previously reported. The goal is to create a flexible, high-level framework that companies can use to design their own AI testing and validation procedures. Of note, NIST is not prescribing exact methods for testing and validation. Instead, it offers a structure around key terms, lifecycle stages, and guiding principles that align with future international standards. NIST has asked for stakeholder input on the topics, scope, and priorities of the Zero Drafts process, and the comment period is open until September 12, 2025.
The NIST outline breaks AI TEVV into several foundational elements, a non-exhaustive list of which includes:

Continue Reading NIST Welcomes Comments for AI Standards Zero Drafts Project
Trump Administration Issues AI Action Plan and Series of AI Executive Orders
On July 23, the White House released its AI Action Plan, outlining the key priorities of the Trump Administration’s AI policy agenda. In parallel, President Trump signed three AI executive orders directing the Executive Branch to implement the AI Action Plan’s policies on “Preventing Woke AI in the Federal Government,” “Accelerating Federal Permitting of…
Continue Reading Trump Administration Issues AI Action Plan and Series of AI Executive Orders

California Frontier AI Working Group Issues Final Report on Frontier Model Regulation
On June 17, the Joint California Policy Working Group on AI Frontier Models (“Working Group”) issued its final report on frontier AI policy, following public feedback on the draft version of the report released in March. The report describes “frontier models” as the “most capable” subset of foundation models, or a class of general-purpose technologies…
Continue Reading California Frontier AI Working Group Issues Final Report on Frontier Model Regulation

New York Legislature Passes Sweeping AI Safety Legislation
On June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that would establish safeguard, reporting, disclosure, and other requirements for large developers of frontier AI models. If signed into law by Governor Kathy Hochul (D), the RAISE Act would…
Continue Reading New York Legislature Passes Sweeping AI Safety Legislation

OECD Introduces AI Capability Indicators for Policymakers
On June 3, 2025, the OECD introduced a new framework called AI Capability Indicators that compares AI capabilities to human abilities. The framework is intended to help policymakers assess the progress of AI systems and enable informed policy responses to new AI advancements. The indicators are designed to help non-technical policymakers understand the degree of advancement of different AI capabilities. AI researchers, policymakers, and other stakeholder groups, including economists, psychologists, and education specialists, are invited to submit feedback on the current beta framework.

Continue Reading OECD Introduces AI Capability Indicators for Policymakers
FCC Seeks Public Input on Adding Connected Vehicle Technology to the Covered List
On Friday, May 23, the Federal Communications Commission (the “FCC”) released a Public Notice requesting public input on whether certain CAV-related communications equipment and services with connections to Russia and the People’s Republic of China should be added to the “Covered List” – a list maintained by the FCC of communications equipment and services found…
Continue Reading FCC Seeks Public Input on Adding Connected Vehicle Technology to the Covered List

FCC Proposes Changes to Foreign Ownership Rules and Related Filings Processes
Updated June 24, 2025. Originally posted April 30, 2025.
In April, the Federal Communications Commission (“FCC”) adopted a Notice of Proposed Rulemaking (“NPRM”) that proposes to clarify existing definitions in the FCC’s foreign ownership rules and codify certain practices regarding the filing requirements for, and the agency’s processing of, foreign ownership petitions (Petitions…
Continue Reading FCC Proposes Changes to Foreign Ownership Rules and Related Filings Processes