Cybersecurity

On January 6, 2026, the Federal Communications Commission’s Public Safety and Homeland Security Bureau (the “Bureau”) announced the application window for a new Lead Administrator for the U.S. Cyber Trust Mark Program (the “Program”).  The window will be open from January 7, 2026, through January 28, 2026.  The previous Lead Administrator, UL LLC (“UL

Continue Reading FCC Opens Application Window for New Cyber Trust Mark Program Lead Administrator

As the UK Government has recognized, cyber incidents, such as those affecting Jaguar Land Rover, Marks and Spencer, Royal Mail, and the British Library, are costing UK businesses billions annually and causing severe disruption. The Government views cybersecurity as a critical enabler of economic growth (“we cannot have growth without stability”), and considers that the current laws have “fallen out of date and are insufficient to tackle the cyber threats faced by the UK.” Accordingly, the UK Government this week published its long-awaited Cyber Security and Resilience Bill (the “Bill”), which will amend the existing Network and Information Systems Regulations 2018 (the “NIS Regulations”) and grant new powers to regulators and the Government in relation to cybersecurity.

The NIS Regulations are the UK’s pre-Brexit implementation of Directive (EU) 2016/1148 (the “NIS Directive”) and establish a “horizontal” cybersecurity regulatory framework covering essential services in five sectors (transport, energy, drinking water, health, and digital infrastructure) and certain digital services (online marketplaces, online search engines, and cloud computing services). EU legislators replaced the NIS Directive in 2022 with the “NIS2” Directive, which Member States were required to transpose into national law by October 2024, although many have yet to do so. See our post on NIS2 here for an overview of its requirements.

The Bill is the UK’s effort at modernizing the framework originally set out in the NIS Directive. In its current form, the Bill will:

  • Significantly expand the scope of the NIS Regulations—to cover, among other things, data centers and managed service providers—and impose additional substantive obligations on covered organizations.
  • Increase potential fines—up to GBP 17m or 4% of the worldwide turnover of an undertaking—and extend the powers of competent authorities to share information with one another, issue guidance, and take enforcement action.
  • Establish a framework for future changes to the NIS Regulations, mechanisms for competent authorities to impose specific cybersecurity requirements on covered organizations, and greater Government direction of cybersecurity matters.

Below, we set out further detail on five major changes in UK cybersecurity regulation arising from the Bill.

Continue Reading Five major changes to the regulation of cybersecurity in the UK under the Cyber Security and Resilience Bill

On September 29, California Governor Gavin Newsom (D) signed into law SB 53, the Transparency in Frontier Artificial Intelligence Act (“TFAIA”), establishing public safety regulations for developers of “frontier models,” or large foundation AI models trained using massive amounts of computing power.  TFAIA is the first frontier model safety legislation in the country to

Continue Reading California Governor Signs Landmark AI Safety Legislation

The EU e-evidence Regulation and Directive, which establish a regime for law enforcement authorities (“LEAs”) in one Member State to issue legally-binding demands for data from certain types of providers established in other Member States, will come into effect on 18 August 2026 (our post on the specific requirements of the Regulation and Directive is available here). On 28 July 2025, the European Commission adopted an Implementing Regulation (“IR”) setting out the technical specifications for the decentralized communications system that LEAs and covered service providers must use when, among other things, issuing and responding to European Production Orders (“EPOs”) and European Preservation Orders (“EPrOs”) under the e-evidence Regulation.

Continue Reading European Commission adopts technical standards for the decentralized communication system to be used under the forthcoming e-evidence Regulation

This update highlights key mid-year legislative and regulatory developments and builds on our first quarter update related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), Internet of Things (“IoT”), and cryptocurrencies and blockchain developments.

I. Federal AI Legislative Developments

In the first session of the 119th Congress, lawmakers rejected a proposed moratorium on state and local enforcement of AI laws and advanced several AI legislative proposals focused on deepfake-related harms.  Specifically, on July 1, after weeks of negotiations, the Senate voted 99-1 to strike a proposed 10-year moratorium on state and local enforcement of AI laws from the budget reconciliation package, the One Big Beautiful Bill Act (H.R. 1), which President Trump signed into law.  The vote to strike the moratorium follows the collapse of an agreement on revised language that would have shortened the moratorium to 5 years and allowed states to enforce “generally applicable laws,” including child online safety, digital replica, and CSAM laws, that do not have an “undue or disproportionate effect” on AI.  Congress could technically still consider the moratorium during this session, but the chances of that happening are low based on both the political atmosphere and the lack of a must-pass legislative vehicle in which it could be included.  See our blog post on this topic for more information.

Additionally, lawmakers continue to focus legislation on deepfakes and intimate imagery.  For example, on May 19, President Trump signed into law the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (“TAKE IT DOWN”) Act (H.R. 633 / S. 146), which requires online platforms to establish a notice-and-takedown process for nonconsensual intimate visual depictions, including certain depictions created using AI.  See our blog post on this topic for more information.  Meanwhile, members of Congress continued to pursue additional legislation to address deepfake-related harms, such as the STOP CSAM Act of 2025 (S. 1829 / H.R. 3921) and the Disrupt Explicit Forged Images And Non-Consensual Edits (“DEFIANCE”) Act (H.R. 3562 / S. 1837).

Continue Reading U.S. Tech Legislative & Regulatory Update – 2025 Mid-Year Update

On July 23, the White House released its AI Action Plan, outlining the key priorities of the Trump Administration’s AI policy agenda.  In parallel, President Trump signed three AI executive orders directing the Executive Branch to implement the AI Action Plan’s policies on “Preventing Woke AI in the Federal Government,” “Accelerating Federal Permitting of

Continue Reading Trump Administration Issues AI Action Plan and Series of AI Executive Orders

On June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that would establish safeguard, reporting, disclosure, and other requirements for large developers of frontier AI models.  If signed into law by Governor Kathy Hochul (D), the RAISE Act would

Continue Reading New York Legislature Passes Sweeping AI Safety Legislation

This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain.

I. Artificial Intelligence

A. Federal Legislative Developments

In the first quarter, members of Congress introduced several AI bills addressing national security, including bills that would encourage the use of AI for border security and drug enforcement purposes.  Other AI legislative proposals focused on workforce skills, international investment in critical industries, U.S. AI supply chain resilience, and AI-enabled fraud.  Notably, members of Congress from both parties advanced legislation to regulate AI deepfakes and codify the National AI Research Resource, as discussed below.

  • CREATE AI Act:  In March, Reps. Jay Obernolte (R-CA) and Don Beyer (D-VA) re-introduced the Creating Resources for Every American To Experiment with Artificial Intelligence (“CREATE AI”) Act (H.R. 2385), following its introduction and near passage in the Senate last year.  The CREATE AI Act would codify the National AI Research Resource (“NAIRR”), with the goal of advancing AI development and innovation by offering AI computational resources, common datasets and repositories, educational tools and services, and AI testbeds to individuals, private entities, and federal agencies.  The CREATE AI Act builds on the work of the NAIRR Task Force, established by the National AI Initiative Act of 2020, which issued a final report in January 2023 recommending the establishment of NAIRR.

Continue Reading U.S. Tech Legislative & Regulatory Update – First Quarter 2025

Updated December 12, 2024. Originally Posted December 10, 2024.

On December 4, 2024, the Federal Communications Commission (the “Commission”) announced that it had selected UL Solutions to serve as the Lead Administrator for its Internet of Things Cybersecurity Labeling Program (the “IoT Labeling Program”).  The Commission also conditionally approved UL Solutions as a Cybersecurity

Continue Reading FCC Takes Next Steps Towards U.S. Cyber Trust Mark