This update highlights key mid-year legislative and regulatory developments and builds on our first quarter update covering artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), the Internet of Things (“IoT”), and cryptocurrency and blockchain developments.
I. Federal AI Legislative Developments
In the first session of the 119th Congress, lawmakers rejected a proposed moratorium on state and local enforcement of AI laws and advanced several AI legislative proposals focused on deepfake-related harms. Specifically, on July 1, after weeks of negotiations, the Senate voted 99-1 to strike a proposed 10-year moratorium on state and local enforcement of AI laws from the budget reconciliation package, the One Big Beautiful Bill Act (H.R. 1), which President Trump signed into law. The vote to strike the moratorium followed the collapse of an agreement on revised language that would have shortened the moratorium to 5 years and allowed states to enforce “generally applicable laws” that do not have an “undue or disproportionate effect” on AI, including child online safety, digital replica, and CSAM laws. Congress could technically still consider the moratorium during this session, but the chances of that happening are low given both the political atmosphere and the lack of a must-pass legislative vehicle in which it could be included. See our blog post on this topic for more information.
Additionally, lawmakers continue to focus legislation on deepfakes and intimate imagery. For example, on May 19, President Trump signed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (“TAKE IT DOWN”) Act (H.R. 633 / S. 146) into law, which requires online platforms to establish a notice and takedown process for nonconsensual intimate visual depictions, including certain depictions created using AI. See our blog post on this topic for more information. Meanwhile, members of Congress continued to pursue additional legislation to address deepfake-related harms, such as the STOP CSAM Act of 2025 (S. 1829 / H.R. 3921) and the Disrupt Explicit Forged Images And Non-Consensual Edits (“DEFIANCE”) Act (H.R. 3562 / S. 1837).
II. Federal AI Regulatory Developments
The Trump Administration took significant steps in recent months to advance its AI policy agenda through Executive Orders and the release of its AI Action Plan. On July 23, the White House released the 28-page plan, titled “Winning the Race: America’s AI Action Plan,” with 103 specific AI policy recommendations for “near-term execution by the Federal government.” The AI Action Plan is organized around three pillars: (1) accelerating AI innovation, (2) building American AI infrastructure, and (3) leading in international AI diplomacy and security. On July 23, President Trump also signed three Executive Orders to implement the AI Action Plan’s key priorities: Executive Order 14318 on “Accelerating Federal Permitting of Data Center Infrastructure,” Executive Order 14319 on “Preventing Woke AI in the Federal Government,” and Executive Order 14320 on “Promoting the Export of the American AI Technology Stack.” See our blog post for more information on these developments. Additionally, on June 6, President Trump signed Executive Order 14306 on “Sustaining Select Efforts To Strengthen the Nation’s Cybersecurity and Amending Executive Order 13694 and Executive Order 14144,” which, according to the White House, “refocuses artificial intelligence (AI) cybersecurity efforts towards identifying and managing vulnerabilities, rather than censorship.”
Other parts of the Executive Branch have taken notable steps toward addressing the development and use of AI by the government and industry. For example, in April, the White House Office of Management and Budget issued two memoranda outlining AI risk management and AI procurement requirements for federal agencies, respectively. Further, the Department of Commerce announced a plan on June 3 to “reform” the Biden-era U.S. AI Safety Institute to create the “Center for AI Standards and Innovation” (“CAISI”). According to the Department’s press release, the rebranded CAISI will “serve as industry’s primary point of contact within the U.S. Government to facilitate testing and collaborative research” on commercial AI and will “represent U.S. interests internationally.”
III. State AI Legislative Developments
State lawmakers introduced hundreds of AI bills in the first half of 2025, and at least 61 new state AI laws were enacted across 28 states. Key themes include the following:
- Comprehensive Consumer Protection Frameworks: In Texas, the legislature passed, and Governor Greg Abbott (R) signed, the Texas Responsible AI Governance Act (“TRAIGA”) (HB 149), which, among other requirements, will prohibit the development or deployment of AI systems with the “intent” or “sole intent” to engage in certain activities, such as inciting self-harm. We discuss TRAIGA in this blog post.
- Frontier Model Public Safety: The New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S6953), a frontier model public safety bill that would establish reporting, disclosure, and risk management requirements for “large developers” of frontier AI models. Additionally, on June 17, the Joint California Policy Working Group on AI Frontier Models issued its final report on frontier AI policy, following public feedback on the draft version of the report released in March. More on the final report can be found in this blog post.
- Chatbots and Generative AI: Several states enacted laws regulating the development or deployment of AI-powered chatbots and generative AI systems. For example, Maine’s governor signed LD 1727, which prohibits the use of AI chatbots to “engage in trade and commerce with a consumer” in a manner that may mislead or deceive consumers into believing they are engaging with a human, unless the use of AI is disclosed.
- Synthetic Content and Content Moderation: State legislatures passed over 30 new laws regulating the creation, distribution, or use of synthetic or AI-generated content, including, for example, laws prohibiting the creation or distribution of AI-generated CSAM signed in Arkansas, Colorado, and Texas. Various states, including Connecticut, Tennessee, and North Dakota, also enacted laws prohibiting the distribution of AI-generated intimate imagery without consent. The Governor of Texas also signed several bills into law imposing age verification, takedown, and consent requirements on online platforms and websites that host prohibited AI-generated content or AI tools for generating such content.
- AI in Healthcare: Illinois, Nevada, Oregon, and Texas enacted legislation to regulate the use of AI in healthcare settings. For example, the Governor of Illinois signed the Wellness and Oversight for Psychological Resources Act (HB 1806) into law, which prohibits licensed therapists from using AI to make independent therapeutic decisions, directly interact with clients for therapeutic communication, generate therapeutic recommendations or treatment plans without licensed professional review and approval, or detect emotions or mental states.
IV. Connected & Automated Vehicles
Federal regulators made several significant announcements related to CAVs in recent months. For example, on April 24, Secretary of Transportation Sean Duffy announced the National Highway Traffic Safety Administration’s (“NHTSA”) new Automated Vehicle (“AV”) Framework, which aims to enable the growth of the AV industry and remove government barriers to innovation. The first actions under the framework are intended to modernize the Federal Motor Vehicle Safety Standards (“FMVSS”) to enable commercial deployment of AVs. Relatedly, Secretary Duffy announced that NHTSA will update and streamline its exemption process under Part 555 of NHTSA’s vehicle safety regulations, with the goal of accelerating AV deployment. Part 555 exemptions allow manufacturers to sell vehicles that do not fully comply with the FMVSS, such as vehicles without traditional steering wheels. Because the Part 555 exemption process has historically been challenging and time-intensive for AVs, a streamlined process is expected to promote AV development and adoption. NHTSA granted its second-ever exemption after the reforms to the Part 555 exemption process were announced.
V. Internet of Things
Federal agencies also updated IoT-related guidance in the first half of 2025. In May 2025, the National Institute of Standards and Technology (“NIST”) announced the first revision to NIST IR 8259, Foundational Cybersecurity Activities for IoT Product Manufacturers, which proposes guidance on, among other topics, data management across AI components. Additionally, in June, NIST released an essay describing its proposed updates to NIST SP 800-213, IoT Device Cybersecurity Guidance for the Federal Government: Establishing IoT Device Cybersecurity Requirements, which would address, among other things, cybersecurity risks associated with IoT product adoption and integration.
VI. Cryptocurrency & Blockchain
U.S. lawmakers and regulators continued to focus on reshaping the cryptocurrency landscape through various legislative and regulatory efforts, underscoring bipartisan momentum toward a robust and durable U.S. framework for digital assets. Notably, President Trump signed the Guiding and Establishing National Innovation for U.S. Stablecoins (“GENIUS”) Act (S. 1582) into law, the first federal digital asset law enacted in the United States. The Act imposes federal prudential and customer protection standards on stablecoin issuers, authorizes and integrates state regulatory regimes for certain issuers within the federal regulatory framework, and grants regulatory agencies authority over certain aspects of issuer activity. Congress continued to consider other cryptocurrency legislation as well, and the House passed two digital asset bills: the Digital Asset Market Clarity Act (“CLARITY Act”) (H.R. 3633) and the Anti-CBDC Surveillance State Act (H.R. 1919).
Federal regulatory developments complemented Congress’s efforts to promote cryptocurrency adoption. For example, the Department of Justice disbanded its National Cryptocurrency Enforcement Team, and President Trump signed a Congressional Review Act repeal of the Internal Revenue Service’s decentralized finance (“DeFi”) “broker rule.” Further, the Federal Deposit Insurance Corporation (“FDIC”) replaced prior guidance on crypto-related activities with FIL‑7‑2025, which permits FDIC‑insured banks to engage in a broad array of crypto‑related activities without prior FDIC approval, provided they adequately manage the associated risks. The Securities and Exchange Commission (“SEC”) continued to reassess its approach to crypto, including through public roundtables held by its Crypto Task Force. In April, the SEC issued a statement clarifying that “covered stablecoins” (i.e., stablecoins that maintain a one-to-one peg to the U.S. dollar, are fully backed by reserves, and are redeemable one-for-one for U.S. dollars) are not considered securities under federal securities laws. Additionally, the Federal Reserve announced that it was discontinuing the supervisory program that it had created specifically for crypto activities conducted by banks.
We will continue to update you on meaningful developments in these quarterly updates and across our blogs.