This update highlights key mid-year legislative and regulatory developments related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), the Internet of Things (“IoT”), and cryptocurrency and blockchain, building on our first quarter update.

I. Federal AI Legislative Developments

In the first session of the 119th Congress, lawmakers rejected a proposed moratorium on state and local enforcement of AI laws and advanced several AI legislative proposals focused on deepfake-related harms.  Specifically, on July 1, after weeks of negotiations, the Senate voted 99-1 to strike a proposed 10-year moratorium on state and local enforcement of AI laws from the budget reconciliation package, the One Big Beautiful Bill Act (H.R. 1), which President Trump signed into law.  The vote to strike the moratorium follows the collapse of an agreement on revised language that would have shortened the moratorium to 5 years and allowed states to enforce “generally applicable laws,” including child online safety, digital replica, and CSAM laws, that do not have an “undue or disproportionate effect” on AI.  Congress could technically still consider the moratorium during this session, but the chances of that happening are low based on both the political atmosphere and the lack of a must-pass legislative vehicle in which it could be included.  See our blog post on this topic for more information.

Additionally, lawmakers continue to focus legislation on deepfakes and intimate imagery.  For example, on May 19, President Trump signed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (“TAKE IT DOWN”) Act (H.R. 633 / S. 146) into law, which requires online platforms to establish a notice and takedown process for nonconsensual intimate visual depictions, including certain depictions created using AI.  See our blog post on this topic for more information.  Meanwhile, members of Congress continued to pursue additional legislation to address deepfake-related harms, such as the STOP CSAM Act of 2025 (S. 1829 / H.R. 3921) and the Disrupt Explicit Forged Images And Non-Consensual Edits (“DEFIANCE”) Act (H.R. 3562 / S. 1837).

II. Federal AI Regulatory Developments

The Trump Administration took significant steps to advance its AI policy agenda in recent months through Executive Orders.  On July 23, the White House released its 28-page AI Action Plan, titled “Winning the Race: America’s AI Action Plan,” with 103 specific AI policy recommendations for “near-term execution by the Federal government.”  The AI Action Plan is organized under three pillars: (1) accelerating AI innovation, (2) building American AI infrastructure, and (3) leading in international AI diplomacy and security.  On July 23, President Trump also signed three Executive Orders – Executive Order 14318 on “Accelerating Federal Permitting of Data Center Infrastructure,” Executive Order 14319 on “Preventing Woke AI in the Federal Government,” and Executive Order 14320 on “Promoting the Export of the American AI Technology Stack” – to implement the AI Action Plan’s key priorities.  See our blog post for more information on these developments.  Additionally, on June 6, President Trump signed Executive Order 14306 on “Sustaining Select Efforts To Strengthen the Nation’s Cybersecurity and Amending Executive Order 13694 and Executive Order 14144,” which, according to the White House, “refocuses artificial intelligence (AI) cybersecurity efforts towards identifying and managing vulnerabilities, rather than censorship.”

Other parts of the Executive Branch have taken notable steps towards addressing the development and use of AI by the government and industry.  For example, in April, the White House Office of Management and Budget issued two memoranda, which outline AI risk management and procurement requirements, respectively.  Further, the Department of Commerce announced a plan on June 3 to “reform” the Biden-era U.S. AI Safety Institute to create the “Center for AI Standards and Innovation” (“CAISI”).  According to the Department’s press release, the rebranded CAISI will “serve as industry’s primary point of contact within the U.S. Government to facilitate testing and collaborative research” on commercial AI and will “represent U.S. interests internationally.”

III. State AI Legislative Developments

State lawmakers proposed hundreds of AI bills in the first half of 2025, and states enacted at least 61 new AI laws across 28 states.  Key themes include the following:

• Comprehensive Consumer Protection Frameworks:  In Texas, the legislature passed, and Governor Greg Abbott (R) signed, the Texas Responsible AI Governance Act (“TRAIGA”) (HB 149), which will prohibit the development or deployment of AI systems with the “intent” or “sole intent” to engage in certain activities, such as inciting self-harm, among other requirements.  We discuss TRAIGA in this blog post.
• Frontier Model Public Safety:  The New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S6953), a frontier model public safety bill that would establish reporting, disclosure, and risk management requirements for “large developers” of frontier AI models.  Additionally, on June 17, the Joint California Policy Working Group on AI Frontier Models issued its final report on frontier AI policy, following public feedback on the draft version of the report released in March.  More on the final report can be found in this blog post.
• Chatbot and Generative AI:  Several states enacted laws regulating the development or deployment of AI-powered chatbots and generative AI systems.  For example, Maine’s governor signed LD 1727, prohibiting the use of AI chatbots to “engage in trade and commerce with a consumer” in a manner that may mislead or deceive consumers into believing they are engaging with a human, unless the use of AI is disclosed.
• Synthetic Content and Content Moderation:  State legislatures passed over 30 new laws to regulate the creation, distribution, or use of synthetic or AI-generated content, including, for example, laws prohibiting the creation or distribution of AI-generated CSAM signed in Arkansas, Colorado, and Texas.  Various states also enacted laws prohibiting the distribution of AI-generated intimate imagery without consent, including Connecticut, Tennessee, and North Dakota.  The Governor of Texas also signed several bills into law related to age verification, takedown, and consent requirements for online platforms and websites that host prohibited AI-generated content or AI tools for generating such content.
• AI in Healthcare:  Illinois, Nevada, Oregon, and Texas enacted legislation to regulate the use of AI in healthcare settings.  For example, the Governor of Illinois signed the Wellness and Oversight for Psychological Resources Act (HB 1806) into law, which prohibits licensed therapists from using AI to make independent therapeutic decisions, directly interact with clients for therapeutic communication, generate therapeutic recommendations or treatment plans without licensed professional review and approval, or detect emotions or mental states.

IV. Connected & Automated Vehicles

Federal regulators made several significant announcements related to CAVs in recent months.  For example, on April 24, Secretary of Transportation Sean Duffy announced the National Highway Traffic Safety Administration’s (“NHTSA”) new Automated Vehicle (“AV”) Framework, with the goal of enabling the growth of the AV industry and removing government barriers to innovation.  The first actions under the framework are intended to modernize the Federal Motor Vehicle Safety Standards (“FMVSS”) to enable commercial deployment of AVs.  Relatedly, Secretary Duffy announced that NHTSA will update and streamline its exemption process under Part 555 of NHTSA’s vehicle safety regulations, with the goal of accelerating AV deployment.  Part 555 exemptions allow manufacturers to sell vehicles that do not fully comply with the FMVSS, such as vehicles without traditional steering wheels.  Because the Part 555 exemption process has historically been challenging and time-intensive for AVs, a streamlined exemption process is expected to promote AV development and adoption.  NHTSA granted its second-ever exemption after the reforms to the Part 555 exemption process were announced.

V. Internet of Things

Federal regulators issued several updates to guidance regarding the Internet of Things (“IoT”) in the first half of 2025.  In May 2025, the National Institute of Standards and Technology (“NIST”) announced the first revision to NIST IR 8259, Foundational Cybersecurity Activities for IoT Product Manufacturers, which proposes guidance on, among other topics, clarifying data management across AI components.  Additionally, in June, NIST released an essay describing its proposed updates to NIST SP 800-213, IoT Device Cybersecurity Guidance for the Federal Government: Establishing IoT Device Cybersecurity Requirements, which would address cybersecurity risks associated with IoT product adoption and integration, among other updates.

VI. Cryptocurrency & Blockchain

U.S. lawmakers and regulators continued to focus on reshaping the cryptocurrency landscape through various legislative and regulatory efforts, underscoring bipartisan momentum on the need for a robust and durable U.S. framework for digital assets.  Notably, President Trump signed the Guiding and Establishing National Innovation for U.S. Stablecoins (“GENIUS”) Act (S. 1582) into law, the first federal digital asset law enacted in the United States.  The Act imposes federal prudential and customer protection standards on stablecoin issuers, authorizes and integrates state regulatory regimes for certain issuers within the federal regulatory framework, and grants regulatory agencies authority over certain aspects of issuer activity.  Congress also continued to advance other cryptocurrency legislation, and the House passed two digital asset bills – the Digital Asset Market Clarity Act (“CLARITY Act”) (H.R. 3633) and the Anti-CBDC Surveillance State Act (H.R. 1919).

Federal regulatory developments complemented Congress’s efforts to promote cryptocurrency adoption.  For example, the Department of Justice disbanded its National Cryptocurrency Enforcement Team, and President Trump signed a Congressional Review Act repeal of the Internal Revenue Service’s decentralized finance (“DeFi”) “broker rule.”  Further, the Federal Deposit Insurance Corporation (“FDIC”) replaced prior guidance on crypto-related activities with FIL-7-2025, allowing FDIC-insured banks to engage in a broad array of crypto-related activities without prior approval, provided internal risk management is adequate.  The Securities and Exchange Commission (“SEC”) continued to reassess its approach to crypto, including through public roundtables held by its Crypto Task Force.  In April, the SEC issued a policy statement clarifying that “covered stablecoins” (fully backed 1:1 by U.S. dollars and redeemable on demand) are not considered securities under federal rules.  Additionally, the Federal Reserve announced that it was discontinuing the supervisory program that it had created specifically for crypto activities conducted by banks.

We will continue to update you on meaningful developments in these quarterly updates and across our blogs.

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for more than twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Nicholas Xenakis

Nick Xenakis draws on his Capitol Hill and legal experience to provide public policy and crisis management counsel to clients in a range of industries.

Nick assists clients in developing and implementing policy solutions to litigation and regulatory matters, including on issues involving antitrust, artificial intelligence, bankruptcy, criminal justice, financial services, immigration, intellectual property, life sciences, national security, and technology. He also represents companies and individuals in investigations before U.S. Senate and House Committees.

Nick previously served as General Counsel for the U.S. Senate Judiciary Committee, where he managed committee staff and directed legislative efforts. He also participated in key judicial and Cabinet confirmations, including of Attorneys General and Supreme Court Justices. Before his time on Capitol Hill, Nick served as an attorney with the Federal Public Defender’s Office for the Eastern District of Virginia.

Mike Nonaka

Michael Nonaka, co-chair of the firm’s Fintech Initiative and former co-chair of the Financial Services Group, advises banks, financial services providers, fintech companies, and commercial companies on a broad range of compliance, enforcement, transactional, and legislative matters.

He specializes in providing advice relating to federal and state licensing and applications matters for banks and other financial institutions, the development of partnerships and platforms to provide innovative financial products and services, and a broad range of compliance areas such as anti-money laundering, financial privacy, cybersecurity, and consumer protection. He also works closely with banks and their directors and senior leadership teams on sensitive supervisory and strategic matters.

Mike works with a number of banks, lending companies, money transmitters, payments firms, technology companies, and service providers on innovative technologies such as bitcoin and other cryptocurrencies, blockchain, big data, cloud computing, same day payments, and online lending. He has assisted numerous banks and fintech companies with the launch of innovative deposit and loan products, technology services, and cryptocurrency-related products and services.

Mike has advised a number of clients on compliance with TILA, ECOA, TISA, HMDA, FCRA, EFTA, GLBA, FDCPA, CRA, BSA, USA PATRIOT Act, FTC Act, Reg. K, Reg. O, Reg. W, Reg. Y, state money transmitter laws, state licensed lender laws, state unclaimed property laws, state prepaid access laws, and other federal and state laws and regulations.

Jayne Ponder

Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that involve laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. Additionally, she helps clients to advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.

August Gweon

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.

Jess Gonzalez Valenzuela

Jess Gonzalez Valenzuela (they/them and she/her) is an associate in the firm’s San Francisco office and a member of the Data Privacy and Cybersecurity Practice Group. Jess assists clients with cybersecurity issues such as incident response, risk management, internal investigations, and regulatory compliance. Additionally, Jess supports clients navigating complex data privacy challenges by offering regulatory compliance guidance tailored to specific business practices. Jess is also a member of the E-Discovery, AI, and Information Governance Practice Group and maintains an active pro bono practice.

Jess is committed to Diversity, Equity, and Inclusion (DEI) initiatives within the legal field. They are a member of Covington’s LGBTQ+ and Latino Firm Resource Groups, and serve as co-lead for the First Generation Professionals Network and the Disability and Neurodiversity Network in the San Francisco office.

Conor Kane

Conor Kane advises clients on a broad range of privacy, artificial intelligence, telecommunications, and emerging technology matters. He assists clients with complying with state privacy laws, developing AI governance structures, and engaging with the Federal Communications Commission.

Before joining Covington, Conor worked in digital advertising, helping teams develop large consumer data collection and analytics platforms. He uses this experience to advise clients on matters related to digital advertising and advertising technology.

Max Larson

Max Larson is an associate in the firm’s Washington, DC office. She is a member of the Commercial Litigation Group and the Technology and Communications Regulation Group.

McCall Wells

McCall Wells is an associate in the firm’s San Francisco office. Her practice focuses on matters related to technology transactions and technology regulation.

McCall also maintains an active pro bono practice, with a particular focus on immigration law. She also advises nonprofit companies on corporate governance and IP concerns.

McCall earned her J.D. from the Georgetown University Law Center, where she was a Global Law Scholar and student attorney in the Communications & Technology Law Clinic. Prior to joining the firm, McCall was a fellow at a trade association focused on developing responsible regulation for digital assets. She has experience advocating on behalf of technology companies before state and federal agencies.

Grace Howard

Grace Howard is an associate in the firm’s Washington, DC office. She represents and advises clients on a range of cybersecurity, data privacy, and government contracts issues, including cyber and data security incident response and preparedness, regulatory compliance, and internal investigations, including matters involving allegations of noncompliance with U.S. government cybersecurity regulations and fraud under the False Claims Act.

Prior to joining the firm, Grace served in the United States Navy as a Surface Warfare Officer and currently serves in the U.S. Navy Reserve.