The results of the 2024 U.S. election are expected to have significant implications for AI legislation and regulation at both the federal and state levels.

Like the first Trump Administration, the second Trump Administration is likely to prioritize AI innovation, R&D, national security uses of AI, and U.S. private sector investment and leadership in AI.  Although recent AI model testing and reporting requirements established by the Biden Administration may be halted or revoked, efforts to promote private-sector innovation and competition with China are expected to continue.  And while antitrust enforcement involving large technology companies may continue in the Trump Administration, more prescriptive AI rulemaking efforts such as those launched by the current leadership of the Federal Trade Commission (“FTC”) are likely to be curtailed substantially.

In the House and Senate, Republican majorities are likely to adopt priorities similar to those of the Trump Administration, with a continued focus on AI-generated deepfakes and prohibitions on the use of AI for government surveillance and content moderation. 

At the state level, legislatures in California, Texas, Colorado, Connecticut, and other states likely will advance AI legislation on issues ranging from algorithmic discrimination to digital replicas and generative AI watermarking.

This post covers the effects of the recent U.S. election on these areas and what to expect as we enter 2025.  (Click here for our summary of the 2024 election implications on AI-related industrial policy and competition with China.)

The White House

As stated in the Republican Party’s 2024 platform and by the president-elect on the campaign trail, the incoming Trump Administration plans to revoke President Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“2023 AI EO”).  The incoming administration also is expected to halt ongoing agency rulemakings related to AI, including a Department of Commerce rulemaking to implement the 2023 AI EO’s dual-use foundation model reporting and red-team testing requirements.  President-elect Trump’s intention to re-nominate Russell Vought as Director of the Office of Management and Budget (“OMB”) suggests that a light-touch approach to AI regulation may be taken across all federal agencies.  As OMB Director in the prior Trump Administration, Vought issued a memo directing federal agencies to “avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.”

Recent OMB efforts to implement the 2023 AI EO also are likely to be narrowed or revoked, including the “minimum risk management practices” for federal agencies’ use and procurement of rights- and safety-impacting AI established under two OMB memoranda issued earlier this year.  In contrast to these detailed testing, documentation, and incident reporting requirements, the incoming administration is likely to return to the approach of the first Trump Administration’s Executive Order No. 13960, which set out nine guiding principles for federal agencies when designing, developing, acquiring, or using AI, including transparency and accountability. 

It is unclear how the Trump Administration will act on the more than 100 other federal agency actions that have been completed pursuant to the 2023 AI EO.  Some of these initiatives, such as the National AI Research Resource (“NAIRR”) pilot and the U.S. AI Safety Institute, have attracted bipartisan and industry support and could be maintained by the Trump Administration or codified in future legislation like the CREATE AI Act (H.R. 5077, S. 2714), versions of which advanced out of their respective House and Senate committees earlier this year.  Initiatives related to AI innovation, defense, and workforce talent also may continue, potentially under a new executive order or guidance similar to the first Trump Administration’s Executive Order No. 13859, which established an action plan to protect U.S. AI technologies from competitors and foreign adversaries, directed federal agencies to prioritize AI R&D, and called for increased access to federal data and computing resources for AI research.  We previously covered EO 13859 here.

Although other aspects of the incoming administration’s AI policy agenda remain to be seen, the administration can be expected to take a more industry-friendly approach, as signaled by the president-elect’s announcement of venture capitalist David Sacks as White House “AI & Crypto Czar” and head of the President’s Council of Advisors on Science and Technology.  Other AI policy priorities for the incoming administration could include expanding energy supply to power AI data centers, as the president-elect indicated on the campaign trail and in his announcement of North Dakota Governor Doug Burgum (R) as his nominee for Interior Secretary and as head of the National Energy Council.  Additionally, Vice President-elect JD Vance has expressed support for open-source AI models as a way to address concentration in the AI industry.  Vivek Ramaswamy, co-chair of the yet-to-be-established “Department of Government Efficiency,” has called for the replacement of legacy technology across the federal government, which could entail increased acquisition and deployment of AI tools by federal agencies.

FTC and DOJ Enforcement

In light of the president-elect’s announcement that he will name Commissioner Andrew Ferguson as Chair of the FTC, replacing incumbent Chair Lina Khan, agency efforts to regulate AI through rulemaking under Section 5 of the FTC Act likely will be wound down.  While Commissioner Ferguson has stated that he will “focus antitrust enforcement against Big Tech monopolies, especially those companies engaged in unlawful censorship,” he has also promised to “end the FTC’s attempt to become an AI regulator” and “terminate all initiatives investigating . . . AI ‘bias.’”  Similarly, although the nomination of Gail Slater, a former FTC attorney and policy advisor to Vice President-elect Vance, as Assistant Attorney General for the Department of Justice (“DOJ”) Antitrust Division suggests that scrutiny of “Big Tech” under current Assistant Attorney General Jonathan Kanter will remain an important enforcement priority, it is unclear whether the DOJ will continue to focus on AI in other enforcement actions.

The 119th Congress

With Republican majorities in both houses of Congress, incoming Senate Majority Leader John Thune (R-SD) and House Speaker Mike Johnson (R-LA) are likely to focus Congress’s AI policy agenda on the issues described above, particularly on maintaining U.S. advantages over China in AI models and applications critical to national security and defense.  Members of Congress also may pursue AI legislation that enjoyed bipartisan support in the current Congress, including legislation to codify NAIRR and legislation sponsored by Sen. Thune to codify the U.S. AI Safety Institute.  Congress also is expected to continue legislative efforts to address AI-generated digital replicas and nonconsensual online deepfakes, such as the TAKE IT DOWN Act (S. 4569), introduced by Sen. Ted Cruz (R-TX), the likely incoming chair of the Senate Commerce Committee, and passed by the Senate on December 3.  Other potential AI issues for the GOP majority include requirements for federal agency uses of AI and energy infrastructure improvements to support AI development.  On November 14, for example, incoming Chair of the House Energy & Commerce Committee Brett Guthrie (R-KY) signed a letter urging the Secretary of Energy to consider the “critical energy infrastructure needs” of AI systems.

On December 17, the bipartisan House AI Task Force released its final report on principles, recommendations, and policy proposals for congressional policymaking on AI.  The report’s recommendations, including establishing nationwide protections against digital replicas and deepfakes, considering legislation to address safety risks of advanced AI, and establishing a commission to study cross-sector AI regulation and federal preemption, may shed light on the AI policy goals of the 119th Congress. 

State Legislatures

In 2024, states enacted dozens of AI laws related to deepfakes, digital replicas, generative AI, synthetic content labeling, and high-risk AI systems.  State lawmakers can be expected to continue to pursue legislation on these and other AI issues in 2025—a trend that could be accelerated by a slowdown in prescriptive AI regulatory efforts at the federal level. 

In Texas, Missouri, California, and other states, lawmakers already have drafted or pre-filed AI legislation ahead of the 2025 legislative sessions, including AI consumer protection legislation similar to the Colorado AI Act (SB 205), enacted earlier this year.  State lawmakers also are likely to pursue public safety regulations for large AI models in 2025, similar to the Safe & Secure Innovation for Frontier AI Models Act (SB 1047) passed by the California legislature in August.  Following California Governor Gavin Newsom (D)’s veto of SB 1047, supporters of the legislation indicated that they would continue to push for similar guardrails in 2025.

*              *              *

Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.

Yaron Dori

Yaron Dori has over 25 years of experience advising technology, telecommunications, media, life sciences, and other types of companies on their most pressing business challenges. He is a former chair of the firm’s technology, communications and media practices and currently serves on the firm’s eight-person Management Committee.

Yaron’s practice advises clients on strategic planning, policy development, transactions, investigations and enforcement, and regulatory compliance.

Early in his career, Yaron advised telecommunications companies and investors on regulatory policy and frameworks that led to the development of broadband networks. When those networks became bidirectional and enabled companies to collect consumer data, he advised those companies on their data privacy and consumer protection obligations. Today, as new technologies such as Artificial Intelligence (AI) are being used to enhance the applications and services offered by such companies, he advises them on associated legal and regulatory obligations and risks. It is this varied background – which tracks the evolution of the technology industry – that enables Yaron to provide clients with a holistic, 360-degree view of technology policy, regulation, compliance, and enforcement.

Yaron represents clients before federal regulatory agencies—including the Federal Communications Commission (FCC), the Federal Trade Commission (FTC), and the Department of Commerce (DOC)—and the U.S. Congress in connection with a range of issues under the Communications Act, the Federal Trade Commission Act, and similar statutes. He also represents clients on state regulatory and enforcement matters, including those that pertain to telecommunications, data privacy, and consumer protection regulation. His deep experience in each of these areas enables him to advise clients on a wide range of technology regulations and key business issues in which these areas intersect.

With respect to technology and telecommunications matters, Yaron advises clients on a broad range of business, policy and consumer-facing issues, including:

  • Artificial Intelligence and the Internet of Things;
  • Broadband deployment and regulation;
  • IP-enabled applications, services and content;
  • Section 230 and digital safety considerations;
  • Equipment and device authorization procedures;
  • The Communications Assistance for Law Enforcement Act (CALEA);
  • Customer Proprietary Network Information (CPNI) requirements;
  • The Cable Privacy Act;
  • Net Neutrality; and
  • Local competition, universal service, and intercarrier compensation.

Yaron also has extensive experience in structuring transactions and securing regulatory approvals at both the federal and state levels for mergers, asset acquisitions and similar transactions involving large and small FCC and state communication licensees.

With respect to privacy and consumer protection matters, Yaron advises clients on a range of business, strategic, policy and compliance issues, including those that pertain to:

  • The FTC Act and related agency guidance and regulations;
  • State privacy laws, such as the California Consumer Privacy Act (CCPA) and California Privacy Rights Act, the Colorado Privacy Act, the Connecticut Data Privacy Act, the Virginia Consumer Data Protection Act, and the Utah Consumer Privacy Act;
  • The Electronic Communications Privacy Act (ECPA);
  • Location-based services that use WiFi, beacons or similar technologies;
  • Digital advertising practices, including native advertising and endorsements and testimonials; and
  • The application of federal and state telemarketing, commercial fax, and other consumer protection laws, such as the Telephone Consumer Protection Act (TCPA), to voice, text, and video transmissions.

Yaron also has experience advising companies on congressional, FCC, FTC and state attorney general investigations into various consumer protection and communications matters, including those pertaining to social media influencers, digital disclosures, product discontinuance, and advertising claims.

Holly Fechner

Holly Fechner advises clients on complex public policy matters that combine legal and political opportunities and risks. She leads teams that represent companies, entities, and organizations in significant policy and regulatory matters before Congress and the Executive Branch.

She is a co-chair of Covington’s Technology Industry Group and a member of the Covington Political Action Committee board of directors.

Holly works with clients to:

  • Develop compelling public policy strategies
  • Research law and draft legislation and policy
  • Draft testimony, comments, fact sheets, letters and other documents
  • Advocate before Congress and the Executive Branch
  • Form and manage coalitions
  • Develop communications strategies

She is the Executive Director of Invent Together and a visiting lecturer at the Harvard Kennedy School of Government. She serves on the board of directors of the American Constitution Society.

Holly served as Policy Director for Senator Edward M. Kennedy (D-MA) and Chief Labor and Pensions Counsel for the Senate Health, Education, Labor & Pensions Committee.

She received The American Lawyer’s “Dealmaker of the Year” award in 2019. The Hill has named her a “Top Lobbyist” from 2013 to the present, and she has been ranked in Chambers USA: America’s Leading Business Lawyers from 2012 to the present. One client noted to Chambers: “Holly is an exceptional attorney who excels in government relations and policy discussions. She has an incisive analytical skill set which gives her the capability of understanding extremely complex legal and institutional matters.” According to another client surveyed by Chambers, “Holly is incredibly intelligent, effective and responsive. She also leads the team in a way that brings out everyone’s best work.”

Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics, advising clients on important legislative, regulatory and enforcement matters before Congress, state legislatures, and government agencies that present significant legal, political, and business opportunities and risks.

Drawing on more than 15 years of experience on Capitol Hill, private practice, state government, and political campaigns, Matt develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies. He regularly counsels businesses—especially technology companies—on matters involving intellectual property, national security, and regulation of critical and emerging technologies like artificial intelligence and autonomous vehicles.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee, particularly federal election and campaign finance law, Federal Election Commission nominations, and oversight of the legislative branch, including U.S. Capitol security after the January 6, 2021 attack and the rules and procedures governing the Senate. Most significantly, Matt led the Committee’s staff work on the Electoral Count Reform Act, a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections, and the Committee’s joint bipartisan investigation (with the Homeland Security Committee) into the security planning and response to the January 6th attack.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory policy matters and managing state advocacy efforts.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, and the election and political laws of states and municipalities across the country.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA) as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.