
August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

This is the first in a new series of Covington blogs on the AI policies, executive orders, and other actions of the new Trump Administration.  This blog describes key actions on AI taken by the Trump Administration in January 2025.

Outgoing President Biden Issues Executive Order and Data Center Guidance for AI Infrastructure

Before turning to the Trump Administration, we note one key AI development from the final weeks of the Biden Administration.  On January 14, in one of his final acts in office, President Biden issued Executive Order 14141 on “Advancing United States Leadership in AI Infrastructure.”  This EO, which remains in force, sets out requirements and deadlines for the construction and operation of “frontier AI infrastructure,” including data centers and clean energy facilities, by private-sector entities on federal land.  Specifically, EO 14141 directs the Departments of Defense (“DOD”) and Energy (“DOE”) to lease federal lands for the construction and operation of AI data centers and clean energy facilities by the end of 2027, establishes solicitation and lease application processes for private sector applicants, directs federal agencies to take various steps to streamline and consolidate environmental permitting for AI infrastructure, and directs the DOE to take steps to update the U.S. electricity grid to meet the growing energy demands of AI. Continue Reading January 2025 AI Developments – Transitioning to the Trump Administration

On February 6, the White House Office of Science & Technology Policy (“OSTP”) and National Science Foundation (“NSF”) issued a Request for Information (“RFI”) seeking public input on the “Development of an Artificial Intelligence Action Plan.”  The RFI marks a first step toward the implementation of the Trump Administration’s January 23 Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence” (the “EO”).  Specifically, the EO directs Assistant to the President for Science & Technology (and OSTP Director nominee) Michael Kratsios, White House AI & Crypto Czar David Sacks, and National Security Advisor Michael Waltz to “develop and submit to the President an action plan” to achieve the EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance” to “promote human flourishing, economic competitiveness, and national security.” Continue Reading Trump Administration Seeks Public Comment on AI Action Plan

On January 29, Senator Josh Hawley (R-MO) introduced the Decoupling America’s Artificial Intelligence Capabilities from China Act (S. 321), one of the first bills of the 119th Congress to address escalating U.S. competition with China on artificial intelligence.  The new legislation comes just days after Chinese AI company DeepSeek launched its R1 AI model with advanced capabilities, a release widely viewed as a possible turning point in the U.S.-China AI race.  If enacted, S. 321 would impose sweeping prohibitions on U.S. imports and exports of AI and generative AI technologies and R&D to and from China and bar U.S. investments in AI technology developed or produced in China.  The bill, which was referred to the Senate Judiciary Committee, had no cosponsors and no House companion at the time of introduction.

Specifically, S. 321 would prohibit U.S. persons—including any corporation or educational or research institution in the U.S. or controlled by U.S. citizens or permanent residents—from (1) exporting AI or generative AI technology or IP to China or (2) importing AI or generative AI technology or IP developed or produced in China.  In addition, the bill would bar U.S. persons from transferring AI or generative AI research to China or Chinese educational institutions, research institutions, corporations, or government entities (“Chinese entities of concern”), or from conducting AI or generative AI R&D within China or for, on behalf of, or in collaboration with such entities.

Finally, the bill would prohibit any U.S. person from financing AI R&D with connections to China.  The bill specifically prohibits U.S. persons from “holding or managing any interest in” or extending loans or lines of credit to Chinese entities of concern that conduct AI- or generative AI-related R&D, produce goods that incorporate AI or generative AI R&D, assist with Chinese military or surveillance capabilities, or are implicated in human rights abuses. Continue Reading Senator Hawley Introduces Sweeping U.S.-China AI Decoupling Bill

On January 14, 2025, the Biden Administration issued an Executive Order on “Advancing United States Leadership in Artificial Intelligence Infrastructure” (the “EO”), with the goals of preserving U.S. economic competitiveness and access to powerful AI models, preventing U.S. dependence on foreign infrastructure, and promoting U.S. clean energy production to power the development and operation of AI.  Pursuant to these goals, the EO outlines criteria and timeframes for the construction and operation of “frontier AI infrastructure,” including data centers and clean energy resources, by private-sector entities on federal land.  The EO builds upon a series of actions on AI issued by the Biden Administration, including the October 2023 Executive Order on Safe, Secure, and Trustworthy AI and an October 2024 AI National Security Memorandum. Continue Reading Biden Administration Releases Executive Order on AI Infrastructure

The results of the 2024 U.S. election are expected to have significant implications for AI legislation and regulation at both the federal and state level. 

Like the first Trump Administration, the second Trump Administration is likely to prioritize AI innovation, R&D, national security uses of AI, and U.S. private sector investment and leadership in AI.  Although recent AI model testing and reporting requirements established by the Biden Administration may be halted or revoked, efforts to promote private-sector innovation and competition with China are expected to continue.  And while antitrust enforcement involving large technology companies may continue in the Trump Administration, more prescriptive AI rulemaking efforts such as those launched by the current leadership of the Federal Trade Commission (“FTC”) are likely to be curtailed substantially.

In the House and Senate, Republican majorities are likely to adopt priorities similar to those of the Trump Administration, with a continued focus on AI-generated deepfakes and prohibitions on the use of AI for government surveillance and content moderation. 

At the state level, legislatures in California, Texas, Colorado, Connecticut, and other states are likely to advance AI legislation on issues ranging from algorithmic discrimination to digital replicas and generative AI watermarking. 

This post covers the effects of the recent U.S. election on these areas and what to expect as we enter 2025.  (Click here for our summary of the 2024 election implications on AI-related industrial policy and competition with China.) Continue Reading U.S. AI Policy Expectations in the Trump Administration, GOP Congress, and the States

This quarterly update highlights key legislative, regulatory, and litigation developments in the third quarter of 2024 related to artificial intelligence (“AI”) and connected and automated vehicles (“CAVs”).  As noted below, some of these developments provide industry with the opportunity for participation and comment.

I.     Artificial Intelligence

Federal Legislative Developments

There continued to be strong bipartisan

Continue Reading U.S. Tech Legislative, Regulatory & Litigation Update – Third Quarter 2024

On October 28, Texas State Representative Giovanni Capriglione (R-Tarrant County) released a draft of the Texas Responsible AI Governance Act (“TRAIGA”), after nearly a year of collecting input from industry stakeholders.  Representative Capriglione, who authored Texas’s Data Privacy and Security Act (discussed here) and currently co-chairs the state’s AI Advisory Council, appears likely to introduce TRAIGA in the upcoming legislative session scheduled to begin on January 14, 2025.  Modeled after the Colorado AI Act (SB 205) (discussed here) and the EU AI Act, TRAIGA would establish obligations for developers, deployers, and distributors of “high-risk AI systems.”  Additionally, TRAIGA would establish an “AI Regulatory Sandbox Program” for participating AI developers to test AI systems under a statutory exemption.

Although a number of states have expressed significant interest in AI regulation, Texas would, if TRAIGA is enacted, become only the second state to adopt industry-agnostic, risk-based AI legislation, following the passage of the Colorado AI Act in May.  There is significant activity in other states as well, as the California Privacy Protection Agency considers rules that would apply to certain automated decision and AI systems, and other states are expected to introduce AI legislation in the new session.  In addition to its requirements for high-risk AI and its AI sandbox program, TRAIGA would amend Texas’s Data Privacy and Security Act to incorporate AI-specific provisions and would provide for an AI workforce grant program and a new “AI Council” to provide advisory opinions and guidance on AI. Continue Reading Texas Legislature to Consider Sweeping AI Legislation in 2025

On September 29, California Governor Gavin Newsom (D) vetoed the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), putting an end, for now, to a months-long effort to establish public safety standards for developers of large AI systems.  SB 1047’s sweeping AI safety and security regime, which included annual third-party safety audits, shutdown capabilities, detailed safety and security protocols, and incident reporting requirements, would likely have established a de facto national safety standard for large AI models if enacted.  The veto followed rare public calls from Members of California’s congressional delegation—including Speaker Emerita Nancy Pelosi (D-CA) and Representatives Ro Khanna (D-CA), Anna Eshoo (D-CA), Zoe Lofgren (D-CA), and Jay Obernolte (R-CA)—for the governor to reject the bill.

In his veto message, Governor Newsom noted that “[AI] safety protocols must be adopted” with “[p]roactive guardrails” and “severe consequences for bad actors,” but he criticized SB 1047 for regulating based on the “cost and number of computations needed to develop an AI model.” SB 1047 would have defined “covered models” as AI models trained using more than 10²⁶ floating-point operations of computing power at a cost of more than $100 million.  In relying on cost and computing thresholds rather than “the system’s actual risks,” Newsom argued that SB 1047 “applies stringent standards to even the most basic functions–so long as a large system deploys it.”  Newsom added that SB 1047 could “give the public a false sense of security about controlling this fast-moving technology” while “[s]maller, specialized models” could be “equally or even more dangerous than the models targeted by SB 1047.” Continue Reading California Governor Vetoes AI Safety Bill

On August 29, California lawmakers passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), marking yet another major development in states’ efforts to regulate AI.  The legislation, which draws on concepts from the White House’s 2023 AI Executive Order (“AI EO”), follows months of high-profile debate and amendments and would establish an expansive AI safety and security regime for developers of “covered models.”  Governor Gavin Newsom (D) has until September 30 to sign or veto the bill. 

If signed into law, SB 1047 would join Colorado’s SB 205—the landmark AI anti-discrimination law passed in May and covered here—as another de facto standard for AI legislation in the United States in the absence of congressional action.  In contrast to Colorado SB 205’s focus on algorithmic discrimination risks for consumers, however, SB 1047 would address AI models that are technically capable of causing or materially enabling “critical harms” to public safety. Continue Reading California Legislature Passes Landmark AI Safety Legislation

Updated September 20, 2024.  Originally posted September 11, 2024.

On September 17, California Governor Gavin Newsom (D) signed two bills into law that limit the creation or use of “digital replicas,” making California the latest state to establish new protections for performers, artists, and other employees in response to the rise of AI-generated content.  These state efforts come as Congress considers the NO FAKES Act (S. 4875), introduced by Senator Chris Coons (D-DE) on July 31, which would establish a federal “digital replication right” over individuals’ own digital replicas and impose liability on persons who knowingly create, display, or distribute digital replicas without consent from the right holder. Continue Reading California Enacts Digital Replica Laws as Congress Considers Federal Approach