August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experience in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration.  This blog describes AI actions taken by the Trump Administration in March 2025, and prior articles in this series are available here.

White House Receives Public Comments on AI Action Plan

On March 15, the White House Office of Science & Technology Policy and the Networking and Information Technology Research and Development National Coordination Office within the National Science Foundation closed the comment period for public input on the White House’s AI Action Plan, following their issuance of a Request for Information (“RFI”) on the AI Action Plan on February 6.  As required by President Trump’s AI EO, the RFI called on stakeholders to submit comments on the highest-priority policy actions for inclusion in the new AI Action Plan, organized around 20 broad, non-exclusive topics, including data centers, data privacy and security, technical and safety standards, intellectual property, and procurement.  The resulting AI Action Plan is intended to achieve the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”

The RFI drew 8,755 comments, including submissions from nonprofit organizations, think tanks, trade associations, industry groups, academia, and AI companies.  The final AI Action Plan is expected by July 2025.

NIST Launches New AI Standards Initiatives

Continue Reading March 2025 AI Developments Under the Trump Administration

On April 3, the White House Office of Management and Budget (“OMB”) released two memoranda with AI guidance and requirements for federal agencies, Memorandum M-25-21 on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (“OMB AI Use Memo”) and Memorandum M-25-22 on Driving Efficient Acquisition of Artificial Intelligence in Government (“OMB AI Procurement Memo”).  According to the White House’s fact sheet, the OMB AI Use and AI Procurement Memos (collectively, the “new OMB AI Memos”), which rescind and replace OMB memos on AI use and procurement issued under President Biden’s Executive Order 14110 (“Biden OMB AI Memos”), shift U.S. AI policy to a “forward-leaning, pro-innovation, and pro-competition mindset” that will make agencies “more agile, cost-effective, and efficient.”  The new OMB AI Memos implement President Trump’s January 23 Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence” (the “AI EO”), which directs the OMB to revise the Biden OMB AI Memos to make them consistent with the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”

Overall, the new OMB AI Memos build on the frameworks established under President Trump’s 2020 Executive Order 13960 on “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government” and the Biden OMB AI Memos.  This is consistent with the AI EO, which noted that the Administration would “revise” the Biden OMB AI Memos “as necessary.”  At the same time, the new OMB AI Memos include some significant differences from the Biden OMB’s approach in the areas discussed below (as well as other areas).

Continue Reading OMB Issues First Trump 2.0-Era Requirements for AI Use and Procurement by Federal Agencies

On March 24, the Senate Judiciary Subcommittee on the Constitution held a hearing on the “Censorship Industrial Complex,” where senators and witnesses expressed divergent views on risks to First Amendment rights.  Senator Eric Schmitt (R-MO), the Subcommittee Chair, began the hearing by warning that the “vast censorship enterprise that the Biden Administration built” has expanded into an “alliance of activists, academics, journalists, big tech companies, and federal bureaucrats” that uses “novel tools and technologies of the 21st century” to silence critics.  Senator Peter Welch (D-VT), the Ranking Member of the Subcommittee, expressed skepticism about alleged censorship by the Biden Administration and social media companies, citing the Supreme Court’s 2024 opinion in Murthy v. Missouri, and accused the Trump Administration of causing “real suppression of free speech.”

The witnesses at the hearing, including law professors, journalists, and an attorney from the Reporters Committee for Freedom of the Press, expressed contrasting views on the state of free expression and risks of censorship.  Mollie Hemingway, the Editor-in-Chief of The Federalist, argued that federal and state governments “fund and promote censorship and blacklisting technology” to undermine free speech in coordination with universities, non-profit entities, and technology companies.  Jonathan Turley, a George Washington University law professor, and Benjamin Weingarten, an investigative journalist, raised similar censorship concerns, with Turley arguing that a “cottage industry of disinformation experts” had “monetized censorship” and adding that the EU’s Digital Services Act presents a “new, emerging threat” to First Amendment rights.

Continue Reading Senate Judiciary Subcommittee Holds Hearing on the “Censorship Industrial Complex”

On March 18, the Joint California Policy Working Group on AI Frontier Models (the “Working Group”) released its draft report on the regulation of foundation models, with the aim of providing an “evidence-based foundation for AI policy decisions” in California that “ensure[s] these powerful technologies benefit society globally while reasonably managing emerging risks.”  The Working Group was established by California Governor Gavin Newsom (D) in September 2024, following his veto of the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), authored by California State Senator Scott Wiener (D-San Francisco).  The Working Group builds on California’s partnership with Stanford University and the University of California, Berkeley, established by Governor Newsom’s 2023 Executive Order on generative AI.

Noting that “foundation model capabilities have rapidly improved” since the veto of SB 1047 and that California’s “unique opportunity” to shape AI governance “may not remain open indefinitely,” the report assesses transparency, third-party risk assessment, and adverse event reporting requirements as key components for foundation model regulation.

Continue Reading California Frontier AI Working Group Issues Report on Foundation Model Regulation

This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration.  The first blog summarized key actions taken in the first weeks of the Trump Administration, including the revocation of President Biden’s 2023 Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of AI” and the release of President Trump’s Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence” (“AI EO”).  This blog describes actions on AI taken by the Trump Administration in February 2025.

Continue Reading February 2025 AI Developments Under the Trump Administration

State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025.  As described further in this blog post, many of these AI legislative proposals fall into several key categories: (1) comprehensive consumer protection legislation similar to the Colorado AI Act, (2) sector-specific legislation on automated decision-making, (3) chatbot regulation, (4) generative AI transparency requirements, (5) AI data center and energy usage requirements, and (6) frontier model public safety legislation.  Although these categories represent just a subset of current AI legislative activity, they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.

Continue Reading State Legislatures Consider New Wave of 2025 AI Legislation

Last month, DeepSeek, an AI start-up based in China, grabbed headlines with claims that its latest large language model, DeepSeek-R1, could perform on par with more expensive, market-leading AI models despite allegedly requiring less than $6 million worth of computing power from older, less-powerful chips.  Although some industry observers have raised doubts about the validity of DeepSeek’s claims, its AI model and AI-powered application piqued the curiosity of many, and the DeepSeek application became the most downloaded in the United States in late January.  DeepSeek was founded in July 2023 and is owned by High-Flyer, a hedge fund based in Hangzhou, Zhejiang.

The explosive popularity of DeepSeek, coupled with its Chinese ownership, has unsurprisingly raised data security concerns among U.S. federal and state officials.  These concerns echo many of the same considerations that led to a FAR rule prohibiting telecommunications equipment and services from Huawei and certain other Chinese manufacturers.  What is remarkable here is the pace at which officials at different levels of government, including the White House, Congress, federal agencies, and state governments, have taken action in response to DeepSeek and its perceived risks to national security.

Continue Reading U.S. Federal and State Governments Moving Quickly to Restrict Use of DeepSeek

This is the first in a new series of Covington blogs on the AI policies, executive orders, and other actions of the new Trump Administration.  This blog describes key actions on AI taken by the Trump Administration in January 2025.

Outgoing President Biden Issues Executive Order and Data Center Guidance for AI Infrastructure

Before turning to the Trump Administration, we note one key AI development from the final weeks of the Biden Administration.  On January 14, in one of his final acts in office, President Biden issued Executive Order 14141 on “Advancing United States Leadership in AI Infrastructure.”  This EO, which remains in force, sets out requirements and deadlines for the construction and operation of “frontier AI infrastructure,” including data centers and clean energy facilities, by private-sector entities on federal land.  Specifically, EO 14141 directs the Departments of Defense (“DOD”) and Energy (“DOE”) to lease federal lands for the construction and operation of AI data centers and clean energy facilities by the end of 2027, establishes solicitation and lease application processes for private sector applicants, directs federal agencies to take various steps to streamline and consolidate environmental permitting for AI infrastructure, and directs the DOE to take steps to update the U.S. electricity grid to meet the growing energy demands of AI.

Continue Reading January 2025 AI Developments – Transitioning to the Trump Administration

On February 6, the White House Office of Science & Technology Policy (“OSTP”) and National Science Foundation (“NSF”) issued a Request for Information (“RFI”) seeking public input on the “Development of an Artificial Intelligence Action Plan.”  The RFI marks a first step toward the implementation of the Trump Administration’s January 23 Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence” (the “EO”).  Specifically, the EO directs Assistant to the President for Science & Technology (and OSTP Director nominee) Michael Kratsios, White House AI & Crypto Czar David Sacks, and National Security Advisor Michael Waltz to “develop and submit to the President an action plan” to achieve the EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance” to “promote human flourishing, economic competitiveness, and national security.”

Continue Reading Trump Administration Seeks Public Comment on AI Action Plan

On January 29, Senator Josh Hawley (R-MO) introduced the Decoupling America’s Artificial Intelligence Capabilities from China Act (S. 321), one of the first bills of the 119th Congress to address escalating U.S. competition with China on artificial intelligence.  The new legislation comes just days after Chinese AI company DeepSeek launched its R1 AI model with advanced capabilities, a development widely viewed as a possible turning point in the U.S.-China AI race.  If enacted, S. 321 would impose sweeping prohibitions on U.S. imports and exports of AI and generative AI technologies and R&D to and from China and bar U.S. investments in AI technology developed or produced in China.  The bill, which was referred to the Senate Judiciary Committee, had no cosponsors and no House companion at the time of introduction.

Specifically, S. 321 would prohibit U.S. persons—including any corporation or educational or research institution in the U.S. or controlled by U.S. citizens or permanent residents—from (1) exporting AI or generative AI technology or IP to China or (2) importing AI or generative AI technology or IP developed or produced in China.  In addition, the bill would bar U.S. persons from transferring AI or generative AI research to China or Chinese educational institutions, research institutions, corporations, or government entities (“Chinese entities of concern”), or from conducting AI or generative AI R&D within China or for, on behalf of, or in collaboration with such entities.

Finally, the bill would prohibit any U.S. person from financing AI R&D with connections to China.  The bill specifically prohibits U.S. persons from “holding or managing any interest in” or extending loans or lines of credit to Chinese entities of concern that conduct AI- or generative AI-related R&D, produce goods that incorporate AI or generative AI R&D, assist with Chinese military or surveillance capabilities, or are implicated in human rights abuses.

Continue Reading Senator Hawley Introduces Sweeping U.S.-China AI Decoupling Bill