August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

House Republicans have passed through committee a nationwide, 10-year moratorium on the enforcement of state and local laws and regulations that impose requirements on AI and automated decision systems.  The moratorium, which would not apply to laws that promote AI adoption, highlights the widening gap between a wave of new state AI laws and the

Continue Reading House Republicans Push for 10-Year Moratorium on State AI Laws

This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration.  This blog describes AI actions taken by the Trump Administration in April 2025, and prior articles in this series are available here.

White House OMB Issues AI Use & Procurement Requirements for Federal Agencies

On April 3, the White House Office of Management & Budget (“OMB”) issued two memoranda on the use and procurement of AI by federal agencies: Memorandum M-25-21 on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (“OMB AI Use Memo”) and Memorandum M-25-22 on Driving Efficient Acquisition of Artificial Intelligence in Government (“OMB AI Procurement Memo”).  The two memos partially implement President Trump’s January 23 Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence,” which, among other things, directs OMB to revise the Biden OMB AI Memos to align with the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”  The OMB AI Use Memo outlines agency governance and risk management requirements for the use of AI, including AI use case inventories and generative AI policies, and establishes “minimum risk management practices” for “high-impact AI use cases.”  The OMB AI Procurement Memo establishes requirements for agency AI procurement, including preferences for AI “developed and produced in the United States” and contract terms to protect government data and prevent vendor lock-in.  According to the White House’s fact sheet, the OMB Memos, which rescind and replace AI use and procurement memos issued under President Biden’s Executive Order 14110, shift U.S. AI policy to a “forward-leaning, pro-innovation, and pro-competition mindset” that will make agencies “more agile, cost-effective, and efficient.”

Continue Reading April 2025 AI Developments Under the Trump Administration

This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain. 

I. Artificial Intelligence

A. Federal Legislative Developments

In the first quarter, members of Congress introduced several AI bills addressing national security, including bills that would encourage the use of AI for border security and drug enforcement purposes.  Other AI legislative proposals focused on workforce skills, international investment in critical industries, U.S. AI supply chain resilience, and AI-enabled fraud.  Notably, members of Congress from both parties advanced legislation to regulate AI deepfakes and codify the National AI Research Resource, as discussed below.

  • CREATE AI Act:  In March, Reps. Jay Obernolte (R-CA) and Don Beyer (D-VA) re-introduced the Creating Resources for Every American To Experiment with Artificial Intelligence (“CREATE AI”) Act (H.R. 2385), following its introduction and near passage in the Senate last year.  The CREATE AI Act would codify the National AI Research Resource (“NAIRR”), with the goal of advancing AI development and innovation by offering AI computational resources, common datasets and repositories, educational tools and services, and AI testbeds to individuals, private entities, and federal agencies.  The CREATE AI Act builds on the work of the NAIRR Task Force, established by the National AI Initiative Act of 2020, which issued a final report in January 2023 recommending the establishment of NAIRR.

Continue Reading U.S. Tech Legislative & Regulatory Update – First Quarter 2025

This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration.  This blog describes AI actions taken by the Trump Administration in March 2025, and prior articles in this series are available here.

White House Receives Public Comments on AI Action Plan

On March 15, the White House Office of Science & Technology Policy and the Networking and Information Technology Research and Development National Coordination Office within the National Science Foundation closed the comment period for public input on the White House’s AI Action Plan, following their issuance of a Request for Information (“RFI”) on the AI Action Plan on February 6.  As required by President Trump’s AI EO, the RFI called on stakeholders to submit comments on the highest-priority policy actions for the new AI Action Plan, centered around 20 broad and non-exclusive topics, including data centers, data privacy and security, technical and safety standards, intellectual property, and procurement.  The resulting AI Action Plan is intended to achieve the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”

The RFI resulted in 8,755 submitted comments, including submissions from nonprofit organizations, think tanks, trade associations, industry groups, academia, and AI companies.  The final AI Action Plan is expected by July 2025.

NIST Launches New AI Standards Initiatives

Continue Reading March 2025 AI Developments Under the Trump Administration

On April 3, the White House Office of Management and Budget (“OMB”) released two memoranda with AI guidance and requirements for federal agencies, Memorandum M-25-21 on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (“OMB AI Use Memo”) and Memorandum M-25-22 on Driving Efficient Acquisition of Artificial Intelligence in Government (“OMB AI Procurement Memo”).  According to the White House’s fact sheet, the OMB AI Use and AI Procurement Memos (collectively, the “new OMB AI Memos”), which rescind and replace OMB memos on AI use and procurement issued under President Biden’s Executive Order 14110 (“Biden OMB AI Memos”), shift U.S. AI policy to a “forward-leaning, pro-innovation, and pro-competition mindset” that will make agencies “more agile, cost-effective, and efficient.”  The new OMB AI Memos implement President Trump’s January 23 Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence” (the “AI EO”), which directs the OMB to revise the Biden OMB AI Memos to make them consistent with the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”

Overall, the new OMB AI Memos build on the frameworks established under President Trump’s 2020 Executive Order 13960 on “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government” and the Biden OMB AI Memos.  This is consistent with the AI EO, which noted that the Administration would “revise” the Biden AI Memos “as necessary.”  At the same time, the new OMB AI Memos include some significant differences from the Biden OMB’s approach in the areas discussed below (as well as other areas).

Continue Reading OMB Issues First Trump 2.0-Era Requirements for AI Use and Procurement by Federal Agencies

On March 24, the Senate Judiciary Subcommittee on the Constitution held a hearing on the “Censorship Industrial Complex,” where senators and witnesses expressed divergent views on risks to First Amendment rights.  Senator Eric Schmitt (R-MO), the Subcommittee Chair, began the hearing by warning that the “vast censorship enterprise that the Biden Administration built” has expanded into an “alliance of activists, academics, journalists, big tech companies, and federal bureaucrats” that uses “novel tools and technologies of the 21st century” to silence critics.  Senator Peter Welch (D-VT), the Ranking Member of the Subcommittee, expressed skepticism about alleged censorship by the Biden Administration and social media companies, citing the Supreme Court’s 2024 opinion in Murthy v. Missouri, and accused the Trump Administration of causing “real suppression of free speech.”

The witnesses at the hearing, including law professors, journalists, and an attorney from the Reporters Committee for Freedom of the Press, expressed contrasting views on the state of free expression and risks of censorship.  Mollie Hemingway, the Editor-in-Chief of The Federalist, argued that federal and state governments “fund and promote censorship and blacklisting technology” to undermine free speech in coordination with universities, non-profit entities, and technology companies.  Jonathan Turley, a George Washington University law professor, and Benjamin Weingarten, an investigative journalist, raised similar censorship concerns, with Turley arguing that a “cottage industry of disinformation experts” had “monetized censorship” and adding that the EU’s Digital Services Act presents a “new, emerging threat” to First Amendment rights.

Continue Reading Senate Judiciary Subcommittee Holds Hearing on the “Censorship Industrial Complex”

On March 18, the Joint California Policy Working Group on AI Frontier Models (the “Working Group”) released its draft report on the regulation of foundation models, with the aim of providing an “evidence-based foundation for AI policy decisions” in California that “ensure[s] these powerful technologies benefit society globally while reasonably managing emerging risks.”  The Working Group was established by California Governor Gavin Newsom (D) in September 2024, following his veto of California State Senator Scott Wiener (D-San Francisco)’s Safe & Secure Innovation for Frontier AI Models Act (SB 1047).  The Working Group builds on California’s partnership with Stanford University and the University of California, Berkeley, established by Governor Newsom’s 2023 Executive Order on generative AI.

Noting that “foundation model capabilities have rapidly improved” since the veto of SB 1047 and that California’s “unique opportunity” to shape AI governance “may not remain open indefinitely,” the report assesses transparency, third-party risk assessment, and adverse event reporting requirements as key components for foundation model regulation.

Continue Reading California Frontier AI Working Group Issues Report on Foundation Model Regulation

This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration.  The first blog summarized key actions taken in the first weeks of the Trump Administration, including the revocation of President Biden’s 2023 Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of AI” and the release of President Trump’s Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence” (“AI EO”).  This blog describes actions on AI taken by the Trump Administration in February 2025.

Continue Reading February 2025 AI Developments Under the Trump Administration

State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025.  As described further in this blog post, many of these AI legislative proposals fall into several key categories: (1) comprehensive consumer protection legislation similar to the Colorado AI Act, (2) sector-specific legislation on automated decision-making, (3) chatbot regulation, (4) generative AI transparency requirements, (5) AI data center and energy usage requirements, and (6) frontier model public safety legislation.  Although these categories represent just a subset of current AI legislative activity, they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.

Continue Reading State Legislatures Consider New Wave of 2025 AI Legislation

Last month, DeepSeek, an AI start-up based in China, grabbed headlines with claims that its latest large language AI model, DeepSeek-R1, could perform on par with more expensive and market-leading AI models despite allegedly requiring less than $6 million worth of computing power from older and less-powerful chips.  Although some industry observers have raised doubts about the validity of DeepSeek’s claims, its AI model and AI-powered application piqued the curiosity of many, leading the DeepSeek application to become the most downloaded in the United States in late January.  DeepSeek was founded in July 2023 and is owned by High-Flyer, a hedge fund based in Hangzhou, Zhejiang.

The explosive popularity of DeepSeek coupled with its Chinese ownership has unsurprisingly raised data security concerns from U.S. federal and state officials.  These concerns echo many of the same considerations that led to a FAR rule that prohibits telecommunications equipment and services from Huawei and certain other Chinese manufacturers.  What is remarkable here is the pace at which officials at different levels of government, including the White House, Congress, federal agencies, and state governments, have taken action in response to DeepSeek and its perceived risks to national security.

Continue Reading U.S. Federal and State Governments Moving Quickly to Restrict Use of DeepSeek