August Gweon

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experience to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.

As the California Legislature’s 2025 session draws to a close, lawmakers have advanced over a dozen AI bills to the final stages of the legislative process, setting the stage for a potential showdown with Governor Gavin Newsom (D).  The AI bills, some of which have already passed both chambers, reflect recent trends in state AI

Continue Reading California Lawmakers Advance Suite of AI Bills

This update highlights key mid-year legislative and regulatory developments and builds on our first quarter update related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), Internet of Things (“IoT”), and cryptocurrencies and blockchain developments.

I. Federal AI Legislative Developments

    In the first session of the 119th Congress, lawmakers rejected a proposed moratorium on state and local enforcement of AI laws and advanced several AI legislative proposals focused on deepfake-related harms.  Specifically, on July 1, after weeks of negotiations, the Senate voted 99-1 to strike a proposed 10-year moratorium on state and local enforcement of AI laws from the budget reconciliation package, the One Big Beautiful Bill Act (H.R. 1), which President Trump signed into law.  The vote to strike the moratorium followed the collapse of an agreement on revised language that would have shortened the moratorium to 5 years and allowed states to enforce “generally applicable laws,” including child online safety, digital replica, and CSAM laws, that do not have an “undue or disproportionate effect” on AI.  Congress could technically still consider the moratorium during this session, but the chances of that are low, given both the political atmosphere and the lack of a must-pass legislative vehicle in which it could be included.  See our blog post on this topic for more information.

    Additionally, lawmakers continue to focus legislation on deepfakes and intimate imagery.  For example, on May 19, President Trump signed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (“TAKE IT DOWN”) Act (H.R. 633 / S. 146) into law, which requires online platforms to establish a notice and takedown process for nonconsensual intimate visual depictions, including certain depictions created using AI.  See our blog post on this topic for more information.  Meanwhile, members of Congress continued to pursue additional legislation to address deepfake-related harms, such as the STOP CSAM Act of 2025 (S. 1829 / H.R. 3921) and the Disrupt Explicit Forged Images And Non-Consensual Edits (“DEFIANCE”) Act (H.R. 3562 / S. 1837).

    Continue Reading U.S. Tech Legislative & Regulatory Update – 2025 Mid-Year Update

    On July 23, the White House released its AI Action Plan, outlining the key priorities of the Trump Administration’s AI policy agenda.  In parallel, President Trump signed three AI executive orders directing the Executive Branch to implement the AI Action Plan’s policies on “Preventing Woke AI in the Federal Government,” “Accelerating Federal Permitting of

    Continue Reading Trump Administration Issues AI Action Plan and Series of AI Executive Orders

    On June 22, Texas Governor Greg Abbott (R) signed the Texas Responsible AI Governance Act (“TRAIGA”) (HB 149) into law.  The law, which takes effect on January 1, 2026, makes Texas the second state to enact comprehensive AI consumer protection legislation, following the 2024 enactment of the Colorado AI Act.  Unlike the

    Continue Reading Texas Enacts AI Consumer Protection Law

    On June 17, the Joint California Policy Working Group on AI Frontier Models (“Working Group”) issued its final report on frontier AI policy, following public feedback on the draft version of the report released in March.  The report describes “frontier models” as the “most capable” subset of foundation models, or a class of general-purpose technologies

    Continue Reading California Frontier AI Working Group Issues Final Report on Frontier Model Regulation

    On June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that would establish safeguard, reporting, disclosure, and other requirements for large developers of frontier AI models.  If signed into law by Governor Kathy Hochul (D), the RAISE Act would

    Continue Reading New York Legislature Passes Sweeping AI Safety Legislation

    This year, state lawmakers have introduced over a dozen bills to regulate “surveillance,” “personalized,” or “dynamic” pricing.  Although many of these proposals have failed as 2025 state legislative sessions come to a close, lawmakers in New York, California, and a handful of other states are moving forward with a range of different approaches.  These proposals

    Continue Reading State Legislatures Advance Surveillance Pricing Regulations

    House Republicans have advanced through committee a nationwide, 10-year moratorium on the enforcement of state and local laws and regulations that impose requirements on AI and automated decision systems.  The moratorium, which would not apply to laws that promote AI adoption, highlights the widening gap between a wave of new state AI laws and the

    Continue Reading House Republicans Push for 10-Year Moratorium on State AI Laws

    This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration.  This blog describes AI actions taken by the Trump Administration in April 2025, and prior articles in this series are available here.

    White House OMB Issues AI Use & Procurement Requirements for Federal Agencies

    On April 3, the White House Office of Management & Budget (“OMB”) issued two memoranda on the use and procurement of AI by federal agencies: Memorandum M-25-21 on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (“OMB AI Use Memo”) and Memorandum M-25-22 on Driving Efficient Acquisition of Artificial Intelligence in Government (“OMB AI Procurement Memo”).  The two memos partially implement President Trump’s January 23 Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence,” which, among other things, directs OMB to revise the Biden OMB AI Memos to align with the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”  The OMB AI Use Memo outlines agency governance and risk management requirements for the use of AI, including AI use case inventories and generative AI policies, and establishes “minimum risk management practices” for “high-impact AI use cases.”  The OMB AI Procurement Memo establishes requirements for agency AI procurement, including preferences for AI “developed and produced in the United States” and contract terms to protect government data and prevent vendor lock-in.  According to the White House’s fact sheet, the OMB Memos, which rescind and replace AI use and procurement memos issued under President Biden’s Executive Order 14110, shift U.S. AI policy to a “forward-leaning, pro-innovation, and pro-competition mindset” that will make agencies “more agile, cost-effective, and efficient.”

    Continue Reading April 2025 AI Developments Under the Trump Administration

    This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain. 

    I. Artificial Intelligence

    A. Federal Legislative Developments

    In the first quarter, members of Congress introduced several AI bills addressing national security, including bills that would encourage the use of AI for border security and drug enforcement purposes.  Other AI legislative proposals focused on workforce skills, international investment in critical industries, U.S. AI supply chain resilience, and AI-enabled fraud.  Notably, members of Congress from both parties advanced legislation to regulate AI deepfakes and codify the National AI Research Resource, as discussed below.

    • CREATE AI Act:  In March, Reps. Jay Obernolte (R-CA) and Don Beyer (D-VA) re-introduced the Creating Resources for Every American To Experiment with Artificial Intelligence (“CREATE AI”) Act (H.R. 2385), following its introduction and near passage in the Senate last year.  The CREATE AI Act would codify the National AI Research Resource (“NAIRR”), with the goal of advancing AI development and innovation by offering AI computational resources, common datasets and repositories, educational tools and services, and AI testbeds to individuals, private entities, and federal agencies.  The CREATE AI Act builds on the work of the NAIRR Task Force, established by the National AI Initiative Act of 2020, which issued a final report in January 2023 recommending the establishment of NAIRR.
    Continue Reading U.S. Tech Legislative & Regulatory Update – First Quarter 2025