On June 17, the Joint California Policy Working Group on AI Frontier Models (“Working Group”) issued its final report on frontier AI policy, following public feedback on the draft version of the report released in March. The report describes “frontier models” as the “most capable” subset of foundation models, or a class of general-purpose technologies…
August Gweon
August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.
August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.
New York Legislature Passes Sweeping AI Safety Legislation
On June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that would establish safeguard, reporting, disclosure, and other requirements for large developers of frontier AI models. If signed into law by Governor Kathy Hochul (D), the RAISE Act would…
State Legislatures Advance Surveillance Pricing Regulations
This year, state lawmakers have introduced over a dozen bills to regulate “surveillance,” “personalized,” or “dynamic” pricing. Although many of these proposals have failed as 2025 state legislative sessions come to a close, lawmakers in New York, California, and a handful of other states are moving forward with a range of different approaches. These proposals…
House Republicans Push for 10-Year Moratorium on State AI Laws
House Republicans have advanced through committee a nationwide, 10-year moratorium on the enforcement of state and local laws and regulations that impose requirements on AI and automated decision systems. The moratorium, which would not apply to laws that promote AI adoption, highlights the widening gap between a wave of new state AI laws and the…
April 2025 AI Developments Under the Trump Administration
This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration. This blog describes AI actions taken by the Trump Administration in April 2025, and prior articles in this series are available here.
White House OMB Issues AI Use & Procurement Requirements for Federal Agencies
On April 3, the White House Office of Management & Budget (“OMB”) issued two memoranda on the use and procurement of AI by federal agencies: Memorandum M-25-21 on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (“OMB AI Use Memo”) and Memorandum M-25-22 on Driving Efficient Acquisition of Artificial Intelligence in Government (“OMB AI Procurement Memo”). The two memos partially implement President Trump’s January 23 Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence,” which, among other things, directs OMB to revise the Biden OMB AI Memos to align with the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.” The OMB AI Use Memo outlines agency governance and risk management requirements for the use of AI, including AI use case inventories and generative AI policies, and establishes “minimum risk management practices” for “high-impact AI use cases.” The OMB AI Procurement Memo establishes requirements for agency AI procurement, including preferences for AI “developed and produced in the United States” and contract terms to protect government data and prevent vendor lock-in. According to the White House’s fact sheet, the OMB Memos, which rescind and replace AI use and procurement memos issued under President Biden’s Executive Order 14110, shift U.S. AI policy to a “forward-leaning, pro-innovation, and pro-competition mindset” that will make agencies “more agile, cost-effective, and efficient.”
U.S. Tech Legislative & Regulatory Update – First Quarter 2025
This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain.
I. Artificial Intelligence
A. Federal Legislative Developments
In the first quarter, members of Congress introduced several AI bills addressing national security, including bills that would encourage the use of AI for border security and drug enforcement purposes. Other AI legislative proposals focused on workforce skills, international investment in critical industries, U.S. AI supply chain resilience, and AI-enabled fraud. Notably, members of Congress from both parties advanced legislation to regulate AI deepfakes and codify the National AI Research Resource, as discussed below.
- Deepfake Regulation: In February, the Senate passed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (“TAKE IT DOWN”) Act (S. 146), which it had previously passed unanimously in 2024. The Act would prohibit the nonconsensual disclosure of AI-generated intimate imagery and require platforms to remove such content published on their platforms. The House version of the TAKE IT DOWN Act (H.R. 633) has been referred to the House Energy & Commerce Committee.
- CREATE AI Act: In March, Reps. Jay Obernolte (R-CA) and Don Beyer (D-VA) re-introduced the Creating Resources for Every American To Experiment with Artificial Intelligence (“CREATE AI”) Act (H.R. 2385), following its introduction and near passage in the Senate last year. The CREATE AI Act would codify the National AI Research Resource (“NAIRR”), with the goal of advancing AI development and innovation by offering AI computational resources, common datasets and repositories, educational tools and services, and AI testbeds to individuals, private entities, and federal agencies. The CREATE AI Act builds on the work of the NAIRR Task Force, established by the National AI Initiative Act of 2020, which issued a final report in January 2023 recommending the establishment of NAIRR.
March 2025 AI Developments Under the Trump Administration
This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration. This blog describes AI actions taken by the Trump Administration in March 2025, and prior articles in this series are available here.
White House Receives Public Comments on AI Action Plan
On March 15, the White House Office of Science & Technology Policy and the Networking and Information Technology Research and Development National Coordination Office within the National Science Foundation closed the comment period for public input on the White House’s AI Action Plan, following their February 6 Request for Information (“RFI”) on the AI Action Plan. As required by President Trump’s AI EO, the RFI called on stakeholders to identify the highest-priority policy actions for the new AI Action Plan, centered on 20 broad and non-exclusive topics for potential input, including data centers, data privacy and security, technical and safety standards, intellectual property, and procurement. The resulting AI Action Plan is intended to achieve the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”
The RFI resulted in 8,755 submitted comments, including submissions from nonprofit organizations, think tanks, trade associations, industry groups, academia, and AI companies. The final AI Action Plan is expected by July 2025.
NIST Launches New AI Standards Initiatives
OMB Issues First Trump 2.0-Era Requirements for AI Use and Procurement by Federal Agencies
On April 3, the White House Office of Management and Budget (“OMB”) released two memoranda with AI guidance and requirements for federal agencies, Memorandum M-25-21 on Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (“OMB AI Use Memo”) and Memorandum M-25-22 on Driving Efficient Acquisition of Artificial Intelligence in Government (“OMB AI Procurement Memo”). According to the White House’s fact sheet, the OMB AI Use and AI Procurement Memos (collectively, the “new OMB AI Memos”), which rescind and replace OMB memos on AI use and procurement issued under President Biden’s Executive Order 14110 (“Biden OMB AI Memos”), shift U.S. AI policy to a “forward-leaning, pro-innovation, and pro-competition mindset” that will make agencies “more agile, cost-effective, and efficient.” The new OMB AI Memos implement President Trump’s January 23 Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence” (the “AI EO”), which directs the OMB to revise the Biden OMB AI Memos to make them consistent with the AI EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”
Overall, the new OMB AI Memos build on the frameworks established under President Trump’s 2020 Executive Order 13960 on “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government” and the Biden OMB AI Memos. This is consistent with the AI EO, which noted that the Administration would “revise” the Biden OMB AI Memos “as necessary.” At the same time, the new OMB AI Memos include some significant differences from the Biden OMB’s approach in the areas discussed below (as well as other areas).
Senate Judiciary Subcommittee Holds Hearing on the “Censorship Industrial Complex”
On March 24, the Senate Judiciary Subcommittee on the Constitution held a hearing on the “Censorship Industrial Complex,” where senators and witnesses expressed divergent views on risks to First Amendment rights. Senator Eric Schmitt (R-MO), the Subcommittee Chair, began the hearing by warning that the “vast censorship enterprise that the Biden Administration built” has expanded into an “alliance of activists, academics, journalists, big tech companies, and federal bureaucrats” that uses “novel tools and technologies of the 21st century” to silence critics. Senator Peter Welch (D-VT), the Ranking Member of the Subcommittee, expressed skepticism about alleged censorship by the Biden Administration and social media companies, citing the Supreme Court’s 2024 opinion in Murthy v. Missouri, and accused the Trump Administration of causing “real suppression of free speech.”
The witnesses at the hearing, including law professors, journalists, and an attorney from the Reporters Committee for Freedom of the Press, expressed contrasting views on the state of free expression and risks of censorship. Mollie Hemingway, the Editor-in-Chief of The Federalist, argued that federal and state governments “fund and promote censorship and blacklisting technology” to undermine free speech in coordination with universities, non-profit entities, and technology companies. Jonathan Turley, a George Washington University law professor, and Benjamin Weingarten, an investigative journalist, raised similar censorship concerns, with Turley arguing that a “cottage industry of disinformation experts” had “monetized censorship” and adding that the EU’s Digital Services Act presents a “new, emerging threat” to First Amendment rights.
California Frontier AI Working Group Issues Report on Foundation Model Regulation
On March 18, the Joint California Policy Working Group on AI Frontier Models (the “Working Group”) released its draft report on the regulation of foundation models, with the aim of providing an “evidence-based foundation for AI policy decisions” in California that “ensure[s] these powerful technologies benefit society globally while reasonably managing emerging risks.” The Working Group was established by California Governor Gavin Newsom (D) in September 2024, following his veto of the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), authored by California State Senator Scott Wiener (D-San Francisco). The Working Group builds on California’s partnership with Stanford University and the University of California, Berkeley, established by Governor Newsom’s 2023 Executive Order on generative AI.
Noting that “foundation model capabilities have rapidly improved” since the veto of SB 1047 and that California’s “unique opportunity” to shape AI governance “may not remain open indefinitely,” the report assesses transparency, third-party risk assessment, and adverse event reporting requirements as key components of foundation model regulation.