On February 7, 2025, the OECD launched a voluntary framework for companies to report on their efforts to promote safe, secure, and trustworthy AI. This global reporting framework is intended to monitor and support the application of the International Code of Conduct for Organisations Developing Advanced AI Systems delivered by the 2023 G7 Hiroshima AI Process (“HAIP Code of Conduct”). Organizations can choose to comply with the HAIP Code of Conduct and participate in the HAIP reporting framework on a voluntary basis. The reporting framework will allow participating organizations that comply with the HAIP Code of Conduct to showcase their efforts toward responsible AI practices in a way that is standardized and comparable across companies. Continue Reading OECD Launches Voluntary Reporting Framework on AI Risk Management Practices
U.S. Federal and State Governments Moving Quickly to Restrict Use of DeepSeek
Last month, DeepSeek, an AI start-up based in China, grabbed headlines with claims that its latest large language AI model, DeepSeek-R1, could perform on par with more expensive, market-leading AI models despite allegedly requiring less than $6 million worth of computing power from older and less powerful chips. Although some industry observers have raised doubts about the validity of DeepSeek’s claims, its AI model and AI-powered application piqued the curiosity of many, leading the DeepSeek application to become the most downloaded in the United States in late January. DeepSeek was founded in July 2023 and is owned by High-Flyer, a hedge fund based in Hangzhou, Zhejiang.
The explosive popularity of DeepSeek, coupled with its Chinese ownership, has unsurprisingly raised data security concerns among U.S. federal and state officials. These concerns echo many of the same considerations that led to a FAR rule prohibiting telecommunications equipment and services from Huawei and certain other Chinese manufacturers. What is remarkable here is the pace at which officials at different levels of government—including the White House, Congress, federal agencies, and state governments—have taken action in response to DeepSeek and its perceived risks to national security. Continue Reading U.S. Federal and State Governments Moving Quickly to Restrict Use of DeepSeek
January 2025 AI Developments – Transitioning to the Trump Administration
This is the first in a new series of Covington blogs on the AI policies, executive orders, and other actions of the new Trump Administration. This blog describes key actions on AI taken by the Trump Administration in January 2025.
Outgoing President Biden Issues Executive Order and Data Center Guidance for AI Infrastructure
Before turning to the Trump Administration, we note one key AI development from the final weeks of the Biden Administration. On January 14, in one of his final acts in office, President Biden issued Executive Order 14141 on “Advancing United States Leadership in AI Infrastructure.” This EO, which remains in force, sets out requirements and deadlines for the construction and operation of “frontier AI infrastructure,” including data centers and clean energy facilities, by private-sector entities on federal land. Specifically, EO 14141 directs the Departments of Defense (“DOD”) and Energy (“DOE”) to lease federal lands for the construction and operation of AI data centers and clean energy facilities by the end of 2027, establishes solicitation and lease application processes for private sector applicants, directs federal agencies to take various steps to streamline and consolidate environmental permitting for AI infrastructure, and directs the DOE to take steps to update the U.S. electricity grid to meet the growing energy demands of AI. Continue Reading January 2025 AI Developments – Transitioning to the Trump Administration
Trump Administration Seeks Public Comment on AI Action Plan
On February 6, the White House Office of Science & Technology Policy (“OSTP”) and the National Science Foundation (“NSF”) issued a Request for Information (“RFI”) seeking public input on the “Development of an Artificial Intelligence Action Plan.” The RFI marks a first step toward the implementation of the Trump Administration’s January 23 Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence” (the “EO”). Specifically, the EO directs Assistant to the President for Science & Technology (and OSTP Director nominee) Michael Kratsios, White House AI & Crypto Czar David Sacks, and National Security Advisor Michael Waltz to “develop and submit to the President an action plan” to achieve the EO’s policy of “sustain[ing] and enhanc[ing] America’s global AI dominance” to “promote human flourishing, economic competitiveness, and national security.” Continue Reading Trump Administration Seeks Public Comment on AI Action Plan
AI Accessibility Software Provider Settles FTC Allegations
On January 3, 2025, the Federal Trade Commission (“FTC”) announced that it reached a settlement with accessiBe, a provider of AI-powered web accessibility software, to resolve allegations that the company violated Section 5 of the FTC Act concerning the marketing and stated efficacy of its software. Continue Reading AI Accessibility Software Provider Settles FTC Allegations
State Attorneys General Issue Guidance On Privacy & Artificial Intelligence
In a new post on the Inside Privacy blog, our colleagues discuss recent guidance from the attorneys general in Oregon and Connecticut interpreting their authority under their state comprehensive privacy statutes and related authorities. Specifically, the Oregon Attorney General’s guidance focuses on laws relevant for artificial intelligence (“AI”), and the Connecticut Attorney General’s guidance…
Continue Reading State Attorneys General Issue Guidance On Privacy & Artificial Intelligence
UK Government Proposes Copyright & AI Reform
In case you missed it before the holidays: on 17 December 2024, the UK Government published a consultation on “Copyright and Artificial Intelligence” in which it examines proposals to change the UK’s copyright framework in light of the growth of the artificial intelligence (“AI”) sector.
The Government sets out the following core objectives for a new copyright and AI framework:
- Support right holders’ control of their content and, specifically, their ability to be remunerated when AI developers use that content, such as via licensing regimes;
- Support the development of world-leading AI models in the UK, including by facilitating AI developers’ ability to access and use large volumes of online content to train their models; and
- Promote greater trust between the creative and AI sectors (and among consumers) by introducing transparency requirements on AI developers about the works they are using to train AI models, and potentially requiring AI-generated outputs to be labelled.
In this post, we consider some of the most noteworthy aspects of the Government’s proposal. Continue Reading UK Government Proposes Copyright & AI Reform
ICO Audit on AI Recruitment Tools
On November 6, 2024, the UK Information Commissioner’s Office (ICO) released its AI tools in recruitment audit outcomes report (“Report”). The Report documents the ICO’s findings from a series of consensual audit engagements conducted with AI tool developers and providers. The goal of this process was to assess compliance with data protection law, identify any risks or room for improvement, and provide recommendations for AI providers and recruiters. The audits covered the sourcing, screening, and selection stages of recruitment, but did not include AI tools used to process biometric data, or generative AI. This work follows the publication of the Responsible AI in Recruitment guide by the Department for Science, Innovation, and Technology (DSIT) in March 2024. Continue Reading ICO Audit on AI Recruitment Tools
U.S. AI Policy Expectations in the Trump Administration, GOP Congress, and the States
The results of the 2024 U.S. election are expected to have significant implications for AI legislation and regulation at both the federal and state level.
Like the first Trump Administration, the second Trump Administration is likely to prioritize AI innovation, R&D, national security uses of AI, and U.S. private sector investment and leadership in AI. Although recent AI model testing and reporting requirements established by the Biden Administration may be halted or revoked, efforts to promote private-sector innovation and competition with China are expected to continue. And while antitrust enforcement involving large technology companies may continue in the Trump Administration, more prescriptive AI rulemaking efforts such as those launched by the current leadership of the Federal Trade Commission (“FTC”) are likely to be curtailed substantially.
In the House and Senate, Republican majorities are likely to adopt priorities similar to those of the Trump Administration, with a continued focus on AI-generated deepfakes and prohibitions on the use of AI for government surveillance and content moderation.
At the state level, legislatures in California, Texas, Colorado, Connecticut, and others likely will advance AI legislation on issues ranging from algorithmic discrimination to digital replicas and generative AI watermarking.
This post covers the effects of the recent U.S. election on these areas and what to expect as we enter 2025. (Click here for our summary of the 2024 election implications on AI-related industrial policy and competition with China.) Continue Reading U.S. AI Policy Expectations in the Trump Administration, GOP Congress, and the States
FTC Settles Case Against Provider of AI-Enabled Security Systems
On Tuesday, November 26, the FTC released a proposed settlement order with Evolv Technologies, a provider of AI-enabled security screening systems. The FTC’s complaint in the matter alleged that Evolv violated Section 5 of the FTC Act by making “false or unsupported claims” about the capabilities of an AI-enabled screening system that it provides to schools and other venues. Specifically, the complaint asserts that Evolv misrepresented the extent to which the system would “detect weapons and ignore harmless items” more accurately and cost-effectively than traditional metal detectors.
The FTC positioned its action against Evolv as a continuation of its work under the previously announced “Operation AI Comply,” which we discussed here, to “ensure that AI marketing is truthful.” The complaint alleges that Evolv made “a very deliberate choice” to market its screening system as involving the use of AI, but that Evolv’s effort to position the screening system as a high-tech “weapons detection” system rather than a metal detector “is solely a marketing distinction, in that the only things that [the screening system’s] scanners detect are metallic, and its alarms can be set off by metallic objects that are not weapons.” Continue Reading FTC Settles Case Against Provider of AI-Enabled Security Systems