The Biden administration’s October 2023 Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Order”) sets out an extensive list of deadlines for the various federal agencies tasked with implementing the Order’s requirements. 

We previously summarized the Order, compared its requirements with those of the EU’s AI Act, and identified initial implementation steps.  This post highlights the Order’s key actions with implementation deadlines during 1Q24. 

By Late January 2024:  The Order requires the following key actions to be completed within 90 days of its late October 2023 issuance, i.e., before the end of this month.

  • The Secretary of Commerce is expected to issue rules requiring companies to make disclosures to the government when they develop or intend to develop certain large and highly capable AI models.  The Order requires that such disclosures include information about (a) the nature and security of development activities; (b) the developer’s control of model weights, which are key data that enable a model to function; and (c) how the model performs in red-team testing based on not-yet-published National Institute of Standards and Technology guidelines.  The requirements apply only to “dual-use foundation models” that meet certain technical requirements that can be modified by the Secretary of Commerce. 
  • The Secretary of Commerce is expected to issue rules requiring companies to make disclosures to the government when they “acquire, develop, or possess a potential large-scale computing cluster.”  Large-scale computing clusters are those that have “a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and . . . a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI,” though this technical threshold can be modified by the Secretary of Commerce. 
  • The Secretary of Commerce is expected to propose regulations requiring U.S. Infrastructure as a Service (IaaS) providers to report certain information about foreign customers.  Among other things, the proposed regulations must require that U.S. IaaS providers report to the Secretary when “a foreign person transacts with that [U.S.] IaaS [p]rovider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity.”  
  • The Attorney General is expected to hold a meeting with federal civil rights officials to discuss ways to prevent AI-related discrimination.  Attendees are to include the heads of federal civil rights offices.
  • The Secretary of Health and Human Services (“HHS”) is expected to establish an HHS AI Task Force.  The Task Force will be responsible for implementing requirements of the Order, including by developing a strategic plan for “responsible deployment and use of AI and AI-enabled technologies in the health and human services sector.”
  • Agency heads with regulatory authority over critical infrastructure are expected to provide the Secretary of Homeland Security with an assessment of AI-related risks to critical infrastructure.  Under the Order, agencies are expected to repeat this assessment on at least an annual basis.

By Later This Quarter:  The Order also contemplates actions that are expected to take place later this quarter.

  • The Director of the Office of Management and Budget (OMB) is expected to release by late March finalized guidance on responsible adoption of AI technologies by federal agencies. OMB released a draft memorandum implementing this requirement in November, with the promise that finalized guidance would follow.  Among other things, the guidance will direct agencies to (a) designate a Chief AI Officer and implement coordination mechanisms for AI-related activities, (b) develop a strategy for responsibly integrating AI into agency activities, and (c) establish safeguards to ensure that AI is implemented securely and in a way that provides transparency to the public.  The draft applies additional requirements to “rights-impacting AI,” which it defines as “AI whose output serves as a basis for decision or action that has legal, material, or similarly significant effect” on certain specified interests (e.g., civil rights, privacy, access to resources or services) of an individual or community.
  • The Secretary of the Treasury is expected to issue a report by late March providing financial institutions with best practices for managing AI-related risks.  The report will focus on cybersecurity threats related to AI.