On March 28, 2024, the White House Office of Management and Budget (OMB) released guidance on governance and risk management for federal agency use of artificial intelligence (AI).  The guidance was issued in furtherance of the October 2023 White House AI Executive Order, which established goals to promote the safe, secure, and trustworthy use and development of AI systems.

The OMB guidance—Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence—defines AI broadly to include machine learning and, among other things, “[a]ny artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.”  It directs federal agencies and departments to address risks from the use of AI, expand public transparency, advance responsible AI innovation, grow an AI-focused talent pool and workforce, and strengthen AI governance systems.  Federal agencies must implement prescribed safeguard practices no later than December 1, 2024.

More specifically, the guidance includes a number of requirements for federal agencies, including:

  • Expanded Governance:  Within 60 days, the guidance requires agencies to designate Chief AI Officers responsible for coordinating agency use of AI, promoting AI adoption, and managing risk.  It also requires each agency to convene an AI governance body within 60 days.  Within 180 days, agencies must submit to OMB, and release publicly, a plan to achieve consistency with OMB’s guidance.
  • Inventories:  Each agency (except the Department of Defense and those that comprise the intelligence community) is required to inventory its AI use cases at least annually and submit a report to OMB.  Some use cases are exempt from being reported individually, but agencies still must report aggregate metrics about those use cases to OMB if otherwise in scope.  The guidance states that OMB will later issue “detailed instructions” for these reports.
  • Removing Barriers to Use of AI:  The guidance focuses on removing barriers to the responsible use of AI, including by ensuring that adequate infrastructure exists for AI projects and that agencies have sufficient capacity to manage data used for training, testing, and operating AI.  As part of this, the guidance states that agencies “must proactively share their custom-developed code — including models and model weights — for AI applications in active use and must release and maintain that code as open-source software on a public repository,” subject to some exceptions (e.g., if sharing is restricted by law or by a contractual obligation).
  • Special Requirements for FedRAMP:  The guidance calls for updates to the Federal Risk and Authorization Management Program (FedRAMP), which generally applies to cloud services sold to the U.S. Government.  Specifically, the guidance requires agencies to update authorization processes for FedRAMP services, including by advancing continuous authorizations (as distinct from annual authorizations) for services with AI.  The guidance also encourages agencies to prioritize critical and emerging technologies and generative AI when issuing Authorizations to Operate (ATOs).
  • Risk Management:  For certain “safety-impacting” and “rights-impacting” AI use cases, some agencies will need to adopt minimum risk management practices.  These include the completion of an AI impact assessment that examines, for example, the intended purpose for AI, expected benefits, and potential risks.  The minimum practices also require the agency to test AI for performance in a real-world context and conduct ongoing monitoring of the system.  Among other requirements, the agency will be responsible for identifying and assessing AI’s impact on equity and fairness and taking steps to mitigate algorithmic discrimination when present.  The guidance presents these practices as initial baseline tasks and requires agencies to identify additional context-specific risks for relevant use cases, to be addressed by applying best practices for AI risk management, such as those from the National Institute of Standards and Technology (NIST) AI Risk Management Framework.  The guidance also calls for human oversight of safety- and rights-impacting AI decision making and remedy processes for affected individuals.  Agencies must implement these minimum practices no later than December 1, 2024.

Separately but relatedly, on March 29, 2024, OMB issued a request for information (RFI) to inform future action governing the responsible procurement of AI under federal contracts.  The RFI seeks responses to several questions designed to provide OMB with information to enable it and federal agencies to craft contract language and requirements that will further agency AI use and innovation while managing its risks and performance.  Responses to these questions, as well as any other comments on the subject, are due by April 29, 2024.

Yaron Dori

Yaron Dori has over 25 years of experience advising technology, telecommunications, media, life sciences, and other types of companies on their most pressing business challenges. He is a former chair of the firm’s technology, communications and media practices and currently serves on the firm’s eight-person Management Committee.

Yaron’s practice advises clients on strategic planning, policy development, transactions, investigations and enforcement, and regulatory compliance.

Early in his career, Yaron advised telecommunications companies and investors on regulatory policy and frameworks that led to the development of broadband networks. When those networks became bidirectional and enabled companies to collect consumer data, he advised those companies on their data privacy and consumer protection obligations. Today, as new technologies such as Artificial Intelligence (AI) are being used to enhance the applications and services offered by such companies, he advises them on associated legal and regulatory obligations and risks. It is this varied background – which tracks the evolution of the technology industry – that enables Yaron to provide clients with a holistic, 360-degree view of technology policy, regulation, compliance, and enforcement.

Yaron represents clients before federal regulatory agencies—including the Federal Communications Commission (FCC), the Federal Trade Commission (FTC), and the Department of Commerce (DOC)—and the U.S. Congress in connection with a range of issues under the Communications Act, the Federal Trade Commission Act, and similar statutes. He also represents clients on state regulatory and enforcement matters, including those that pertain to telecommunications, data privacy, and consumer protection regulation. His deep experience in each of these areas enables him to advise clients on a wide range of technology regulations and key business issues in which these areas intersect.

With respect to technology and telecommunications matters, Yaron advises clients on a broad range of business, policy, and consumer-facing issues, including:

  • Artificial Intelligence and the Internet of Things;
  • Broadband deployment and regulation;
  • IP-enabled applications, services and content;
  • Section 230 and digital safety considerations;
  • Equipment and device authorization procedures;
  • The Communications Assistance for Law Enforcement Act (CALEA);
  • Customer Proprietary Network Information (CPNI) requirements;
  • The Cable Privacy Act;
  • Net Neutrality; and
  • Local competition, universal service, and intercarrier compensation.

Yaron also has extensive experience in structuring transactions and securing regulatory approvals at both the federal and state levels for mergers, asset acquisitions and similar transactions involving large and small FCC and state communication licensees.

With respect to privacy and consumer protection matters, Yaron advises clients on a range of business, strategic, policy, and compliance issues, including those that pertain to:

  • The FTC Act and related agency guidance and regulations;
  • State privacy laws, such as the California Consumer Privacy Act (CCPA) and California Privacy Rights Act, the Colorado Privacy Act, the Connecticut Data Privacy Act, the Virginia Consumer Data Protection Act, and the Utah Consumer Privacy Act;
  • The Electronic Communications Privacy Act (ECPA);
  • Location-based services that use WiFi, beacons or similar technologies;
  • Digital advertising practices, including native advertising and endorsements and testimonials; and
  • The application of federal and state telemarketing, commercial fax, and other consumer protection laws, such as the Telephone Consumer Protection Act (TCPA), to voice, text, and video transmissions.

Yaron also has experience advising companies on congressional, FCC, FTC and state attorney general investigations into various consumer protection and communications matters, including those pertaining to social media influencers, digital disclosures, product discontinuance, and advertising claims.

Robert Huffman

Bob Huffman counsels government contractors on emerging technology issues, including artificial intelligence (AI), cybersecurity, and software supply chain security, that are currently affecting federal and state procurement. His areas of expertise include the Department of Defense (DOD) and other agency acquisition regulations governing information security and the reporting of cyber incidents, the proposed Cybersecurity Maturity Model Certification (CMMC) program, the requirements for secure software development self-attestations and software bills of materials (SBOMs) emanating from the May 2021 Executive Order on Cybersecurity, and the various requirements for responsible AI procurement, safety, and testing currently being implemented under the October 2023 AI Executive Order.

Bob also represents contractors in False Claims Act (FCA) litigation and investigations involving cybersecurity and other technology compliance issues, as well as more traditional government contracting cost, quality, and regulatory compliance issues. These investigations include significant parallel civil/criminal proceedings growing out of the Department of Justice’s Cyber Fraud Initiative. They also include investigations resulting from FCA qui tam lawsuits and other enforcement proceedings. Bob has represented clients in over a dozen FCA qui tam suits.

Bob also regularly counsels clients on government contracting supply chain compliance issues, including those arising under the Buy American Act/Trade Agreements Act and Section 889 of the FY2019 National Defense Authorization Act. In addition, Bob advises government contractors on rules relating to IP, including government patent rights, technical data rights, rights in computer software, and the rules applicable to IP in the acquisition of commercial products, services, and software. He focuses this aspect of his practice on the overlap of these traditional government contracts IP rules with the IP issues associated with the acquisition of AI services and the data needed to train the large language models on which those services are based.

Bob writes extensively in the areas of procurement-related AI, cybersecurity, software security, and supply chain regulation. He also teaches a course at Georgetown Law School that focuses on the technology, supply chain, and national security issues associated with energy and climate change.

Ryan Burnette

Ryan Burnette advises defense and civilian contractors on federal contracting compliance and on civil and internal investigations that stem from these obligations. Ryan has particular experience with clients that hold defense and intelligence community contracts and subcontracts, and has recognized expertise in national security-related matters, including those that relate to federal cybersecurity and federal supply chain security. Ryan also advises on government cost accounting, FAR and DFARS compliance, public policy matters, and agency disputes. He speaks and writes regularly on government contracts and cybersecurity topics, drawing significantly on his prior experience in government to provide insight on the practical implications of regulations.

Jayne Ponder

Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.

Vanessa Lauber

Vanessa Lauber is an associate in the firm’s New York office and a member of the Data Privacy and Cybersecurity Practice Group, counseling clients on data privacy and emerging technologies, including artificial intelligence.

Vanessa’s practice includes partnering with clients on compliance with federal and state privacy laws and FTC and consumer protection laws and guidance. Additionally, Vanessa routinely counsels clients on drafting and developing privacy notices and policies. Vanessa also advises clients on trends in artificial intelligence regulations and helps design governance programs for the development and deployment of artificial intelligence technologies across a number of industries.