U.S. federal agencies and working groups promulgated a number of issuances in January 2023 related to the development and use of artificial intelligence (“AI”) systems.  These updates join proposals in Congress to pass legislation related to AI.  Specifically, in January 2023, the Department of Defense (“DoD”) updated Department of Defense Directive 3000.09; the National Artificial Intelligence Research Resource (“NAIRR”) Task Force released its final report on AI; and the National Institute of Standards and Technology (“NIST”) released its AI Risk Management Framework, each discussed below.

Department of Defense Directive 3000.09

On January 25, 2023, the DoD updated Directive 3000.09, “Autonomy in Weapon Systems,” which governs the development and fielding of autonomous and semi-autonomous weapon systems, including those systems that incorporate AI technologies.  The Directive has three primary purposes:  (1) establishing a policy and assigning responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems; (2) establishing guidelines designed to minimize the probability and consequences of failures in such systems; and (3) establishing the “Autonomous Weapon Systems Working Group.”  For example, the Directive provides that autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators “to exercise appropriate levels of human judgment” over the use of force, and that these systems must be subject to verification and validation testing to build confidence in the weapon system’s operation.  The Directive also underscores that design and development of AI capabilities in autonomous and semi-autonomous weapon systems must be consistent with the DoD’s AI Ethical Principles – specifically, that the AI is: (1) responsible; (2) equitable; (3) traceable; (4) reliable; and (5) governable.  The Directive outlines a number of roles and responsibilities regarding oversight for autonomous and semi-autonomous weapon systems and provides guidance as to when senior review and approval are required to use these types of systems.  Directive 3000.09 and the DoD’s AI Ethical Principles will be important for entities working with, and providing AI-enabled tools and services to, the DoD.

NAIRR Task Force Report

In the National AI Initiative Act of 2020, Congress directed the National Science Foundation and the White House Office of Science and Technology Policy to establish a task force to develop options for providing researchers and students with access to resources for AI research and development.  As part of these efforts, Congress directed these organizations to create a roadmap for a National Artificial Intelligence Research Resource (“NAIRR”).  On January 24, 2023, the NAIRR Task Force released its final report, which presents a roadmap and implementation plan for a national cyberinfrastructure aimed at maximizing the development of AI and realizing the benefits of this technology in society.  The report’s key recommendations include:

  • Establishing NAIRR with four measurable goals: (1) to spur innovation, (2) to increase diversity of talent, (3) to improve capacity, and (4) to advance trustworthy AI.
  • Implementing NAIRR over four phases: (1) program launch and operating entity selection, (2) operating entity startup, (3) NAIRR initial operating capability, and (4) NAIRR ongoing operations.  As contemplated, NAIRR would be operational “no later than 21 months” from launch of the program and fully implemented in year 3 of the program.  The report’s implementation plan proposes a pilot program to make AI research resources available to AI R&D communities while full implementation proceeds.
  • Requiring $2.6 billion in funding for NAIRR over a six-year period to meet the national need for resources to fuel AI innovation.
  • Ensuring that NAIRR is “broadly accessible” to a wide range of users—lowering barriers to participation in AI research and increasing the diversity of AI researchers.  Access would be provided via an integrated portal and must include computational resources (both conventional servers and cloud computing), data resources, and testing tools.

NIST AI Risk Management Framework

As covered in our prior blog posts here and here, on January 26, 2023, the U.S. Department of Commerce’s NIST released its Artificial Intelligence Risk Management Framework (“RMF”) guidance document, together with a companion AI RMF Playbook that suggests ways to navigate and use the Framework.  The RMF provides a voluntary set of principles and processes for organizations to follow to identify and minimize risks in the design and use of AI systems.  Governance around the use of AI, including policies, processes, and diverse teams to advise on AI development and use, is of particular importance to the RMF.  Additionally, the RMF suggests that organizations should evaluate the risks presented by an AI system, taking into account the context of use, and consider how best to mitigate those risks.  We will continue to monitor these and other AI-related developments across our blogs.

Jayne Ponder

Jayne Ponder counsels national and multinational companies across industries on data privacy, cybersecurity, and emerging technologies, including Artificial Intelligence and Internet of Things.

In particular, Jayne advises clients on compliance with federal, state, and global privacy frameworks, and counsels clients on navigating the rapidly evolving legal landscape. Her practice includes partnering with clients on the design of new products and services, drafting and negotiating privacy terms with vendors and third parties, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of Artificial Intelligence and Internet of Things technologies.

Jayne routinely represents clients in privacy and consumer protection enforcement actions brought by the Federal Trade Commission and state attorneys general, including related to data privacy and advertising topics. She also helps clients articulate their perspectives through the rulemaking processes led by state regulators and privacy agencies.

As part of her practice, Jayne advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Robert Huffman

Bob Huffman counsels government contractors on emerging technology issues, including artificial intelligence (AI), cybersecurity, and software supply chain security, that are currently affecting federal and state procurement. His areas of expertise include the Department of Defense (DOD) and other agency acquisition regulations governing information security and the reporting of cyber incidents, the Cybersecurity Maturity Model Certification (CMMC) program, the requirements for secure software development self-attestations and bills of materials (SBOMs) emanating from the May 2021 Executive Order on Cybersecurity, and the various requirements for responsible AI procurement, safety, and testing currently being implemented under the October 2023 AI Executive Order. 

Bob also represents contractors in False Claims Act (FCA) litigation and investigations involving cybersecurity and other technology compliance issues, as well as more traditional government contracting cost, quality, and regulatory compliance issues. These investigations include significant parallel civil/criminal proceedings growing out of the Department of Justice’s Cyber Fraud Initiative. They also include investigations resulting from False Claims Act qui tam lawsuits and other enforcement proceedings. Bob has represented clients in over a dozen FCA qui tam suits.

Bob also regularly counsels clients on government contracting supply chain compliance issues, including those arising under the Buy American Act/Trade Agreements Act and Section 889 of the FY2019 National Defense Authorization Act. In addition, Bob advises government contractors on rules relating to IP, including government patent rights, technical data rights, rights in computer software, and the rules applicable to IP in the acquisition of commercial products, services, and software. He focuses this aspect of his practice on the overlap of these traditional government contracts IP rules with the IP issues associated with the acquisition of AI services and the data needed to train the large language models on which those services are based.

Bob is ranked by Chambers USA for his work in government contracts and he writes extensively in the areas of procurement-related AI, cybersecurity, software security, and supply chain regulation. He also teaches a course at Georgetown Law School that focuses on the technology, supply chain, and national security issues associated with energy and climate change.

Stephanie Barna

Stephanie Barna draws on over three decades of U.S. military and government service to provide advisory and advocacy support and counseling to clients facing policy and political challenges in the aerospace and defense sectors.

Prior to joining the firm, Stephanie was a senior leader on Capitol Hill and in the U.S. Department of Defense (DoD). Most recently, she was General Counsel of the Senate Armed Services Committee, where she was responsible for the annual $740 billion National Defense Authorization Act (NDAA). Additionally, she managed the Senate confirmation of three- and four-star military officers and civilians nominated by the President for appointment to senior political positions in DoD and the Department of Energy’s national security nuclear enterprise, and was the Committee’s lead for investigations.

Previously, as a senior executive in the Office of the Army General Counsel, Stephanie served as a legal advisor to three Army Secretaries. In 2014, Secretary of Defense Chuck Hagel appointed her to be the Principal Deputy Assistant Secretary of Defense for Manpower and Reserve Affairs. In that role, she was a principal advisor to the Secretary of Defense on all matters relating to civilian and military personnel, reserve integration, military community and family policy, and Total Force manpower and resources. Stephanie was later appointed by Secretary of Defense Jim Mattis to perform the duties of the Under Secretary of Defense for Personnel and Readiness, responsible for programs and funding of more than $35 billion.

Stephanie was also previously the Deputy General Counsel for Operations and Personnel in the Office of the Army General Counsel. She led a team of senior lawyers in resolving the full spectrum of issues arising from Army wartime operations and the life cycle of Army military and civilian personnel. Stephanie was also a personal advisor to the Army Secretary on his institutional reorganization and business transformation initiatives and acted for the Secretary in investigating irregularities in fielding of the Multiple Launch Rocket System and classified contracts. She also played a key role in a number of high-profile personnel investigations, including the WikiLeaks breach. Prior to her appointment as Deputy, she was Associate Deputy General Counsel (Operations and Personnel) and Acting Deputy General Counsel.

Stephanie is a retired Colonel in the U.S. Army and served in the U.S. Army Judge Advocate General’s Corps as an Assistant to the General Counsel, Office of the Army General Counsel; Deputy Staff Judge Advocate, U.S. Army Special Forces Command (Airborne); Special Assistant to the Assistant Secretary of the Army (Manpower & Reserve Affairs); and General Law Attorney, Administrative Law Division.

Stephanie was selected by the National Academy of Public Administration for inclusion in its 2022 Class of Academy Fellows, in recognition of her years of public administration service and expertise.

Jorge Ortiz

Jorge Ortiz is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity and the Technology and Communications Regulation Practice Groups.

Jorge advises clients on a broad range of privacy and cybersecurity issues, including topics related to privacy policies and compliance obligations under U.S. state privacy regulations like the California Consumer Privacy Act.