U.S. federal agencies and working groups promulgated a number of issuances in January 2023 related to the development and use of artificial intelligence (“AI”) systems. These updates join proposals in Congress to pass legislation related to AI. Specifically, in January 2023, the Department of Defense (“DoD”) updated Department of Defense Directive 3000.09; the National Artificial Intelligence Research Resource (“NAIRR”) Task Force released its final report on AI; and the National Institute of Standards and Technology (“NIST”) released its AI Risk Management Framework, each discussed below.
Department of Defense Directive 3000.09
On January 25, 2023, the DoD updated Directive 3000.09, “Autonomy in Weapon Systems,” which governs the development and fielding of autonomous and semi-autonomous weapon systems, including those systems that incorporate AI technologies. The Directive has three primary purposes: (1) establishing policy and assigning responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems; (2) establishing guidelines designed to minimize the probability and consequences of failures in such systems; and (3) establishing the “Autonomous Weapon Systems Working Group.” For example, the Directive provides that autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators “to exercise appropriate levels of human judgment” over the use of force, and that these systems must be subject to verification and validation testing to build confidence in the weapon system’s operation. The Directive also underscores that the design and development of AI capabilities in autonomous and semi-autonomous weapon systems must be consistent with the DoD’s AI Ethical Principles, which require that AI be: (1) responsible; (2) equitable; (3) traceable; (4) reliable; and (5) governable. The Directive outlines a number of roles and responsibilities regarding oversight for autonomous and semi-autonomous weapon systems and provides guidance as to when senior review and approval are required to use these types of systems. Directive 3000.09 and the DoD’s AI Ethical Principles will be important for entities working with the DoD and providing it with AI-enabled tools and services.
NAIRR Task Force Report
In the National AI Initiative Act of 2020, Congress directed the National Science Foundation and the White House Office of Science and Technology Policy to establish a task force to develop options for providing researchers and students with access to resources for AI research and development. As part of these efforts, Congress directed these organizations to create a roadmap for the National Artificial Intelligence Research Resource. On January 24, 2023, the NAIRR Task Force released its final report, which presents a roadmap and implementation plan for a national cyberinfrastructure aimed at accelerating the development of AI and realizing the benefits of this technology for society. The report’s key recommendations include:
- Establishing NAIRR with four measurable goals: (1) to spur innovation, (2) to increase diversity of talent, (3) to improve capacity, and (4) to advance trustworthy AI.
- Implementing NAIRR over four phases: (1) program launch and operating entity selection, (2) operating entity startup, (3) NAIRR initial operating capability, and (4) NAIRR ongoing operations. As contemplated, NAIRR would be operational “no later than 21 months” from launch of the program and fully implemented in year 3 of the program. The report’s implementation plan also proposes a pilot program to make AI research resources available to AI R&D communities while full implementation proceeds.
- Requiring $2.6 billion in funding for NAIRR over a six-year period to meet the national need for resources to fuel AI innovation.
- Ensuring that NAIRR is “broadly accessible” to a wide range of users, lowering barriers to participation in AI research and increasing the diversity of AI researchers. Access would be provided via an integrated portal and must include computational resources (both conventional servers and cloud computing), data resources, and testing tools.
NIST AI Risk Management Framework
As covered in our prior blog posts here and here, on January 26, 2023, the U.S. Department of Commerce’s NIST released its Artificial Intelligence Risk Management Framework (“RMF”) guidance document, together with a companion AI RMF Playbook that suggests ways to navigate and use the Framework. The RMF provides a voluntary set of principles and processes that organizations can follow to identify and minimize risks in the design and use of AI systems. Governance processes around the use of AI, including policies, procedures, and diverse teams that advise on AI development and use, are of particular importance to the RMF. Additionally, the RMF suggests that organizations evaluate the risks presented by an AI system, taking into account the context of use, and consider how best to mitigate those risks. We will continue to monitor these and other AI-related developments across our blogs.