On January 26, 2023, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework (the “Framework”) guidance document, alongside a companion AI RMF Playbook that suggests ways to navigate and use the Framework. The goal of the Framework is to provide a resource to organizations “designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.” NIST aims for the Framework to offer a practical resource that can be adapted as AI technologies continue to develop. The Framework’s release follows earlier drafts and opportunities for public comment: an initial draft was released in March 2022 and a second draft in August 2022, prior to the official launch of version 1.0 of the Framework (NIST AI 100-1).
Below, we briefly summarize the Framework’s two parts: (1) Foundational Information and (2) Core and Profiles.
Part I: Foundational Information. Part I discusses how organizations can frame AI-related risks and outlines the characteristics of trustworthy AI systems. There are four sections:
- Framing Risk. The Framework offers approaches to understand and address risks in AI systems in order to “unleash potential benefits” to people, organizations, and systems. The Framework defines risk as the “composite measure of (1) an event’s probability of occurring and (2) the magnitude or degree of the consequences of the corresponding event” (a simple illustration of this composite measure follows this list). The Framework recognizes that quantitatively or qualitatively measuring the risk involved with an AI system can be difficult in practice; for example, measuring risk at an earlier stage in the AI lifecycle may yield different results than measuring it at a later stage. Additionally, the Framework clarifies that it is intended to provide a process to prioritize and address risk, but does not prescribe risk tolerance (i.e., an organization’s or AI actor’s “readiness to bear the risk in order to achieve its objectives”) for an organization’s approach to AI systems. The Framework also recognizes that “not all AI risks are the same” and that attempting to eliminate risk entirely can be “counterproductive” because not all failures can be addressed; instead, the Framework provides a means to prioritize the risk areas an organization will want to target.
- Audience. The Framework identifies AI actors across the AI lifecycle as its primary audience, and notes that successful risk management depends upon a sense of collective responsibility and requires diverse perspectives, disciplines, professions, and experiences.
- AI Risks and Trustworthiness. The Framework reflects that, for AI systems to be trustworthy, they must incorporate criteria that are of value to interested parties. The characteristics of trustworthy AI systems outlined by the Framework are: (1) valid and reliable, (2) safe, (3) secure and resilient, (4) accountable and transparent, (5) explainable and interpretable, (6) privacy-enhanced, and (7) fair with harmful bias managed. The Framework describes “valid and reliable” as a necessary characteristic, while the other characteristics can be traded off against one another. The Framework underscores that creating a trustworthy AI system requires balancing each of these characteristics based on the AI system’s context of use.
- Effectiveness. NIST, in conjunction with the AI community, will periodically evaluate the Framework to measure its effectiveness and will suggest further updates.
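To make the Framework’s definition of risk more concrete, the sketch below shows one simple way an organization might combine an event’s probability with the magnitude of its consequences into a single score. This is purely illustrative and is not part of the Framework, which does not prescribe any scoring formula, scale, or risk tolerance; the 1-to-5 scales and the multiplication used here are hypothetical choices.

```python
# Illustrative only: a hypothetical likelihood-times-impact score.
# The NIST AI RMF does not prescribe any particular risk formula or scale.
from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str
    likelihood: int   # hypothetical scale: 1 (rare) to 5 (almost certain)
    magnitude: int    # hypothetical scale: 1 (negligible) to 5 (severe)

    def score(self) -> int:
        # Composite of the event's probability and the magnitude of its consequences
        return self.likelihood * self.magnitude

risks = [
    AIRisk("Harmful bias in lending-model outputs", likelihood=3, magnitude=5),
    AIRisk("Model performance drift after deployment", likelihood=4, magnitude=3),
]

# Prioritize the risk areas the organization may want to target first
for risk in sorted(risks, key=lambda r: r.score(), reverse=True):
    print(f"{risk.score():>2}  {risk.description}")
```

How such scores are interpreted, and which risks are prioritized, would in practice depend on the organization’s own risk tolerance and the AI system’s context of use.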
Part II: Core and Profiles. Part II of the Framework describes four functions that together comprise the “Core” of managing AI risks: govern, map, measure, and manage. Part II also provides an overview of Profiles, which offer examples of how certain types of organizations manage AI risk. These four functions, and the manner in which organizations can best employ them to manage AI risks, are summarized below.
- Govern. NIST comments that govern is a “cross-cutting function that is infused throughout AI risk management and enables the other functions of the process.” This function, among other things, cultivates and implements a culture of risk management within organizations and incorporates processes to assess potential impacts. This function includes activities such as creating accountability structures around AI systems (e.g., policies, processes, and procedures for AI systems; empowering teams responsible for AI systems); promoting workforce diversity, equity, inclusion, and accessibility; and implementing processes for robust engagement with relevant AI actors.
- Map. The map function is intended to contextualize and frame risks related to an AI system. Recognizing that the AI system lifecycle consists of numerous activities and a diverse set of actors, this function includes activities such as establishing the context for the AI system (including its intended purposes); categorizing the AI system; identifying the system’s capabilities and risks; and identifying impacts to individuals, groups, communities, and organizations. The Framework emphasizes that the map function benefits from diverse perspectives and engagement with external actors, which help develop more trustworthy AI systems by improving an organization’s capacity to understand contexts and anticipate risks from unintended uses of AI systems.
- Measure. The measure function employs quantitative, qualitative, or mixed-method techniques to analyze and monitor AI risk and related impacts. Measuring assists when tradeoffs among trustworthy characteristics arise by providing a traceable basis to inform management decisions. The measure function includes evaluating AI systems for trustworthy characteristics (e.g., security and resilience, privacy risk, fairness and bias risk, environmental impact); creating mechanisms to track identified AI risks over time; and gathering feedback about the efficacy of measurement over time.
- Manage. This function involves regularly allocating risk management resources to mapped and measured risks. The Framework includes activities associated with the manage function, which include prioritizing identified risks; identifying strategies to maximize AI benefits and minimize negative impacts; managing risks stemming from third-party AI systems; and identifying risk treatments, including response and recovery plans. A simple sketch of how mapped and measured risks might be tracked over time follows this list.
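As a rough illustration of how the map, measure, and manage functions could fit together operationally, the sketch below records a mapped risk, tracks measurements of it over time, and notes a chosen risk treatment. The data structure and field names are hypothetical; the Framework does not specify any particular tooling or format for tracking AI risks.

```python
# Illustrative only: a minimal, hypothetical record for tracking mapped and
# measured AI risks over time. The Framework does not specify any particular
# data structure, fields, or tooling.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    risk: str                                          # mapped risk, in its context of use
    measurements: list = field(default_factory=list)   # (date, score) pairs over time
    treatment: str = "not yet determined"              # e.g., mitigate, transfer, avoid, accept

    def record(self, score: float) -> None:
        # Track the identified risk over time (measure function)
        self.measurements.append((date.today(), score))

# Map: identify a risk in context; Measure: score it over time; Manage: choose a treatment
register = [RiskEntry("Privacy risk from retention of training data")]
register[0].record(score=12)
register[0].treatment = "mitigate: minimize and de-identify retained training data"
```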
The Framework’s last section discusses the use of Profiles, which provide illustrative examples of how the Framework can be implemented for a specific setting or application based on the requirements, risk tolerance, and resources of an organization. As a result, these profiles may assist organizations in deciding how they might best manage AI risk or consider legal and regulatory requirements.