On January 26, 2023, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework (the “Framework”) guidance document, alongside a companion AI RMF Playbook that suggests ways to navigate and use the Framework.  The goal of the Framework is to provide a resource to organizations “designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.”  NIST aims for the Framework to offer a practical resource that can be adapted as AI technologies continue to develop.  Version 1.0 of the Framework (NIST AI 100-1) follows two earlier drafts and opportunities for public comment: an initial draft released in March 2022 and a second draft released in August 2022.

Below, we briefly summarize the Framework’s two parts: (1) Foundational Information and (2) Core and Profiles.

Part I: Foundational Information.  Part I discusses how organizations can frame AI-related risks and outlines the characteristics of trustworthy AI systems.  There are four sections:

  • Framing Risk.  The Framework offers approaches to understanding and addressing risks in AI systems in order to “unleash potential benefits” to people, organizations, and systems.  It defines risk as the “composite measure of (1) an event’s probability of occurring and (2) the magnitude or degree of the consequences of the corresponding event.”  The Framework recognizes that measuring this risk, whether quantitatively or qualitatively, can be difficult in practice; for example, measuring risk at an earlier stage in the AI lifecycle may yield different results than measuring it at a later stage.  The Framework also clarifies that it is intended to provide a process to prioritize and address risk, not to prescribe risk tolerance (i.e., an organization’s or AI actor’s “readiness to bear the risk in order to achieve its objectives”).  Finally, the Framework recognizes that “not all AI risks are the same” and that attempting to eliminate risk entirely can be “counterproductive” because not all failures can be addressed; instead, the Framework provides a means to prioritize the risk areas an organization will want to target (see the illustrative sketch following this list).
  • Audience.  The Framework is intended for the full range of “AI actors” across the AI lifecycle.  It notes that successful risk management depends upon a sense of collective responsibility and requires diverse perspectives, disciplines, professions, and experiences.
  • AI Risks and Trustworthiness.  The Framework reflects that, for AI systems to be trustworthy, they must incorporate criteria that are of value to interested parties.  The characteristics of trustworthy AI systems outlined by the Framework are: (1) valid and reliable, (2) safe, (3) secure and resilient, (4) accountable and transparent, (5) explainable and interpretable, (6) privacy-enhanced, and (7) fair with harmful bias managed.  The Framework describes “valid and reliable” as a “necessary . . . characteristic,” whereas the other characteristics can be balanced against one another.  It underscores that creating a trustworthy AI system requires balancing these characteristics based on the AI system’s context of use.
  • Effectiveness.  NIST, in conjunction with the AI community, will evaluate the Framework to measure its effectiveness and will suggest further updates.
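
To make the Framework’s composite definition of risk concrete, the sketch below scores hypothetical risks as the product of an event’s probability and the magnitude of its consequences, then ranks them for prioritization.  The 1-to-5 scales, the multiplication, and all names are illustrative assumptions; the Framework defines the composite measure but does not prescribe any scoring formula.

    # Illustrative only: the Framework defines risk as a composite of an
    # event's probability and the magnitude of its consequences, but it
    # does not prescribe a scoring formula. The 1-5 scales and the
    # multiplication below are assumptions for demonstration.
    from dataclasses import dataclass

    @dataclass
    class AIRisk:
        description: str
        probability: int  # assumed ordinal scale: 1 (rare) to 5 (frequent)
        magnitude: int    # assumed ordinal scale: 1 (negligible) to 5 (severe)

        @property
        def score(self) -> int:
            # One simple composite: probability x magnitude (an assumption,
            # not a NIST requirement).
            return self.probability * self.magnitude

    risks = [
        AIRisk("Harmful bias in a lending model", probability=3, magnitude=5),
        AIRisk("Model drift after deployment", probability=4, magnitude=2),
    ]

    # Consistent with the Framework's emphasis on prioritization over
    # elimination: rank the risks and target the highest-scoring ones first.
    for r in sorted(risks, key=lambda r: r.score, reverse=True):
        print(f"{r.score:>2}  {r.description}")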

Part II: Core and Profiles.  Part II of the Framework describes the four functions that together comprise the “Core” of managing AI risks – govern, map, measure, and manage – and provides an overview of Profiles, which illustrate how certain types of organizations manage AI risk.  The four functions, and the manner in which organizations can employ them to manage AI risks, are summarized below; a hypothetical sketch following the list illustrates how the functions might fit together in practice.

  • Govern.  NIST comments that govern is a “cross-cutting function that is infused throughout AI risk management and enables the other functions of the process.”  This function, among other things, cultivates and implements a culture of risk management within organizations and incorporates processes to assess potential impacts.  This function includes activities such as creating accountability structures around AI systems (e.g., policies, processes, and procedures for AI systems; empowering teams responsible for AI systems); promoting workforce diversity, equity, inclusion, and accessibility; and implementing processes for robust engagement with relevant AI actors.
  • Map.  The map function is intended to contextualize and frame risks related to an AI system.  Recognizing that the AI system lifecycle consists of numerous activities and a diverse set of actors, this function includes activities such as establishing the context for the AI system (including its intended purposes); categorizing the AI system; assessing its capabilities and risks; and identifying impacts to individuals, groups, communities, and organizations.  The Framework emphasizes that the map function benefits from diverse perspectives and engagement with external actors, which help develop more trustworthy AI systems by improving organizations’ capacity to understand contexts and to anticipate risks of unintended uses of AI systems.
  • Measure.  The measure function employs quantitative, qualitative, or mixed-method techniques to analyze and monitor AI risk and related impacts.  Measuring assists when tradeoffs among trustworthy characteristics arise by providing a traceable basis to inform management decisions.  The measure function includes evaluating AI systems for trustworthy characteristics (e.g., security and resilience, privacy risk, fairness and bias risk, environmental impact); creating mechanisms to track identified AI risks over time; and gathering feedback about the efficacy of measurement over time. 
  • Manage.  The manage function involves allocating risk-management resources to mapped and measured risks on a regular basis.  Activities associated with this function include prioritizing identified risks; identifying strategies to maximize AI benefits and minimize negative impacts; managing risks stemming from third-party AI systems; and identifying risk treatments, including response and recovery plans.
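
As a purely hypothetical illustration of how the four functions might fit together in practice, the sketch below threads a single risk through map, measure, and manage as an entry in a risk register, with govern understood as the surrounding organizational policy rather than a field in the record.  The data model, field names, and example values are assumptions; the Framework describes outcomes, not data schemas.

    # Hypothetical risk-register entry organized around the Core functions.
    # The schema is an assumption for illustration, not a NIST requirement.
    from dataclasses import dataclass, field

    @dataclass
    class RiskRegisterEntry:
        # Map: contextualize and frame the risk.
        system: str
        intended_purpose: str
        identified_risk: str
        # Measure: quantitative/qualitative tracking over time.
        measurements: list[str] = field(default_factory=list)
        # Manage: prioritized treatment, including response plans.
        priority: str = "unranked"
        treatment: str = "none"

    entry = RiskRegisterEntry(
        system="resume-screening model",
        intended_purpose="shortlist job applicants",
        identified_risk="disparate impact on protected groups",
    )
    entry.measurements.append("2023-Q1 fairness audit: selection-rate gap of 8%")
    entry.priority = "high"
    entry.treatment = "retrain on reweighted data; human review of rejections"

    # Govern, the cross-cutting function, would live in organizational policy
    # (e.g., who owns this register and how often entries are reviewed)
    # rather than in the record itself.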

The Framework’s last section discusses the use of Profiles, which provide illustrative examples of how the Framework can be implemented for a specific setting or application based on an organization’s requirements, risk tolerance, and resources.  These Profiles may assist organizations in deciding how best to manage AI risk and how to account for legal and regulatory requirements.
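
By way of illustration only, an organization might record a use-case profile as structured data along the following lines.  Every key and value here is an assumption made for demonstration; NIST does not define a profile schema.

    # Hypothetical use-case profile: a tailoring of the Framework to one
    # application, reflecting the organization's own requirements, risk
    # tolerance, and resources. The structure is assumed, not NIST-defined.
    hiring_profile = {
        "application": "automated resume screening",
        "risk_tolerance": "low",  # set by the organization, not by NIST
        "prioritized_characteristics": [
            "fair with harmful bias managed",
            "accountable and transparent",
            "privacy-enhanced",
        ],
        "legal_considerations": [
            "federal guidance on automated employment decisions",
            "state and local automated hiring laws",
        ],
    }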

Jayne Ponder

Jayne Ponder counsels national and multinational companies across industries on data privacy, cybersecurity, and emerging technologies, including Artificial Intelligence and Internet of Things.

In particular, Jayne advises clients on compliance with federal, state, and global privacy frameworks, and counsels clients on navigating the rapidly evolving legal landscape. Her practice includes partnering with clients on the design of new products and services, drafting and negotiating privacy terms with vendors and third parties, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of Artificial Intelligence and Internet of Things technologies.

Jayne routinely represents clients in privacy and consumer protection enforcement actions brought by the Federal Trade Commission and state attorneys general, including related to data privacy and advertising topics. She also helps clients articulate their perspectives through the rulemaking processes led by state regulators and privacy agencies.

As part of her practice, Jayne advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has almost three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for almost twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Micaela McMurrough

Micaela McMurrough serves as co-chair of Covington’s global and multi-disciplinary Technology Group and as co-chair of the Artificial Intelligence and Internet of Things (IoT) initiative. In her practice, she has represented clients in high-stakes antitrust, patent, trade secrets, contract, and securities litigation, and other complex commercial litigation matters, and she regularly represents and advises domestic and international clients on cybersecurity and data privacy issues, including cybersecurity investigations and cyber incident response. Micaela has advised clients on data breaches and other network intrusions, conducted cybersecurity investigations, and advised clients regarding evolving cybersecurity regulations and cybersecurity norms in the context of international law.

In 2016, Micaela was selected as one of thirteen Madison Policy Forum Military-Business Cybersecurity Fellows. She regularly engages with government, military, and business leaders in the cybersecurity industry in an effort to develop national strategies for complex cyber issues and policy challenges. Micaela previously served as a United States Presidential Leadership Scholar, principally responsible for launching a program to familiarize federal judges with various aspects of the U.S. national security structure and national intelligence community.

Prior to her legal career, Micaela served in the Military Intelligence Branch of the United States Army. She served as Intelligence Officer of a 1,200-member maneuver unit conducting combat operations in Afghanistan and was awarded the Bronze Star.

Jorge Ortiz

Jorge Ortiz is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity and the Technology and Communications Regulation Practice Groups.

Jorge advises clients on a broad range of privacy and cybersecurity issues, including topics related to privacy policies and compliance obligations under U.S. state privacy laws such as the California Consumer Privacy Act.