
Implementing the NIST Artificial Intelligence Risk Management Framework – Manage

The National Institute of Standards and Technology (NIST) published its Artificial Intelligence Risk Management Framework (NIST AI 100-1) in January 2023. 

The NIST AI Risk Management Framework consists of 19 categories and 72 subcategories organized within the following four core functions:

  1. Govern
  2. Map
  3. Measure
  4. Manage

In prior articles, we focused on considerations for assessing and implementing the Govern, Map, and Measure functions of the NIST AI Risk Management Framework. In this article, we turn to implementing the Manage function.

The Manage function includes four categories and 13 subcategory controls, as listed in Table 1 below; a brief sketch following the table illustrates how several of these subcategories might be operationalized.

Table 1

| Category | Subcategory |
| --- | --- |
| MANAGE 1: AI risks based on assessments and other analytical output from the MAP and MEASURE functions are prioritized, responded to, and managed. | MANAGE 1.1: A determination is made as to whether the AI system achieves its intended purposes and stated objectives and whether its development or deployment should proceed. |
| | MANAGE 1.2: Treatment of documented AI risks is prioritized based on impact, likelihood, and available resources or methods. |
| | MANAGE 1.3: Responses to the AI risks deemed high priority, as identified by the MAP function, are developed, planned, and documented. Risk response options can include mitigating, transferring, avoiding, or accepting. |
| | MANAGE 1.4: Negative residual risks (defined as the sum of all unmitigated risks) to both downstream acquirers of AI systems and end users are documented. |
| MANAGE 2: Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors. | MANAGE 2.1: Resources required to manage AI risks are taken into account, along with viable non-AI alternative systems, approaches, or methods, to reduce the magnitude or likelihood of potential impacts. |
| | MANAGE 2.2: Mechanisms are in place and applied to sustain the value of deployed AI systems. |
| | MANAGE 2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identified. |
| | MANAGE 2.4: Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or deactivate AI systems that demonstrate performance or outcomes inconsistent with intended use. |
| MANAGE 3: AI risks and benefits from third-party entities are managed. | MANAGE 3.1: AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and documented. |
| | MANAGE 3.2: Pre-trained models that are used for development are monitored as part of AI system regular monitoring and maintenance. |
| MANAGE 4: Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly. | MANAGE 4.1: Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors, appeal and override, decommissioning, incident response, recovery, and change management. |
| | MANAGE 4.2: Measurable activities for continual improvements are integrated into AI system updates and include regular engagement with interested parties, including relevant AI actors. |
| | MANAGE 4.3: Incidents and errors are communicated to relevant AI actors, including affected communities. Processes for tracking, responding to, and recovering from incidents and errors are followed and documented. |
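
To make these controls concrete, below is a minimal sketch, assuming a hypothetical risk register schema (the class names, scoring scale, and fields are illustrative assumptions, not part of the NIST framework), of how an organization might operationalize MANAGE 1.2 through 1.4: prioritizing documented risks by impact and likelihood, recording a response option, and surfacing residual risks for downstream documentation.

```python
from dataclasses import dataclass
from enum import Enum


class Response(Enum):
    """Risk response options named in MANAGE 1.3."""
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    AVOID = "avoid"
    ACCEPT = "accept"


@dataclass
class AIRisk:
    """One documented AI risk (hypothetical schema)."""
    name: str
    impact: int        # 1 (low) to 5 (high)
    likelihood: int    # 1 (rare) to 5 (frequent)
    response: Response = Response.MITIGATE
    mitigated: bool = False

    @property
    def priority(self) -> int:
        # Simple impact x likelihood score used for MANAGE 1.2 prioritization.
        return self.impact * self.likelihood


def prioritized(register: list[AIRisk]) -> list[AIRisk]:
    """Order treatment of documented risks by priority (MANAGE 1.2)."""
    return sorted(register, key=lambda r: r.priority, reverse=True)


def residual_risks(register: list[AIRisk]) -> list[AIRisk]:
    """Risks left unmitigated or accepted, to be documented for
    downstream acquirers and end users (MANAGE 1.4)."""
    return [r for r in register if not r.mitigated or r.response is Response.ACCEPT]


register = [
    AIRisk("Training-data bias", impact=4, likelihood=3),
    AIRisk("Model drift after deployment", impact=3, likelihood=4),
    AIRisk("Vendor model outage", impact=2, likelihood=2, response=Response.TRANSFER),
]

for risk in prioritized(register):
    print(f"{risk.priority:>2}  {risk.name} -> {risk.response.value}")
print("Residual risks:", [r.name for r in residual_risks(register)])
```

In practice, the scoring scale, response taxonomy, and register fields would come from the organization's own risk management methodology rather than the simple impact-times-likelihood score used here.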


How can organizations use the NIST AI Risk Management Framework Controls to assess activities that involve AI systems for the Manage function?

Alongside the AI Risk Management Framework, NIST also published the AI RMF Playbook, which contains supporting actions and considerations for each subcategory control.

Below are example questions to focus on when assessing an organization’s current AI compliance posture relative to the Manage function of the NIST AI Risk Management Framework (a simple way to track the answers is sketched after the list):

  1. How do the technical specifications and requirements align with the AI system’s goals and objectives?[1]
  2. What assessments has the entity conducted on data security and privacy impacts associated with the AI system?[2]
  3. Does your organization have an existing governance structure that can be leveraged to oversee its use of AI?[3]
  4. Has the AI system been reviewed to ensure it complies with relevant laws, regulations, standards, and guidance?[4]
  5. Did your organization implement a risk management system to address risks involved in deploying the identified AI solution?[5]
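
As referenced above, one lightweight way to track the answers to these questions is a structured assessment record. The sketch below is a hypothetical illustration; the `AssessmentItem` fields, status values, and example evidence are assumptions, not Playbook requirements.

```python
from dataclasses import dataclass


@dataclass
class AssessmentItem:
    """One question from the Manage-function assessment (hypothetical schema)."""
    question: str
    subcategory: str    # e.g., "MANAGE 1.1"
    status: str         # "not started" | "in progress" | "complete"
    evidence: str = ""  # link or note documenting the answer


posture = [
    AssessmentItem(
        question="Do the technical specifications align with the system's objectives?",
        subcategory="MANAGE 1.1",
        status="in progress",
        evidence="Requirements traceability matrix v0.3",
    ),
    AssessmentItem(
        question="Has the AI system been reviewed for legal and regulatory compliance?",
        subcategory="MANAGE 1.1",
        status="not started",
    ),
]

open_items = [item for item in posture if item.status != "complete"]
print(f"{len(open_items)} of {len(posture)} assessment items still open")
```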

What should companies consider implementing to support alignment with the NIST AI Risk Management Framework Manage function?

After assessing and documenting activities that involve AI systems against the Manage function, organizations can draw on the following example AI compliance management activities to remediate gaps or demonstrate compliance readiness and maturity (the tolerance-based resourcing in item 2 is sketched after the list):

  1. Regularly track and monitor negative risks and benefits throughout the AI system lifecycle, including in post-deployment monitoring.[6]
  2. Assign risk management resources relative to established risk tolerance: AI systems with lower risk tolerances receive greater oversight, mitigation, and management resources.[7]
  3. Identify risk response plans and resources, and the organizational teams responsible for carrying out response functions.[8]
  4. Document residual risks within risk response plans, denoting risks that have been accepted, transferred, or subject to minimal mitigation.[9]
  5. Identify resource allocation approaches for managing risks in systems deemed high-risk.[10]
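
As a minimal sketch of the tolerance-based resourcing in item 2 above, the mapping below ties an established risk tolerance tier to the oversight resources a system receives. The tier names, review cadences, and approver counts are illustrative assumptions, not NIST guidance.

```python
# Hypothetical mapping from established risk tolerance to oversight
# resources: lower tolerance gets more frequent review and more sign-offs.
OVERSIGHT_BY_TOLERANCE = {
    "low":    {"review_cadence_days": 30,  "approvers": 2, "independent_audit": True},
    "medium": {"review_cadence_days": 90,  "approvers": 1, "independent_audit": True},
    "high":   {"review_cadence_days": 180, "approvers": 1, "independent_audit": False},
}


def oversight_plan(risk_tolerance: str) -> dict:
    """Look up the oversight resources assigned to a tolerance tier."""
    return OVERSIGHT_BY_TOLERANCE[risk_tolerance]


print(oversight_plan("low"))
```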

The Manage function focuses on controlling risks associated with AI systems, including notifying individuals within the organization of such risks and tracking those risks through remediation. It aligns with Article 61 of the EU AI Act on post-market monitoring. Under that article, providers must establish a post-market monitoring system to actively and systematically collect, document, and analyze relevant data on the performance of high-risk AI systems throughout their lifetime, and to ensure continual compliance with the requirements set out in Chapter 2 of the EU AI Act.
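
A post-market monitoring system of the kind Article 61 contemplates can begin as structured, append-only event capture for later analysis. The sketch below is a hypothetical illustration; the event fields, file format, system identifier, and threshold logic are assumptions rather than EU AI Act or NIST requirements.

```python
import json
import time


def log_event(path: str, system_id: str, metric: str,
              value: float, threshold: float) -> None:
    """Append one monitoring record; breached records flag the need
    for analysis and possible escalation."""
    record = {
        "ts": time.time(),
        "system_id": system_id,
        "metric": metric,
        "value": value,
        "breach": value < threshold,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")


# Example: record a periodic accuracy measurement for a deployed system.
log_event("monitoring.jsonl", "credit-scoring-v2", "approval_accuracy",
          value=0.87, threshold=0.90)
```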

Notes: 

1. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 174.

2. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 176.

3. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 176.

4. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 178.

5. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 179.

6. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 174.

7. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 176.

8. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 178.

9. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 180.

© Copyright 2024. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.
