Implementing the NIST Artificial Intelligence Risk Management Framework – Govern

The National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework, published in January 2023, was designed to equip organizations with an approach that increases the trustworthiness of AI systems and fosters the responsible design, development, deployment, and use of AI systems.[1]

The NIST AI Risk Management Framework will serve as a foundational component when developing an AI regulatory compliance program that meets the requirements of emerging AI laws. We have previously written about the EU AI Act in “An Introduction to the EU AI Act,” which focused on the Act’s applicability, timing, and penalties, and in a follow-up article on the requirements of Chapter 2 of the Act, “Requirements for High-Risk AI Systems.”

Given the complexity of the NIST AI Risk Management Framework, we are publishing a series of articles focused on implementing it. This article covers the first of the four core functions, Govern. NIST defines Govern as a cross-cutting function that is infused throughout AI risk management and enables the other functions of the process.[2]

The Govern function includes six categories and 19 subcategory controls, as listed in Table 1 below.

Table 1

GOVERN 1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.
  GOVERN 1.1: Legal and regulatory requirements involving AI are understood, managed, and documented.
  GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices.
  GOVERN 1.3: Processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization’s risk tolerance.
  GOVERN 1.4: The risk management process and its outcomes are established through transparent policies, procedures, and other controls based on organizational risk priorities.
  GOVERN 1.5: Ongoing monitoring and periodic review of the risk management process and its outcomes are planned and organizational roles and responsibilities are clearly defined, including determining the frequency of periodic review.
  GOVERN 1.6: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities.
  GOVERN 1.7: Processes and procedures are in place for decommissioning and phasing out AI systems safely and in a manner that does not increase risks or decrease the organization’s trustworthiness.

GOVERN 2: Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks.
  GOVERN 2.1: Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.
  GOVERN 2.2: The organization’s personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.
  GOVERN 2.3: Executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment.

GOVERN 3: Workforce diversity, equity, inclusion, and accessibility processes are prioritized in the mapping, measuring, and managing of AI risks throughout the lifecycle.
  GOVERN 3.1: Decision-making related to mapping, measuring, and managing AI risks throughout the lifecycle is informed by a diverse team (e.g., diversity of demographics, disciplines, experience, expertise, and backgrounds).
  GOVERN 3.2: Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems.

GOVERN 4: Organizational teams are committed to a culture that considers and communicates AI risk.
  GOVERN 4.1: Organizational policies and practices are in place to foster a critical thinking and safety-first mindset in the design, development, deployment, and use of AI systems to minimize potential negative impacts.
  GOVERN 4.2: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate, and use, and they communicate about the impacts more broadly.
  GOVERN 4.3: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing.

GOVERN 5: Processes are in place for robust engagement with relevant AI actors.
  GOVERN 5.1: Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those external to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI risks.
  GOVERN 5.2: Mechanisms are established to enable the team that developed or deployed AI systems to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.

GOVERN 6: Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.
  GOVERN 6.1: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third party’s intellectual property or other rights.
  GOVERN 6.2: Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be high-risk.

How can organizations use the NIST AI Risk Management Framework controls to assess activities that involve AI systems under the Govern function?

Along with the NIST AI Risk Management Framework, NIST provides an AI Risk Management Framework Playbook (AI RMF Playbook), which contains supporting actions and considerations for each subcategory control. The AI RMF Playbook suggests what should be assessed and documented for the Govern function within the NIST AI Risk Management Framework.

Examples include the following; a sketch of how these assessment questions might be documented appears after the list:

  1. Has the organization defined and documented the AI regulatory environment, including minimum requirements in laws and regulations, and has the in-scope AI system been reviewed for compliance with that regulatory environment?[3]
  2. What policies has the organization developed to ensure that the use of its AI systems is consistent with their intended use and with organizational values and principles?[4]
  3. What policies has the organization developed to ensure that the use of the AI system is consistent with organizational risk tolerances, and how do those assessments inform risk tolerance decisions?[5]
  4. What are the roles and responsibilities of personnel involved in the design, development, deployment, assessment, and monitoring of the AI system?[6]
  5. What processes exist for data generation, acquisition/collection, ingestion, staging/storage, transformations, security, maintenance, and dissemination?[7]
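
The AI RMF Playbook does not prescribe a documentation format for these assessments. As one illustration only, the minimal Python sketch below shows how an organization might record each Govern assessment question alongside the evidence gathered and its current status. Every name in it (GovernAssessmentItem, the Status values, the evidence file path) is a hypothetical assumption for illustration, not anything defined by NIST.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    NOT_ASSESSED = "not assessed"
    GAP_IDENTIFIED = "gap identified"
    COMPLIANT = "compliant"

@dataclass
class GovernAssessmentItem:
    subcategory: str   # e.g., "GOVERN 1.1"
    question: str      # assessment question drawn from the AI RMF Playbook
    evidence: list = field(default_factory=list)  # policies, review reports, etc.
    status: Status = Status.NOT_ASSESSED

# Recording the first example question against GOVERN 1.1
item = GovernAssessmentItem(
    subcategory="GOVERN 1.1",
    question=(
        "Has the organization defined and documented the AI regulatory "
        "environment, and has the in-scope AI system been reviewed for "
        "compliance with it?"
    ),
)
item.evidence.append("legal/ai-regulatory-review-2024.pdf")  # hypothetical evidence link
item.status = Status.GAP_IDENTIFIED
```

Keeping the record structured this way makes it straightforward to roll identified gaps up by subcategory when prioritizing remediation.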

The Govern function and the specific actions from the AI RMF Playbook relate directly to the requirements for high-risk AI systems in Chapter 2 of the EU AI Act, which illustrates the effectiveness of using the NIST framework to assess your AI systems and processes.

What should organizations consider implementing to support alignment with the NIST AI Risk Management Framework Govern function?

After assessing and documenting activities that involve AI systems against the Govern function, organizations should review and identify the appropriate AI compliance management activities to remediate gaps and demonstrate AI compliance readiness and maturity. The AI RMF Playbook provides suggested actions relative to the Govern function. Examples include the following; an illustrative AI system inventory sketch appears after the list:

  1. Develop and maintain policies for training (and re-training) organizational staff about necessary legal or regulatory considerations that may impact AI-related design, development, and deployment activities.[8]
  2. Update existing data governance and data privacy policies and practices, particularly regarding the use of sensitive or otherwise risky data, within the AI governance framework.[9]
  3. Establish policies that define mechanisms for measuring or understanding an AI system’s potential impacts, e.g., via regular impact assessments at key stages in the AI lifecycle.[10]
  4. Establish policies for AI system incident response, or confirm that existing incident response policies apply to AI systems.[11]
  5. Establish policies that define the creation and maintenance of AI system inventories.[12]
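
GOVERN 1.6 and the fifth action above both call for an AI system inventory, but NIST does not mandate its shape. In the same spirit as the earlier sketch, the following shows fields such an inventory record might capture; the record type, field names, and example values are all hypothetical assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an organizational AI system inventory (illustrative only)."""
    system_id: str
    name: str
    owner: str                    # accountable business owner (supports GOVERN 2.x)
    purpose: str                  # documented intended use (supports GOVERN 1.x)
    risk_tier: str                # per the organization's own risk tolerance tiers
    third_party_components: list  # vendors, models, or data feeds (supports GOVERN 6.x)
    last_reviewed: date

inventory = [
    AISystemRecord(
        system_id="AI-0001",
        name="Resume screening assistant",
        owner="HR Operations",
        purpose="Rank inbound applications for recruiter review",
        risk_tier="high",  # such a use would likely be high-risk under the EU AI Act
        third_party_components=["hosted LLM API", "applicant-tracking data feed"],
        last_reviewed=date(2024, 5, 1),
    ),
]
```

An inventory like this also gives the decommissioning process required by GOVERN 1.7 a concrete list of systems to work from.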

The AI compliance risk profile is different for every organization, and addressing it requires expertise both in conducting privacy and risk assessments and in the unique challenges that AI systems present. It is important to evaluate AI compliance risks and gaps against an accepted framework such as the NIST AI Risk Management Framework, and then prioritize which compliance activities to implement to comply with relevant regulations such as the EU AI Act, as sketched below.
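
Building on the earlier assessment sketch, prioritization can be as simple as filtering identified gaps and ordering them by the organization's own risk weighting. The weighting scheme below is a hypothetical assumption; each organization would set its own priorities.

```python
# Assumes GovernAssessmentItem and Status from the earlier assessment sketch.

SUBCATEGORY_WEIGHT = {  # hypothetical: weight gaps by category criticality
    "GOVERN 1": 3,  # legal/regulatory and policy gaps first
    "GOVERN 6": 3,  # third-party/supply chain gaps
    "GOVERN 2": 2,
    "GOVERN 4": 2,
    "GOVERN 3": 1,
    "GOVERN 5": 1,
}

def prioritized_gaps(items):
    """Return identified gaps ordered by descending category weight."""
    gaps = [i for i in items if i.status is Status.GAP_IDENTIFIED]
    return sorted(
        gaps,
        # "GOVERN 1.1" -> "GOVERN 1" to look up the category weight
        key=lambda i: SUBCATEGORY_WEIGHT.get(i.subcategory.rsplit(".", 1)[0], 0),
        reverse=True,
    )
```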

In our next article, we will focus on implementing the Map function of the NIST AI Risk Management Framework. 

Notes

1. NIST AI 100-1. Artificial Intelligence Risk Management Framework (AI RMF 1.0). January 2023. Page 2.
2. NIST AI 100-1. Artificial Intelligence Risk Management Framework (AI RMF 1.0). January 2023. Page 21.
3. NIST AI RMF Playbook. January 2023. Page 3.
4. NIST AI RMF Playbook. January 2023. Page 5.
5. NIST AI RMF Playbook. January 2023. Page 8.
6. NIST AI RMF Playbook. January 2023. Page 10.
7. NIST AI RMF Playbook. January 2023. Page 15.
8. NIST AI RMF Playbook. January 2023. Page 3.
9. NIST AI RMF Playbook. January 2023. Page 4.
10. NIST AI RMF Playbook. January 2023. Page 7.
11. NIST AI RMF Playbook. January 2023. Page 12.
12. NIST AI RMF Playbook. January 2023. Page 14.

© Copyright 2024. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.
