
Using the NIST Artificial Intelligence Framework to Assess AI Risk and Build an AI Regulatory Compliance Program

This article is a continuation of our article series focused on the management of AI regulatory compliance risk. Our first article highlighted privacy topics related to collecting personal information via AI applications, transparency, and the challenges associated with regulating AI.

The second article focused on considerations for customer-facing AI applications (i.e., external-facing AI systems), while our third article addressed privacy topics related to employees’ use of AI (i.e., internal-facing AI systems).

We will now focus on leveraging the NIST AI Risk Management Framework to support compliance with the EU’s AI Act and other emerging AI regulations.

What Is the NIST AI Risk Management Framework?

The National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework, published in January 2023, was designed to equip organizations with approaches that increase the trustworthiness of AI systems and to help foster the responsible design, development, deployment, and use of AI systems.1

The NIST AI Framework consists of 19 categories and 72 subcategories organized within the following four core functions:

  1. Govern
  2. Map
  3. Measure
  4. Manage

NIST defines the Govern function as a cross-cutting function that is infused throughout AI risk management and enables the other functions of the process.2 The Map function focuses on identifying system limitations, risks, benefits, and impacted individuals and linking them into the AI risk management process. The Measure function focuses on developing business processes to measure AI risk, positioning the organization to remediate or improve issues such as false positives, bias, and unintended uses of AI systems. Lastly, the Manage function focuses on controlling the risks associated with AI systems, including notifying individuals in the organization of those risks and tracking them through a remediation process.
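To make the framework’s structure concrete, the four core functions can be thought of as a simple lookup from function to role. The sketch below is purely illustrative (the variable name and descriptions are ours, paraphrasing the summaries above, not NIST’s official wording):

```python
# Illustrative sketch: the four core functions of the NIST AI RMF,
# modeled as a lookup a compliance-tracking tool might use.
# Descriptions paraphrase the summaries above.
NIST_AI_RMF_FUNCTIONS = {
    "Govern": "Cross-cutting; infuses AI risk management and enables the other functions",
    "Map": "Identifies and links system limitations, risks, benefits, and impacted individuals",
    "Measure": "Develops processes to measure AI risk (e.g., false positives, bias, unintended uses)",
    "Manage": "Controls identified risks, notifies stakeholders, and tracks risks to remediation",
}

# The framework's 19 categories and 72 subcategories nest beneath these
# four functions, so any tooling keyed to them can roll risks up by function.
assert len(NIST_AI_RMF_FUNCTIONS) == 4
```

In practice, each function would be further keyed to its categories (e.g., GOVERN 2, MAP 4) and their subcategories, mirroring the framework’s hierarchy.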

Utilizing the NIST Risk Management Framework

The NIST AI Risk Management Framework can be utilized as a baseline control set to support compliance with existing and emerging AI regulations.

As we have previously written, Chapter 2, Article 9 of the EU AI Act requires providers of high-risk AI systems to implement and document a risk management system. Per Article 9, a risk management system must include the following:

  1. an analysis of the known and potential risks associated with the system;
  2. an evaluation of the risks associated with the system both when it is used for its intended purpose and when it is misused; and
  3. the adoption of risk management measures, including a testing protocol to ensure the system performs consistently with its intended purpose.

Organizations may decide to align with the NIST AI Risk Management Framework to support compliance with Chapter 2, Article 9 of the EU AI Act. Below we have listed a series of category controls from the NIST AI Risk Management Framework that map to requirements in the EU AI Act.

  1. GOVERN 2: Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks.
  2. GOVERN 4: Organizational teams are committed to a culture that considers and communicates AI risk.
  3. MAP 4: Risks and benefits are mapped for all components of the AI system including third-party software and data.
  4. MAP 5: Impacts to individuals, groups, communities, organizations, and society are characterized.
  5. MEASURE 3: Mechanisms for tracking identified AI risks over time are in place.
  6. MANAGE 2: Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors.
  7. MANAGE 4: Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly.
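One way a compliance team might record such a mapping is as a traceability table from Article 9 requirements to NIST categories. The sketch below is illustrative only: the article lists the relevant categories but does not assign them to specific Article 9 items, so the groupings shown here are our own assumption, and a real program would maintain this mapping in a GRC tool with full subcategory detail:

```python
# Hypothetical traceability mapping: EU AI Act Article 9 requirements
# (paraphrased from the list above) to NIST AI RMF categories.
# The groupings are illustrative assumptions, not an official crosswalk.
CONTROL_MAPPING = {
    "Analyze known and potential risks": ["MAP 4", "MAP 5"],
    "Evaluate risks under intended use and misuse": ["GOVERN 4", "MEASURE 3"],
    "Adopt risk management measures and testing": ["GOVERN 2", "MANAGE 2", "MANAGE 4"],
}

def categories_for(requirement: str) -> list[str]:
    """Return the NIST AI RMF categories mapped to an Article 9 requirement."""
    return CONTROL_MAPPING.get(requirement, [])

# A quick coverage check: every requirement is traced to at least one category.
assert all(categories_for(r) for r in CONTROL_MAPPING)
```

A structure like this makes gap analysis straightforward: any regulatory requirement with an empty category list is an uncovered control.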

How Organizations Can Stay Ahead of Changing AI Laws

Utilizing the NIST AI Risk Management Framework as a baseline control set, and then mapping in the requirements of the EU AI Act and other emerging AI and privacy regulations, allows organizations to develop a comprehensive AI compliance program. For example, when Ankura supports organizations by mapping the NIST AI Risk Management Framework to existing and emerging AI and data privacy regulations, it prepares them for the in-scope AI regulations in every jurisdiction where they conduct business. Ultimately, such an exercise harmonizes the applicable AI regulations while building a standard global AI and data privacy compliance program that can be leveraged to meet future AI requirements.

Notes:

1. NIST AI 100-1, Artificial Intelligence Risk Management Framework (AI RMF 1.0), January 2023, p. 2.

2. NIST AI 100-1, Artificial Intelligence Risk Management Framework (AI RMF 1.0), January 2023, p. 21.

© Copyright 2024. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC, its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.
