The National Institute of Standards and Technology (NIST) published its Artificial Intelligence Risk Management Framework (NIST AI 100-1) in January 2023.
The framework comprises 19 categories and 72 subcategories organized under the following four core functions:
- Govern
- Map
- Measure
- Manage
We covered the implementation of the Govern function in the previous article in this series.
In this article, we focus on considerations for assessing and implementing the Map function of the NIST AI Risk Management Framework. The framework can also serve as a baseline control set to support compliance with emerging AI regulations such as the EU AI Act.
The Map function includes five categories and 18 subcategory controls as listed in Table 1 below.
**Table 1**

| Category | Subcategory |
| --- | --- |
| MAP 1: Context is established and understood. | MAP 1.1: Intended purposes, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Considerations include: the specific set or types of users along with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or product AI lifecycle; and related Test & Evaluation, Validation & Verification (TEVV) and system metrics. |
| | MAP 1.2: Interdisciplinary AI actors, competencies, skills, and capacities for establishing context reflect demographic diversity and broad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary collaboration are prioritized. |
| | MAP 1.3: The organization’s mission and relevant goals for AI technology are understood and documented. |
| | MAP 1.4: The business value or context of business use has been clearly defined or – in the case of assessing existing AI systems – re-evaluated. |
| | MAP 1.5: Organizational risk tolerances are determined and documented. |
| | MAP 1.6: System requirements (e.g., “the system shall respect the privacy of its users”) are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks. |
| MAP 2: Categorization of the AI system is performed. | MAP 2.1: The specific tasks and methods used to implement the tasks that the AI system will support are defined (e.g., classifiers, generative models, recommenders). |
| | MAP 2.2: Information about the AI system’s knowledge limits and how system output may be utilized and overseen by humans is documented. Documentation provides sufficient information to assist relevant AI actors when making decisions and taking subsequent actions. |
| | MAP 2.3: Scientific integrity and TEVV considerations are identified and documented, including those related to experimental design, data collection and selection (e.g., availability, representativeness, suitability), system trustworthiness, and construct validation. |
| MAP 3: AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood. | MAP 3.1: Potential benefits of intended AI system functionality and performance are examined and documented. |
| | MAP 3.2: Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness – as connected to organizational risk tolerance – are examined and documented. |
| | MAP 3.3: Targeted application scope is specified and documented based on the system’s capability, established context, and AI system categorization. |
| | MAP 3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant technical standards and certifications – are defined, assessed, and documented. |
| | MAP 3.5: Processes for human oversight are defined, assessed, and documented in accordance with organizational policies from the GOVERN function. |
| MAP 4: Risks and benefits are mapped for all components of the AI system including third-party software and data. | MAP 4.1: Approaches for mapping AI technology and legal risks of its components – including the use of third-party data or software – are in place, followed, and documented, as are risks of infringement of a third party’s intellectual property or other rights. |
| | MAP 4.2: Internal risk controls for components of the AI system, including third-party AI technologies, are identified and documented. |
| MAP 5: Impacts to individuals, groups, communities, organizations, and society are characterized. | MAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data are identified and documented. |
| | MAP 5.2: Practices and personnel for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are in place and documented. |
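The category and subcategory structure above lends itself to a simple tracking artifact during an assessment. The sketch below is one illustrative way to represent an excerpt of the Map function as data and count controls that still need assessment; the class names, status values, and abbreviated control descriptions are assumptions for demonstration, not part of the NIST AI RMF.

```python
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    """One Map subcategory control and its assessment status."""
    control_id: str               # e.g., "MAP 1.1"
    description: str              # abbreviated summary of the control
    status: str = "not_assessed"  # not_assessed | gap | implemented

@dataclass
class Category:
    """A Map category grouping several subcategory controls."""
    name: str
    subcategories: list = field(default_factory=list)

# Illustrative excerpt of the Map function structure from Table 1.
map_function = [
    Category("MAP 1: Context is established and understood.", [
        Subcategory("MAP 1.1", "Intended purposes and deployment context documented"),
        Subcategory("MAP 1.5", "Organizational risk tolerances determined and documented"),
    ]),
    Category("MAP 4: Risks and benefits are mapped for all components.", [
        Subcategory("MAP 4.1", "Third-party data/software risk mapping in place"),
    ]),
]

# List controls that have not yet been assessed across all categories.
open_controls = [
    sub.control_id
    for cat in map_function
    for sub in cat.subcategories
    if sub.status == "not_assessed"
]
print(open_controls)
```

A full tracker would enumerate all 18 subcategories and attach evidence links and owners to each, but the same shape applies.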
Below are example questions to focus on when assessing an organization’s current AI compliance posture relative to the Map function within the NIST AI Risk Management Framework:
- Which individuals within the organization are responsible for maintaining, re-verifying, monitoring, and updating this AI once deployed? 1
- What type of information is accessible on the design, operations, and limitations of the AI system to external stakeholders and end users? 2
- How do the technical specifications and requirements align with the AI system’s goals and objectives? 3
- To what extent are the metrics consistent with system goals, objectives, and constraints, including ethical and compliance considerations? 4
- What criteria and assumptions has the entity utilized when developing system risk tolerances? 5
What should organizations consider implementing to support alignment with the NIST AI Risk Management Framework Map function?
Once activities involving AI systems have been assessed and documented against the Map function, the following example AI compliance management activities can help organizations remediate gaps or demonstrate AI readiness and maturity:
- Plan for risks related to human-AI configurations, and document requirements, roles, and responsibilities for human oversight of deployed systems. 6
- Establish interdisciplinary teams to reflect a wide range of skills, competencies, and capabilities for AI efforts. Document team composition. 7
- Build transparent practices into AI system development processes. Evaluate AI system purpose in consideration of potential risks and stated organizational principles. 8
- Reconsider the design, implementation strategy, or deployment of AI systems with potential impacts that do not reflect organizational values. 9
- Establish risk criteria in consideration of different sources of risk (e.g., financial or operational) and different levels of risk (e.g., from negligible to critical). 10
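The last activity, establishing risk criteria across sources and levels, can be sketched as a small tolerance check: each risk source is assigned a documented maximum acceptable level, and identified risks are compared against it. The specific sources, level names, and scoring below are assumptions for illustration, not prescribed by the NIST AI RMF.

```python
# Illustrative risk criteria: sources of risk and an ordinal severity scale.
RISK_SOURCES = ["financial", "operational", "legal", "reputational"]
RISK_LEVELS = {"negligible": 1, "low": 2, "moderate": 3, "high": 4, "critical": 5}

def within_tolerance(source: str, level: str, tolerances: dict) -> bool:
    """Check an identified risk against the documented organizational
    tolerance for its source; False means the risk should be escalated."""
    if source not in RISK_SOURCES or level not in RISK_LEVELS:
        raise ValueError(f"Unknown risk source or level: {source}, {level}")
    return RISK_LEVELS[level] <= RISK_LEVELS[tolerances[source]]

# Example: an organization that tolerates at most "moderate" financial risk.
tolerances = {
    "financial": "moderate",
    "operational": "low",
    "legal": "negligible",
    "reputational": "moderate",
}
print(within_tolerance("financial", "high", tolerances))  # exceeds tolerance
```

In practice these tolerances would come from the documentation produced under MAP 1.5, and the escalation path from the GOVERN function's policies.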
The Map function focuses on identifying system limitations, risks, benefits, and impacted individuals, and linking them into the organization's AI risk management program. Like the Govern function, the Map function has related requirements for high-risk AI systems per Chapter 2 of the EU AI Act; organizations can therefore use the NIST AI Risk Management Framework to assess their AI systems' compliance with such regulations.
In our next article in this series, we will focus on implementing the Measure function of the NIST AI Risk Management Framework.
Notes:
1. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 52.
2. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 57.
3. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 59.
4. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 61.
5. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 64.
6. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 52.
7. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 56.
8. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 59.
9. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 61.
10. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 64.
© Copyright 2024. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.