
EU’s AI Act: AI Risk Management System

In December 2023, European Union (EU) lawmakers reached an agreement on the EU AI Act. In our prior article titled “An Introduction to the EU AI Act,” we focused on the applicability, timing, and penalties of the EU AI Act. We also described the threshold related to identifying high-risk AI systems. Next, we focus on the requirements of Chapter 2, Articles 9-15 titled “Requirements for High-Risk AI Systems” in the EU AI Act. 

  • Article 9 - Implementing a risk management system: Requires that providers of high-risk AI systems adopt risk management systems that are both implemented and documented. Per Article 9, risk management systems are to include the following: a) an analysis of the known and potential risks associated with the system, b) an evaluation of the risks associated with the system both when it is used for its intended purpose and when it is misused, and c) the adoption of risk management measures that include a testing protocol to ensure the system performs consistently with its intended purpose. 
     
  • Article 10 - Data and data governance: Article 10 focuses on the training, validation, and testing of the underlying AI models. Specifically, it requires that providers of high-risk AI systems implement data governance protocols covering design choices, data collection, data preparation, formulation of model assumptions, assessment of the suitability of the data sets needed, evaluation of potential biases, and identification of possible data gaps.
     
  • Article 11 - Technical documentation: Requires that providers of high-risk AI systems create technical documentation prior to the high-risk AI system being placed on the market. The technical documentation is to be created in a manner that demonstrates compliance with the requirements set forth in Chapter 2 of the EU AI Act (i.e., Articles 9 through 15).
     
  • Article 12 - Record-keeping: Requires that high-risk AI systems be designed so that they automatically log events over their lifetime. Specifically, the logging function is to capture a) the start date and time and end date and time of each use, b) the reference database against which the input data has been checked by the system, c) the input data for which the search has led to a match, and d) the identification of the individuals involved in verifying the results pursuant to Article 14 on human oversight, described below. 
     
  • Article 13 - Transparency and provision of information to users: High-risk AI systems are required to be designed transparently so that users can interpret the system’s output and use the output appropriately. Article 13 also requires that high-risk AI systems be accompanied by concise, complete, and clear instructions. Such instructions are also to document the accuracy, robustness, and cybersecurity requirements described in Article 15 below.
     
  • Article 14 - Human oversight: High-risk AI systems are required to be designed with appropriate interface tools so that the AI systems can be overseen by humans. Specifically, human oversight as defined by Article 14 includes: a) fully understanding the capacities and limitations of the high-risk AI system, b) being aware of the possible tendency to automatically rely or over-rely on the output produced by the high-risk AI system, c) being able to correctly interpret the system’s output, d) being able to decide not to use, or to disregard, the system’s output, and e) being able to stop the AI system.
     
  • Article 15 - Accuracy, robustness, and cybersecurity: Requires that high-risk AI systems achieve appropriate levels of accuracy, robustness, and cybersecurity, supported by technical solutions that prevent and control attacks attempting to manipulate the training data set and thereby cause the model to err. 
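To make Article 9's testing protocol concrete, a provider might gate each release of a high-risk AI system on pre-defined performance thresholds documented in its risk management system. The following sketch is purely illustrative; the metric, threshold, and function name are hypothetical examples, not figures or terminology from the Act:

```python
# Illustrative sketch of one Article 9-style testing step: verifying that a
# model's measured performance on a held-out test set meets a pre-defined,
# documented threshold before the system is released. The 90% accuracy
# threshold is a hypothetical example.
def passes_intended_purpose_test(predictions, labels, min_accuracy=0.90):
    """Return True if test-set accuracy meets the documented threshold."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)
    return accuracy >= min_accuracy

# Example: 9 of 10 predictions correct -> 0.9 accuracy, which meets the bar.
result = passes_intended_purpose_test(
    [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1, 1, 0, 1, 0],
)
```

In practice, a provider would run a battery of such checks (accuracy, bias metrics, misuse scenarios) and record the results as evidence of the risk management measures Article 9 requires.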
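Article 12's logging elements map naturally onto a structured, append-only event record. The sketch below shows one way the four required elements might be captured; the field names and values are hypothetical, not prescribed by the Act:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical sketch of an Article 12-style usage log entry. Field names
# are illustrative; the Act specifies what must be logged, not the schema.
@dataclass
class UsageLogEntry:
    start_time: str          # a) start date and time of each use
    end_time: str            # a) end date and time of each use
    reference_database: str  # b) database the input data was checked against
    matched_input: str       # c) input data that led to a match
    verifier_ids: list       # d) individuals verifying results (Article 14)

def log_usage(entry: UsageLogEntry) -> str:
    """Serialize one usage event as a JSON line for an append-only log."""
    return json.dumps(asdict(entry))

record = log_usage(UsageLogEntry(
    start_time=datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc).isoformat(),
    end_time=datetime(2024, 5, 1, 9, 5, tzinfo=timezone.utc).isoformat(),
    reference_database="watchlist_v7",
    matched_input="record_1234",
    verifier_ids=["analyst_42"],
))
```

Writing each event as an immutable JSON line supports the traceability Article 12 is aimed at, since logs can later be audited against the Article 14 oversight record.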
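One basic control supporting Article 15 is verifying the integrity of the training data set before each training run, so that tampering is detected early. The sketch below uses a cryptographic fingerprint for this; it is an assumption about one possible control, and it addresses only tampering, not subtle data poisoning of legitimately collected records:

```python
import hashlib

# Illustrative integrity check: fingerprint the training data set and compare
# against a recorded baseline. A changed fingerprint signals that records
# were added, removed, or altered since the baseline was taken.
def dataset_fingerprint(records) -> str:
    """Hash an order-independent serialization of the training records."""
    h = hashlib.sha256()
    for rec in sorted(repr(r) for r in records):
        h.update(rec.encode("utf-8"))
    return h.hexdigest()

baseline = dataset_fingerprint([("x1", 0), ("x2", 1)])
tampered = dataset_fingerprint([("x1", 0), ("x2", 0)])   # one label flipped
unchanged = dataset_fingerprint([("x2", 1), ("x1", 0)])  # reordered only
```

Sorting before hashing makes the fingerprint insensitive to record order, so only genuine changes to the data trigger a mismatch.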

In conclusion, under the EU's AI Act, the development or use of high-risk AI requires compliance with a risk management system, which is essentially a risk management process encompassing core General Data Protection Regulation (GDPR) privacy principles (transparency, purpose limitation, accuracy, integrity and confidentiality, and accountability). Organizations developing or using high-risk AI must therefore incorporate the risk management system requirements into both their privacy and AI compliance programs.

Our next article in this series will focus on the responsibilities of users of AI as required by the EU AI Act.

© Copyright 2024. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.
