
The EU’s AI Act: Obligations of AI Users and GDPR Article 35

In December 2023, European Union (EU) lawmakers reached an agreement on the EU AI Act. In our article titled An Introduction to the EU AI Act, we focused on applicability, thresholds, timing, and penalties related to the EU AI Act. In our second article, we focused on the responsibilities of the providers of high-risk AI systems. In this article, we focus on the responsibilities of users of AI as required by the EU AI Act.

Article 29 of the EU AI Act is titled “Obligations of the users of high-risk AI systems” and contains the following three requirements:

  1. User Oversight – Article 29 requires that users monitor the operation of the high-risk AI system and, if they have reason to believe the system is presenting a risk to the health, safety, or fundamental rights of an individual, inform the provider or distributor and suspend use of the system. Users must also inform the provider or distributor of any serious incident or malfunctioning of the system. Under Article 3, a user, provider, and distributor are defined as: 
    1. User - any natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
    2. Provider - any natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed to place it on the market or put it into service under its own name or trademark, whether for payment or free of charge; and
    3. Distributor – any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the EU market without affecting its properties. 
  2. Maintaining Logs – Article 29 requires that users of high-risk AI systems keep logs automatically generated by that high-risk AI system, to the extent the logs are under their control.  As described in our prior article, such logs must contain a) the start date and time and end date and time of each use, b) the reference database against which the input data has been checked by the system, c) the input data for which the search has led to a match and d) the identification of the individual(s) involved in the verification of the results. 
  3. Data Protection Impact Assessments (DPIA) – Article 29 requires that users of high-risk AI systems conduct data protection impact assessments under Article 35 of the General Data Protection Regulation (GDPR). 

It is worth highlighting that the scope of AI systems and activities under the EU AI Act is narrower than that of the GDPR. In our first article in this series, we reviewed that the EU AI Act is largely focused on requirements for providers of high-risk AI systems that involve:

  1. Biometric identification and categorization of natural persons;
  2. Management and operation of critical infrastructure;
  3. Education and vocational training;
  4. Employment, workers management, and access to self-employment;
  5. AI systems intended to be used by public authorities to evaluate the eligibility of natural persons for public assistance;
  6. Law enforcement;
  7. Border control management; or
  8. Administration of justice and democratic processes.

On the other hand, GDPR Article 35 requires organizations to conduct DPIAs on high-risk processing activities, which the European Data Protection Board (EDPB) defines as activities that involve profiling, automated decision-making, processing data on a large scale, matching or combining datasets, or innovative use or application of technological solutions, or where the processing itself prevents data subjects from exercising a right. This set of criteria for when to conduct a DPIA implicates almost all uses of AI systems. 

In summary, it is important for AI and data privacy experts to keep in mind that a high-risk AI system as defined by the EU AI Act has a much higher threshold and a narrower definition than a high-risk processing activity as defined by the GDPR. Even if an organization is not a provider of AI systems, it will still need to conduct DPIAs on most, if not all, business processes that utilize AI. 

© Copyright 2024. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.


