
An Introduction to the EU’s Artificial Intelligence Act

On December 8, 2023, European Union (EU) lawmakers reached an agreement on the EU AI Act. The EU AI Act shares many themes with the EU’s General Data Protection Regulation (GDPR) and represents a significant step forward in the governance of AI.

This article provides an overview of the EU AI Act, focusing on applicability and thresholds. Subsequent articles in this series will cover implementing an AI risk management process and the obligations that providers and users of AI tools have pursuant to the EU AI Act.

Applicability, Timing, and Penalties

The EU AI Act applies to a) providers who place on the market or put into service an AI system in the EU, irrespective of whether those providers are established within the EU, b) users of AI systems located within the EU, and c) providers and users of AI systems located outside of the EU where the output produced by those systems is used in the EU.1

Similar to the GDPR, developers of AI systems that target the EU market will be subject to the EU AI Act, but so will most multinational corporations utilizing AI. For example, if a U.S.-headquartered organization is utilizing AI to make employment decisions related to employees in the EU, that U.S. organization will be subject to the EU AI Act. As described in our section below titled Threshold, only AI systems deemed high risk are subject to the bulk of the EU AI Act’s compliance requirements.

We anticipate a 24-month transition period, with the EU AI Act expected to be adopted in the spring of 2024 and enforcement beginning in 2025.

The fines for non-compliance with the EU AI Act are similar to those seen under the GDPR but can be higher in certain circumstances. For prohibited AI practices, non-compliance could result in fines of up to 30 million EUR or, if the offender is a company, 6% of total worldwide annual turnover for the preceding financial year, whichever is higher.2 Most other violations could result in fines of up to 20 million EUR or 4% of total worldwide annual turnover.
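To illustrate how these “whichever is higher” caps scale with company size, below is a minimal Python sketch. The fine_cap helper and the turnover figures are purely hypothetical illustrations, not provisions of the Act or predictions of actual enforcement outcomes.

```python
# Illustrative sketch only (not legal advice): how the EU AI Act's
# "whichever is higher" fine caps scale with a company's turnover.
# The helper name and all turnover figures below are hypothetical.

def fine_cap(annual_turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Upper bound of a fine: the greater of a fixed amount or a percentage
    of total worldwide annual turnover for the preceding financial year."""
    return max(fixed_cap_eur, pct_cap * annual_turnover_eur)

# Prohibited-AI violations: up to EUR 30 million or 6% of turnover.
print(fine_cap(200_000_000, 30_000_000, 0.06))    # EUR 30,000,000 (fixed cap dominates)
print(fine_cap(2_000_000_000, 30_000_000, 0.06))  # EUR 120,000,000 (percentage dominates)

# Most other violations: up to EUR 20 million or 4% of turnover.
print(fine_cap(2_000_000_000, 20_000_000, 0.04))  # EUR 80,000,000
```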

Threshold

The EU AI Act follows the principle of proportionality, whereby systems that pose a high risk to the rights and safety of individuals must comply with its requirements. Specifically, the EU AI Act differentiates between AI systems that present (i) an unacceptable risk, (ii) a high risk, and (iii) a low or minimal risk. Unacceptable risks, which are prohibited outright, include subliminal techniques used to exploit the vulnerabilities of specific groups such as minors or persons with disabilities.3

Per the EU AI Act, high-risk AI systems include those related to the following:4

  1. Biometric identification and categorization of individuals
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment, workers management, and access to self-employment
  5. AI systems intended to be used by public authorities to evaluate the eligibility of individuals for public assistance
  6. Law enforcement
  7. Border control management
  8. Administration of justice and democratic processes

From a commercial perspective, we expect the most common high-risk AI systems will center on education, security (facial recognition), and the employment/recruiting function, especially for multinationals based outside the EU. Modern systems utilized by utilities to manage critical infrastructure will also warrant significant attention under the EU AI Act.

AI systems determined to be high risk are subject to many compliance requirements under the EU AI Act. Specifically, Chapters 2 and 3 of the EU AI Act contain roughly 25 pages of requirements focused on the following:5

  1. Implementing a risk management system
  2. Data and data governance 
  3. Technical documentation
  4. Record-keeping
  5. Transparency and provision of information to users
  6. Human oversight
  7. Accuracy, robustness, and cybersecurity

Our next article in this series will focus on the practical aspects of implementing an AI risk management system as required under the EU AI Act. We will then turn to the responsibilities of users of AI under the EU AI Act.

1. EU AI Act Article 2 - Scope.
2. EU AI Act Article 71 - Penalties.
3. EU AI Act Section 5.5.2 - Prohibited Artificial Intelligence Practices.
4. EU AI Act Annex III.
5. EU AI Act Chapter 2 - Requirements for High-Risk AI Systems.

© Copyright 2024. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.

