The Pending AI Regulatory Timeline: Preparing for AI Compliance
Artificial Intelligence (AI) is revolutionizing industries globally, from healthcare to finance, retail, technology, and education, enabling businesses and consumers alike to accomplish tasks more efficiently and effectively. As AI adoption accelerates, organizations utilizing AI must brace for new regulatory landscapes and prepare to comply with emerging AI regulations such as the Colorado AI Law and the EU AI Act. The Colorado AI Law is set to take effect on February 1, 2026, while the first set of provisions under the EU AI Act is already being enforced, with requirements for high-risk AI becoming mandatory in August 2026.
We have previously written about how organizations utilizing high-risk AI systems will need to develop and implement an AI Risk Management System as the foundation for their AI compliance program. During 2024, Ankura helped organizations assess their existing AI compliance efforts relative to the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework (NIST AI RMF) to develop an AI regulatory compliance roadmap.
As organizations gear up for similar projects this year, those aiming to comply with the Colorado AI Law by its February 1, 2026, effective date should initiate planning and readiness efforts now. This article serves as a guide to understanding AI categories, providing a foundation for effective risk management.
Understanding AI Categories: A Foundation for Risk Management
AI regulatory compliance is an emerging area for many professionals. In numerous organizations, the responsibility for AI regulatory compliance often resides within the privacy or cybersecurity office. However, AI regulatory compliance obligations seldom align with traditional privacy or cybersecurity requirements. As a result, these offices frequently face challenges in interpreting AI compliance obligations and translating them into actionable steps.
This presents a unique opportunity for organizations to collaborate with experts like Ankura for guidance on achieving AI regulatory compliance. At the start of our AI regulatory compliance projects, we find it beneficial to thoroughly explain the various types of AI systems, including current and future state applications. It is essential to categorize these AI systems to assess the risk levels and determine the necessary actions for regulatory compliance. The following sections explore the distinct categories that define these systems.
Five Types of AI Categories
The term AI describes systems that simulate humanlike intelligent behaviors, such as thinking, understanding, and accomplishing tasks, within the capacity of computers or other machines; machine learning (ML), a subset of AI, derives such insights from extensive datasets.1
The broad concept of AI is broken down into various categories based on capabilities and use cases, ranging from simple automation to complex predictive ML models, and including both models that exist today and hypothetical models on the near horizon. Although AI regulations do not yet share a single, uniform definition of AI, we know AI can be categorized as follows:
1. Narrow AI2
Defining Narrow AI, or Weak AI: A system that is designed to perform a specific task. Narrow AI can be broken down into the following subcategories:
- Reactive Machine AI: Only leverages available data to perform a specific task but does not store past experiences. A few examples include game bots, thermostats, and self-driving cars.
- Limited Memory AI: Uses past experiences to inform current decisions, though these machines retain that data only temporarily and cannot permanently add it to an experience library. A few examples include fraud detection systems, virtual assistants, and spam filters.
- Specialized AI: Designed for specific industry applications and tailored to meet the needs of specific tasks.3 A few examples include:
- Financial AI: algorithmic trading, fraud detection, and risk management
- Industrial AI: predictive maintenance, supply chain optimization, and quality control
- Educational AI: tutoring and personalized learning platforms
2. General AI
Defining General AI, or Strong AI: A hypothetical system that would enable machines to develop a versatile understanding of the world, with agile adaptability and human-like problem-solving capabilities; Narrow AI, by contrast, lacks the flexibility to adapt to new tasks in this way.
Researchers have yet to achieve Strong AI, but companies such as Microsoft, through its investment in OpenAI, are investing heavily in its development.
3. High-Risk AI
Defining High-Risk AI Systems: Systems that are typically characterized by their ability to make “consequential decisions.” These decisions often affect critical areas such as education, employment, financial services, housing, healthcare, or legal services. The impact of these AI-driven decisions can be significant, for example resulting in the denial of housing, education, or financial services based on the outcome of the AI process.
Key considerations for High-Risk AI include:
Avoiding Algorithmic Discrimination and Developer Responsibilities: It is critical that automated decision-making AI systems are not developed with algorithmic discrimination or bias, and that deployments of high-risk AI systems are reviewed regularly to confirm they remain free of it. Similar to human decision-making, AI systems typically rely on heuristics, or cognitive rules and shortcuts, to process large amounts of data efficiently. Developers have a responsibility to ensure that these systems incorporate principles like the Equality Heuristic, which prioritizes fairness and prevents discriminatory outcomes.4
Critical Deployment Areas and Implications for Oversight: High-risk AI systems are often deployed in critical areas where human error or misuse can lead to severe consequences. If not properly designed, heuristics can introduce biases into industries where fairness and accuracy are crucial, such as healthcare or autonomous vehicles. Ensuring AI systems do not discriminate will require contemporaneous review of automated decisions (e.g., real-time monitoring or periodic audits) with human intervention where necessary, an approach likely to become best practice in the field of privacy by design.
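As one illustration of what such a periodic audit might compute, the Python sketch below (using hypothetical data) compares selection rates across demographic groups against a "four-fifths" ratio, one common screening heuristic rather than a legal test of discrimination:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the approval rate for each demographic group.

    `decisions` is a list of (group, approved) pairs, e.g. the
    logged outcomes of an automated lending or hiring model.
    """
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold`
    times the highest group's rate (the 'four-fifths rule',
    a screening heuristic, not a legal standard)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical audit data: group A approved 3 of 4, group B 1 of 4.
decisions = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
print(disparate_impact_flags(decisions))  # {'A': False, 'B': True}
```

A flagged group would then trigger the human review described above; a production audit would also account for sample sizes and legitimate, non-discriminatory explanatory factors.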
Applicable Compliance and Legal Frameworks: Properly trained and deployed high-risk AI systems can augment safety and precision, mitigating human error across various industries and environments. Both the Colorado AI Law and the EU AI Act maintain specific definitions and examples of what constitutes high-risk systems. Furthermore, high-risk systems as defined by the Colorado AI Law and EU AI Act trigger specific compliance obligations.
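To make the categorization step concrete, a minimal first-pass inventory check might look like the following sketch; the domain list simply paraphrases the consequential-decision areas named above, and actual scoping must follow the statutory definitions in the Colorado AI Law and the EU AI Act:

```python
# Illustrative only: these domains paraphrase the "consequential
# decision" areas discussed in the text, not the statutory lists.
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial_services",
    "housing", "healthcare", "legal_services",
}

def categorize(system_name, domains):
    """Return a first-pass risk bucket for an inventoried AI system,
    based on whether it touches any consequential-decision domain."""
    if set(domains) & CONSEQUENTIAL_DOMAINS:
        return f"{system_name}: high-risk - assess compliance obligations"
    return f"{system_name}: lower-risk - document and monitor"

print(categorize("resume screener", {"employment"}))
print(categorize("spam filter", {"email"}))
```

A real inventory would capture far more (purpose, data sources, vendors, human oversight), but even this coarse flag helps route systems to the deeper assessment the regulations require.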
4. Quantum AI5
Defining Quantum AI: Quantum computing uses qubits, or quantum bits, which can exist in multiple states simultaneously, whereas classical bits can exist in only one of two states.6 Qubits enable quantum computers to process certain complex computations at extraordinary speeds (in one reported benchmark, 100 million times faster than a conventional computer chip), so they may solve problems that are currently intractable for conventional computers. Quantum AI has the potential to revolutionize AI systems and drive new applications.
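To illustrate superposition itself (not quantum speedups, which classical hardware cannot reproduce), a single qubit can be simulated as a two-component state vector whose squared amplitudes give measurement probabilities; this sketch assumes NumPy:

```python
import numpy as np

# A classical bit is either 0 or 1; a qubit's state is a length-2
# complex vector whose squared amplitudes are measurement probabilities.
ket0 = np.array([1.0, 0.0])   # the |0> basis state

# The Hadamard gate puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = H @ ket0

probs = np.abs(psi) ** 2      # probability of measuring 0 or 1
print(probs)                  # both outcomes equally likely (0.5 each)
```

Until measured, the qubit is genuinely in both states at once, which is the property quantum algorithms exploit to explore many possibilities in parallel.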
Practical Applications of Quantum Machine Learning (QML): QML combines machine learning with quantum computing, running quantum algorithms that can improve the efficiency and accuracy of machine learning models.7 These techniques can be applied in a variety of cases:
- Medicine and Healthcare: Accurate simulations of biological processes can improve diagnostics, drug discovery, and personalized medicine.
- Materials Science: Accurate simulations of atomic structure facilitate the discovery and design of new materials with tailored characteristics.
- Sensing: Quantum sensors can measure physical quantities with extreme precision, enhancing medical imaging, environmental monitoring, predictive simulations, and navigation.
- Communication Systems: Leverage quantum principles, such as entanglement, to develop highly secure, nearly unhackable communication channels.8
5. Artificial Superintelligence
Defining Artificial Superintelligence, or Super AI: A hypothetical AI system that would outperform humans at any task imaginable, like an immortal genius able to comprehend any scientific domain and solve virtually any problem human beings might face.9 The prospect of an AI system superior to any human in cognitive problem-solving and creativity raises ethical and social debates, which may slow its arrival even as technology accelerates toward the point where such a system could exist.
Understanding AI Risks: Key Takeaways for Navigating Compliance
From the existing Narrow AI to the hypothetical Artificial Superintelligence, the evolution of AI presents extraordinary opportunities along with significant challenges. For organizations in any industry, navigating these challenges requires an understanding of the varying levels of risk and the compliance obligations associated with AI regulations. To harness the power of AI successfully, organizations must conduct thorough AI risk assessments, categorize AI systems accurately, and develop and implement comprehensive AI governance programs. By doing so, they can remain compliant with the emerging regulatory landscape while leveraging various types of AI to their full potential to drive efficiency and innovation.
Interested in learning more? Check out our AI Governance and Risk Management Masterclass.
Elevate your expertise in AI governance and risk management with our comprehensive masterclass, hosted by Ankura, Mayer Brown, and BreezeML. Gain in-depth insights and effective strategies for implementing the NIST AI Risk Management Framework standards. Get your complimentary access today.
Get the latest must-reads and insights surrounding data privacy and AI regulatory compliance delivered straight to your inbox.
For more information and continued learning, sign up for our newsletter, the Data Privacy Reporter.
Sources
[1] “Artificial Intelligence, N.” Oxford English Dictionary, Oxford UP, December 2023
[2] Satokangas, Kim. "Artificial Intelligence & Large Language Models: Cooperative AI story writing." (2024).
[3] AI-Based Modeling: Techniques, Applications and Research Issues Towards Automation, Intelligent and Smart Systems
[4] Savion, L. (2015). The rationality of mental operations [Course]. Indiana University.
[5] Raj, P., Kumar, A., Dubey, A. K., Bhatia, S., & Manoj, O. S. (2023). Quantum computing and artificial intelligence: Training machine and deep learning algorithms on quantum computers. Walter de Gruyter GmbH & Co KG.
[6] Google Quantum AI Article: What is quantum computing?
[7] Spair, R. (2023). The quantum leap: How quantum computing will revolutionize AI: #quantumcomputing.
[8] A day in the life of a quantum technology company (2024). Quantopticon.
[9] Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford UP, 2014.

© Copyright 2025. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.