
Generative AI Risks: Legal and Compliance Insights - Part 2

The Bottom Line: Five Practical Steps for Generative AI Risk Management

As the first line of defense, employees within business operations must own and manage risks related to the business, including risks arising from the use of generative artificial intelligence (AI) and other emerging technologies, to achieve organizational objectives. However, it is incumbent on the compliance function, as the second line of defense and the connective tissue within an organization, to evaluate and enhance compliance processes and controls to effectively identify and mitigate emergent AI-related risks. While many of the specific risks that generative AI systems pose to companies are heavily context- and industry-specific, we outline below five overarching practical steps that companies adopting these technologies should consider implementing to right-size their compliance programs for changes in the organization's risk profile due to generative AI:

1. Recalibrate the Compliance Governance and Oversight Structure for AI Risks

The U.S. Sentencing Commission's definition of an effective compliance and ethics program and the Department of Justice (DOJ) Criminal Division's Evaluation of Corporate Compliance Programs (ECCP) guidance both highlight the important role that boards and governing bodies play in overseeing the performance of the compliance function and the effectiveness of the compliance program. Effective program governance includes clearly defining roles, responsibilities, and information reporting processes for compliance risk management to support oversight and decision-making.

Given the minefield of potential AI risks that companies must navigate across multiple business functions, it is critical that they recalibrate their governance and oversight mechanisms, including governance strategy, organizational model, and roles and responsibilities, to account for the impacts of generative AI on the business. A company's board of directors and relevant sub-committees should be knowledgeable about the company's use of AI and its associated risks and exercise reasonable oversight of enhancements to the compliance program.

As a practical matter, a company should consider updating its compliance charter (or other written governance documents) to clearly define roles, working relationships, and reporting obligations related to generative AI risk management for business, IT, legal, compliance, and audit leaders. Furthermore, a company should consider appointing an AI expert to an existing governing body or, for companies in heavily regulated industries using AI, establishing an AI ethics committee to effect change, influence company decisions, and ensure that the deployment of AI strategies aligns with the company's ethical standards and policies.

2. Refresh the Enterprise-Wide Compliance Risk Assessment To Incorporate Emergent AI Risks

Risk assessment is the foundation of an effective compliance program. As an immediate practical step, compliance leaders should consider refreshing their compliance risk assessment to incorporate the risks of generative AI use. A detailed and thoughtfully designed risk assessment can surface pockets of previously unknown inherent risk within an organization. As discussed in part one of our two-part series on generative AI risk management, the use of AI can touch many business functions, from cybersecurity to marketing to HR, and the risks are real and evolving. It is therefore critical that legal and compliance leaders prioritize the assessment of generative AI risks (across geographies and business units) and design and implement appropriate risk mitigation measures.

To be sure, the notion of identifying and evaluating generative AI risks may be daunting for compliance professionals who may lack the requisite technology expertise. Therefore, it is imperative that compliance leaders seek out appropriate subject matter experts (either in-house or external) to aid in proper risk identification and design of new processes and controls to mitigate AI-related risks.

Following the risk assessment, legal and compliance leaders should prioritize the highest risk areas within the business for monitoring activities and identify improvement opportunities in existing controls related to the prevention and detection of risk. 

3. Implement Enhanced Prevention and Detection Measures To Support AI-Related Operational Excellence

A well-designed compliance program has a robust set of processes and controls to prevent and detect misconduct. For example, preventative measures include policies, procedures, and education while detective measures include auditing, monitoring, and confidential reporting. Legal and compliance professionals should revisit existing policies and procedures and consider if updates are appropriate to reflect how the business is using generative AI technology and tools in its business operations and product or service offerings.

More broadly, companies should consider implementing the following enhanced control measures:

  • Define processes for intake and implementation of new regulations (both in the United States and abroad) relative to AI and emerging technologies; staying abreast of evolving laws, industry-specific regulations, and leading practices is more important than ever to ensure that AI systems meet all relevant requirements and to avoid non-compliance; legal and compliance leaders should consider seeking advice (internal or external) from AI technologists and regulatory experts to help them navigate the complexities of AI governance and risk management.
     
  • Incorporate clear markings on material that has been created by generative AI so employees are aware of what content must be reviewed prior to publication or delivery to customers or clients. Clearly labeling AI-generated content helps maintain trust with consumers. It is not enough to rely on technology alone; human oversight remains crucial. One way to implement such markings is shown in the first sketch after this list.
     
  • Implement regular monitoring and review of AI system usage and testing for the trustworthiness of output; compliance leaders should liaise with IT and other relevant functions to ensure that strict security protocols are in place to protect AI models from penetration by bad actors, and consider the degree to which compliance professionals can support real-time monitoring of AI system use (e.g., monitoring AI outputs for potential IP violations); compliance leaders should also work with internal audit to design an audit program around sample testing the output from AI systems for compliance with company policy and, when promulgated, applicable federal standards and regulations (e.g., internal audit and HR conducting periodic audits of AI decision-making systems for bias in HR processes); the second sketch after this list illustrates a simple sample-testing approach.
     
  • Define processes for early warning of enforcement signals relative to peer companies (e.g., companies engaged in AI washing); the DOJ's ECCP highlights that companies should understand and assess risk factors and lessons learned from peer companies to enhance their compliance programs.
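To make the content-labeling measure concrete, the following is a minimal Python sketch of one way to attach a visible marker and provenance metadata to an AI-generated draft before human review. The marker text, the LabeledContent record, and the label_ai_content() helper are illustrative assumptions, not a standard; a real implementation would be tailored to the company's content-management workflow.

```python
# Minimal sketch: attach a visible marker and provenance metadata to an
# AI-generated draft before it enters human review. AI_CONTENT_MARKER,
# LabeledContent, and label_ai_content() are hypothetical names used for
# illustration only.
from dataclasses import dataclass
from datetime import datetime, timezone

AI_CONTENT_MARKER = "[AI-GENERATED - REVIEW REQUIRED BEFORE PUBLICATION]"

@dataclass
class LabeledContent:
    body: str
    model_name: str         # which generative model produced the draft
    created_at: str         # UTC timestamp for the audit trail
    reviewed: bool = False  # set by a human reviewer, never by code

def label_ai_content(draft: str, model_name: str) -> LabeledContent:
    """Prepend a visible marker and attach provenance metadata to a draft."""
    return LabeledContent(
        body=f"{AI_CONTENT_MARKER}\n{draft}",
        model_name=model_name,
        created_at=datetime.now(timezone.utc).isoformat(),
    )

# Usage: route every generative AI draft through the labeling step, and gate
# publication on the human-controlled `reviewed` flag.
draft = label_ai_content("Q3 market summary ...", model_name="internal-llm-v1")
assert draft.body.startswith(AI_CONTENT_MARKER)
assert not draft.reviewed  # blocked from publication until a human signs off
```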
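Similarly, for the sample-testing measure, the following minimal Python sketch shows how internal audit might randomly sample logged AI outputs and flag those that fail a policy check. The restricted-terms check and the audit_sample() helper are placeholder assumptions; real checks would be defined jointly by compliance, internal audit, and subject-matter experts.

```python
# Minimal sketch: sample-based audit testing of logged AI outputs against a
# placeholder policy check. contains_restricted_terms() and audit_sample()
# are hypothetical helpers for illustration only.
import random

def contains_restricted_terms(output: str, restricted: set[str]) -> bool:
    """Placeholder policy check: flag outputs containing restricted terms."""
    lowered = output.lower()
    return any(term in lowered for term in restricted)

def audit_sample(outputs: list[str], sample_rate: float,
                 restricted: set[str]) -> list[str]:
    """Randomly sample logged outputs and return those failing the check."""
    sample_size = max(1, int(len(outputs) * sample_rate))
    sample = random.sample(outputs, sample_size)
    return [o for o in sample if contains_restricted_terms(o, restricted)]

# Usage: audit 10% of the week's logged outputs against a restricted-term
# list maintained by compliance, and route flagged items for manual review.
logged_outputs = [
    "Draft client reply 1 ...",
    "Draft client reply 2 quoting Competitor X pricing ...",
]
flagged = audit_sample(logged_outputs, sample_rate=0.10,
                       restricted={"competitor x pricing"})
print(f"{len(flagged)} output(s) flagged for compliance review")
```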

4. Implement Appropriate Security and Quality Controls Over Data Inputs to AI Systems

AI models often rely on vast amounts of data sourced from disparate systems. If those inputs are manipulated or poisoned, the AI-generated outputs will be skewed and potentially harmful to the organization and its consumers. It is therefore critical that companies enhance security and quality controls over the data feeding their AI systems. Legal and compliance leaders must maintain close partnerships with IT and cybersecurity to embed compliance steps and quality controls over AI data inputs into business processes; the integrity of data inputs is foundational to an effective and useful AI system. The sketch below illustrates one form such controls could take.
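As a rough illustration, the following minimal Python sketch shows quality and integrity gates on data inputs to an AI system, assuming records arrive as dictionaries from upstream systems and that known-good checksums are registered at the source. The required-field schema, field names, and checksum registry are hypothetical.

```python
# Minimal sketch: quality and integrity gates on data inputs to an AI system.
# REQUIRED_FIELDS and the checksum registry are hypothetical assumptions; a
# real pipeline would define these with IT and cybersecurity.
import hashlib

REQUIRED_FIELDS = {"record_id", "source_system", "payload"}

def checksum(raw_bytes: bytes) -> str:
    """Content hash used to detect tampering between source and AI pipeline."""
    return hashlib.sha256(raw_bytes).hexdigest()

def validate_input(record: dict, expected_checksums: dict[str, str]) -> list[str]:
    """Return a list of control failures; an empty list means the record passes."""
    failures = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        failures.append(f"missing required fields: {sorted(missing)}")
    expected = expected_checksums.get(record.get("record_id", ""))
    if expected is not None:
        actual = checksum(str(record.get("payload", "")).encode("utf-8"))
        if actual != expected:
            failures.append("payload checksum mismatch: possible tampering")
    return failures

# Usage: quarantine any record that fails a control instead of feeding it to
# the model, and log the failure for compliance review.
record = {"record_id": "r-001", "source_system": "crm", "payload": "..."}
registry = {"r-001": checksum(b"...")}
print(validate_input(record, registry))  # [] means the record passes
```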

5. Roll Out AI Compliance Training to Employees Based on Risk Exposure

Comprehensive, practical, and periodic compliance training and education for directors, officers, employees, and certain third parties are hallmarks of an effective ethics and compliance program. Companies should consider updating their compliance training programs to incorporate educational material and employee communications addressing generative AI and related emerging technologies.

Many organizations require all employees to complete mandatory, often annual, code of conduct training and supplement that broad-based training with additional modules focused on specific risk topics such as data privacy, cybersecurity, anti-bribery, and trade sanctions. Organizations could consider updating their all-employee compliance training to highlight the company's use of AI and its associated risks, while also developing and deploying enhanced training for employee groups whose roles carry heightened compliance risk exposure.

To enhance the training experience and effectiveness, companies should also consider incorporating the following into their training programs:

  • Gamification techniques and scenario-based, interactive learning to support knowledge retention in an immersive environment. 
  • Deep-fake video (of a senior business leader, for example) to showcase the power of AI and how it can easily be utilized by bad actors to influence and cause harm to an organization or mislead consumers. 
  • Communication from the CEO, or another senior business leader, at the outset of the employee training on AI to highlight its importance and alignment with the company’s values, culture, and mission.

Companies could also consider conducting a tabletop exercise for the board of directors, audit committee, and/or risk committee on generative AI and emergent risks, including how the company is using AI in its business operations and products or services, and the company’s risk profile and risk mitigation strategy and framework related to AI.

 


© Copyright 2025. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.
 
