Introduction
Generative artificial intelligence (AI) is transforming how businesses operate, from content creation to decision-making and even legal and compliance analysis. The use of machines to perform core deliberative and inventive functions offers tantalizing potential boosts to efficiency and productivity, and businesses are responding with billions in investments and rapidly increasing rates of adoption. But beneath the promise of generative AI lies a minefield of potential risks that companies must identify with care and navigate with caution. Adopting generative AI without appropriate guardrails exposes organizations to legal, compliance, ethical, and operational risks that, if left unchecked, could lead to costly litigation, financial penalties, reputational damage, or regulatory scrutiny. For legal and compliance professionals, particularly those in corporate environments, the challenge is not just to recognize these risks but to build and implement comprehensive compliance frameworks that ensure emerging technologies, such as generative AI, serve the company responsibly. From intellectual property concerns to data privacy, the responsibility of safeguarding against AI-related risks extends across multiple departments within an organization, each with its own critical role to play.
One of the most significant challenges in managing the compliance risks associated with generative AI is the uncertain, patchwork legal and regulatory landscape. Congress has yet to promulgate federal standards on foundational issues related to AI safety, trustworthiness, and accountability. Courts are wrestling over the extent to which basic legal principles that undergird the economy — such as property rights and liability rules — should apply to smart machine output. State legislatures and federal agencies are stepping in to fill the rulemaking void, but they are in a reactive posture and are struggling to keep up with technological developments. Generative AI simply has not existed long enough for consensus to emerge around private industry standards.
The upshot is that general counsels and compliance officers across the country are having to construct AI safety and compliance programs from whole cloth, with little experience or uniform authority on which to rely, to address significant risks with which they may be unfamiliar. Effective compliance demands more than an omnibus approach. To monitor generative AI properly, compliance teams need to understand how the technology works: the models are probabilistic, producing responses based on statistical patterns learned from training data, and they are designed to continue a prompt whether or not reliable data exists, which means they often generate plausible-sounding outputs even when certainty is lacking. These characteristics introduce unique compliance challenges that vary significantly between departments. Marketing, HR, and finance each interact with AI differently, which calls for department-specific compliance policies and procedures that emphasize transparency, accountability, and data-driven oversight. This article covers the importance of moving from generic, organization-wide compliance guidelines to context-dependent approaches, recognizing the role that AI can play in maintaining both regulatory compliance and company security.
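To make that dynamic concrete, the toy Python sketch below (with an invented prompt, vocabulary, and probabilities, used purely for illustration) shows the basic mechanism: the model samples the next word from a learned probability distribution and returns a fluent continuation even when no option is strongly supported by its training data.

```python
import random

# Toy next-word probabilities illustrating (not implementing) how a generative
# model behaves: it always produces a plausible continuation, even when no
# option is strongly supported. Prompt and probabilities are invented.
NEXT_TOKEN_PROBS = {
    "The contract was signed on": {
        "March": 0.21, "April": 0.20, "May": 0.20, "June": 0.20, "July": 0.19,
    },
}

def continue_prompt(prompt: str) -> str:
    probs = NEXT_TOKEN_PROBS[prompt]
    tokens, weights = zip(*probs.items())
    # The model samples from the distribution; it does not check whether any
    # answer is actually supported, so a weak guess reads just as fluently as
    # a well-grounded one.
    choice = random.choices(tokens, weights=weights, k=1)[0]
    return f"{prompt} {choice}"

print(continue_prompt("The contract was signed on"))
```

The output reads confidently no matter how thin the underlying evidence is, which is precisely why department-level verification procedures matter.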
Cybersecurity: A New Frontier of Vulnerabilities
In the realm of cybersecurity, the integration of generative AI opens doors to both opportunity and vulnerability. On the one hand, AI systems can enhance defenses by rapidly analyzing threats and responding to cyberattacks in real time. On the other hand, these very same systems can become targets themselves. Cybersecurity teams must continuously grapple with potential vulnerabilities that stem from model inversion attacks, where malicious actors reverse-engineer an AI model to extract sensitive information. Equally dangerous is the threat of data poisoning, where bad actors deliberately feed corrupt data into AI systems, skewing the output and rendering the systems ineffective. Legal departments, in collaboration with IT, must ensure that strict security protocols are in place to protect AI models. Regular penetration tests, securing data used in AI training, and real-time monitoring are essential safeguards in this emerging landscape.
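As one illustration of the training-data safeguard, the sketch below assumes a hypothetical practice of recording approved checksums for training files when they are reviewed, so that silently altered or injected files can be caught before they reach a model. The file name and digest are invented placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest of approved training files and their SHA-256 digests,
# recorded when the dataset was last reviewed. Entries are illustrative only.
APPROVED_DIGESTS = {
    "training_data/contracts_2023.csv":
        "9f2c0a4d6b1e8c7a5f3d2b0e4a6c8d1f0b3e5a7c9d2f4b6e8a0c1d3f5b7e9a2c",
}

def verify_training_file(path: str) -> bool:
    """Return True only if the file's current digest matches its approved digest."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    approved = APPROVED_DIGESTS.get(path)
    if approved is None or digest != approved:
        # An unknown or altered file is a possible poisoning attempt; flag it
        # for human review instead of feeding it to the model.
        return False
    return True
```

A check like this does not prevent poisoning on its own, but it creates a documented control point that legal and IT can audit together.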
A close partnership between legal and cybersecurity/IT is critical to reducing liability, particularly as regulations around AI security are still in development.
Intellectual Property (IP): Protecting What’s Yours (and Theirs)
Courts across the country are grappling with the extent to which IP rights apply in the context of generative AI. Several foundational questions with enormously significant consequences to companies’ bottom lines and liability exposure remain largely unresolved, including whether, and to what extent, the use of copyrighted materials to train generative AI systems might be considered fair use or infringement; whether AI output is copyrightable; and whether AI output can itself infringe existing works. Ultimately, Congress will likely have to supplement existing copyright laws to answer these questions. In the meantime, businesses are well-advised to err on the side of caution. Legal departments should consider implementing policies and procedures to ensure that datasets used for training AI models are either licensed or in the public domain. Companies should assume that their own generative AI output is not subject to copyright protection. Finally, companies should assume that AI output can infringe existing works, and carefully monitor outputs for potential IP violations — specifically, for “substantial similarity” between AI-generated material and existing protected works. That monitoring becomes an ongoing task, requiring diligence and cross-departmental cooperation.
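Substantial similarity is ultimately a legal judgment rather than a numerical test, but a coarse screening step can help route questionable outputs to counsel. The sketch below is a minimal, hypothetical example of such a screen, measuring word n-gram overlap between an AI draft and a reference work; the threshold and sample texts are invented, and any real program would tune the screen and pair it with human review.

```python
def ngram_set(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(ai_output: str, reference_work: str, n: int = 5) -> float:
    """Share of the AI output's word n-grams that also appear verbatim in the reference."""
    out_ngrams = ngram_set(ai_output, n)
    if not out_ngrams:
        return 0.0
    return len(out_ngrams & ngram_set(reference_work, n)) / len(out_ngrams)

FLAG_THRESHOLD = 0.10  # hypothetical; each company would set and validate its own
ai_draft = "sample AI-generated marketing copy goes here"      # placeholder text
reference = "a protected work the company licenses goes here"  # placeholder text
if overlap_ratio(ai_draft, reference) > FLAG_THRESHOLD:
    print("Route draft to legal review")
```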
Human Resources (HR): Tackling Bias and Promoting Fairness
While generative AI promises efficiency, its deployment in HR can be fraught with risks, particularly when it comes to bias. Generative AI models trained on historical data may replicate existing biases, leading to discriminatory outcomes in hiring, promotions, or employee evaluations. Companies eager to leverage AI’s speed and analytical power in HR processes must recognize that these systems, left unchecked, could perpetuate inequalities and result in legal and reputational harm to the organization. Employment and federal civil rights laws prohibit discrimination, even when it is done unintentionally, and companies found to be using biased AI in their HR processes could face significant penalties. Indeed, the heads of four federal agencies primarily responsible for enforcing federal civil rights laws — the Department of Justice’s (DOJ) Civil Rights Division, the Equal Employment Opportunity Commission, the Federal Trade Commission, and the Consumer Financial Protection Bureau — have issued statements declaring, essentially, that in their view, the federal civil rights laws apply to smart machine output as if it were human conduct.
But it is not just about avoiding legal trouble — there is a moral imperative to ensure that generative AI systems promote fairness and diversity. To address this, HR departments, working closely with legal and Diversity, Equity, and Inclusion (DEI) teams, should consider conducting regular audits of AI decision-making systems and ensure the AI models are trained on diverse and representative datasets. Only then can organizations safeguard themselves from claims of bias and create a truly inclusive workplace.
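One concrete metric such an audit might include is the EEOC’s four-fifths (80%) rule of thumb for comparing selection rates across groups. The sketch below computes it from hypothetical, invented records of AI hiring recommendations; a flag is a trigger for closer scrutiny, not by itself proof of unlawful bias.

```python
from collections import Counter

# Hypothetical audit records: (applicant_group, ai_recommended_hire).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    totals, selected = Counter(), Counter()
    for group, hired in records:
        totals[group] += 1
        selected[group] += int(hired)
    return {group: selected[group] / totals[group] for group in totals}

rates = selection_rates(decisions)
# Four-fifths rule of thumb: a group's selection rate below 80% of the highest
# group's rate is a common trigger for deeper review of the model and its data.
benchmark = max(rates.values())
flagged = {group: rate for group, rate in rates.items() if rate < 0.8 * benchmark}
print(rates)
print(flagged)
```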
Compliance: Navigating Complex Regulatory Waters
As industries from healthcare to financial services rapidly adopt generative AI technologies, legal and compliance teams find themselves in uncharted territory. Regulatory bodies, both in the United States and abroad, are racing to keep up with AI’s evolving capabilities. Whether it is the General Data Protection Regulation in Europe or the AI-specific regulations emerging in various U.S. states, companies need to stay ahead of the curve to avoid non-compliance. For legal and compliance officers, this means developing a proactive governance strategy and framework for risk mitigation. One of the key risks is failing to adhere to industry-specific regulations that may not have clear guidelines for AI but still impose stringent standards around data handling, transparency, and accountability. Legal teams must stay informed about regulatory changes and ensure that their AI systems meet all relevant requirements. This involves not only internal audits and monitoring but also external consultations with regulatory experts to navigate the complexities of AI governance.
Marketing: Ensuring Authenticity in Content Creation and Preventing AI “Washing”
The marketing department has one of the most exciting yet potentially precarious relationships with generative AI. AI can produce vast quantities of content — ads, social media posts, product descriptions — at a scale and speed that no human team could match. But with this power comes a new set of risks. Using AI to create customer-facing content introduces concerns about originality, copyright, and consumer trust. A well-meaning marketing campaign could quickly backfire if AI generates content that inadvertently borrows from protected works or, worse, crosses ethical boundaries. Marketing teams must balance creativity with compliance, ensuring that the content they deploy is both original and legally sound. Legal departments should collaborate closely with marketing teams to develop clear guidelines for AI-generated content, making sure that it meets legal and ethical standards before it goes public.
Marketing teams must also be wary of “AI washing” — that is, exaggerations or outright false claims by a company as to how it is using AI systems to enhance its operations, and/or the effects that AI systems are having on the company’s productivity or profitability. Given the fanfare around generative AI and the enormous resources that companies are pouring into the technology, investors expect to see — and boards are eager to demonstrate — meaningful returns. However, any public representations that a company makes regarding its use of AI systems must be carefully vetted by IT and legal departments to ensure that they are accurate, demonstrable, and not potentially misleading to investors. U.S. regulators, particularly the Securities and Exchange Commission (SEC) and the Federal Trade Commission (FTC), are keenly focused on scrutinizing corporate representations regarding AI use, and over the past two years have brought several actions that have resulted in significant financial penalties against companies that could not back up their public claims regarding the benefits they were reaping because of AI adoption. Thus, it is imperative that companies adopt and implement multi-dimensional compliance processes designed to ensure accuracy. Those processes should, at a minimum, require: (1) participation from marketing, IT, and management personnel; (2) adequate documentation and other evidence ensuring that all AI-related statements are factual and substantiated; and (3) regular audits and training for employees responsible for AI-related communications.
Ethics Committees: Defining the Boundaries
For some organizations, particularly those in regulated industries like healthcare or financial services, the establishment of an internal AI ethics committee is becoming standard practice. These committees are tasked with overseeing how AI is used within the organization and ensuring that its deployment aligns with the company’s ethical standards and policies. The role of an ethics committee is to ask the challenging questions: Are AI-generated decisions transparent and explainable? Are we considering the long-term societal impact of our AI tools? As legal and compliance frameworks around AI are still developing, these committees provide an essential layer of oversight, guiding companies through ethical dilemmas that may not yet be codified in law but are critical to maintaining public trust.
Criminal Exposure: Preventing Intentional Misuse of AI Systems
In September 2024, the DOJ’s Criminal Division released updates to its Evaluation of Corporate Compliance Programs (ECCP) guidance, which federal prosecutors use to evaluate “the adequacy and effectiveness” of a business’s compliance program at the time that a criminal offense occurs within the entity, as part of the overall analysis as to whether the entity should be charged along with any offending individuals. The most significant new criteria pertain to AI risk management and largely focus on the extent to which corporations have compliance mechanisms in place to ensure that AI technology is used only for its intended purposes by authorized and adequately trained personnel, and to detect and mitigate unauthorized or intentional misuse of the technology.
One of the DOJ’s primary concerns seems to be corporate insiders’ use of generative AI to create elaborate and extensive false documentation that could be used to facilitate classic forms of corporate crime, such as accounting fraud and embezzlement. Recent studies have also shown that certain generative AI platforms can be tricked into providing detailed advice on how to commit complex crimes, including cross-border money laundering and evasion of international sanctions regimes. In short, generative AI can be used in multiple ways to make criminals smarter, more prolific, and harder to detect.
To prevent deliberate, criminal misuse of AI, legal departments must collaborate closely with accounting departments and IT to establish: (1) strict employee access restrictions for generative AI systems; (2) appropriate training on the use of generative AI systems and the legal and employment consequences of intentional misuse; (3) multiple layers of human review/verification when generative AI is used for accounting purposes; (4) prohibitions on the use of AI systems to authorize corporate payments; and (5) regular monitoring and review of employees’ use of generative AI systems.
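As a rough illustration of how the first and fifth of those controls could work together in practice, the sketch below assumes a hypothetical internal generative AI gateway that checks each request against an allow-list of roles and approved purposes and writes every decision to an audit log; the role and purpose names are invented.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai_usage")

# Hypothetical allow-list mapping internal roles to the purposes for which
# they are authorized to use the generative AI gateway. Names are illustrative.
AUTHORIZED_PURPOSES = {
    "marketing_analyst": {"draft_copy"},
    "hr_specialist": {"summarize_policies"},
}

def authorize_request(user: str, role: str, purpose: str) -> bool:
    """Gate each request and record the decision for later compliance review."""
    allowed = purpose in AUTHORIZED_PURPOSES.get(role, set())
    audit_log.info(
        "time=%s user=%s role=%s purpose=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, purpose, allowed,
    )
    # Requests for unapproved purposes (for example, generating payment
    # authorizations or supporting documentation for journal entries) are
    # refused here and surface in the log for regular review.
    return allowed

print(authorize_request("jdoe", "marketing_analyst", "draft_copy"))       # True
print(authorize_request("jdoe", "marketing_analyst", "approve_payment"))  # False
```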
Conclusion: A Collective Responsibility
The integration of generative AI into corporate life brings immense potential, but it also introduces risks that can have far-reaching consequences. Legal and compliance professionals are uniquely positioned to navigate these challenges, but they cannot do it alone. The responsibility of managing AI risks is shared across departments, from cybersecurity to marketing, and HR to public relations. By working collaboratively and developing strong compliance frameworks, companies can harness the power of generative AI while mitigating the risks. The future of AI is bright, but it requires careful, thoughtful governance to ensure that technology serves the company’s best interests — ethically, legally, and responsibly.
© Copyright 2025. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.