Setting the Stage: Banking AML Fines and Failures
At the end of 2022, the world saw some of the largest anti-money laundering (AML) penalties in history, with companies garnering nearly $5bn in fines industry-wide. The heftiest fines were levied against Danske Bank ($2bn), Credit Suisse ($234m), and USAA Bank ($140m). According to a report by the Financial Times, AML fines in 2022 increased 50% over the previous year.
As the financial services industry looks to bolster programs that mitigate risk and reduce the violations behind these record fines, it does so in an era when Artificial Intelligence (AI) is growing exponentially. 2022 marks the year AI became widely accessible to the public, and many now believe it could prove as transformative to society as the Industrial Revolution. In this Q&A, Danielle Tobb and Robert Reed explore the application of AI to combat money laundering in the financial and insurance industries and examine its practical use in preventing a newly prevalent fraud scheme used by terrorists: the abuse of crowdfunding applications.
Do you see any patterns in the AML deficiencies found within banks in recent years?
There have certainly been recurring themes among AML violations. In the Danske Bank and USAA Bank cases, both banks were cited for willfully failing to implement and maintain adequate AML compliance programs, namely failing to adequately screen customers through know-your-customer (KYC) verifications and to perform transaction monitoring (TM). As a result, both banks failed to report thousands of suspicious transactions to FinCEN. The failure to ensure that automated transaction monitoring systems thoroughly scan transactions for money laundering and terrorist financing is among the most prevalent patterns at banks with AML failures in recent years.
The foundation of AML compliance comprises the five AML pillars, one of which is the implementation of effective internal controls. As banks have repeatedly failed to implement mechanisms to prevent and detect money laundering, it is crucial to pinpoint how they can tangibly enhance internal controls to avoid millions of dollars in fines and long-term reputational damage. One high-potential, and now practical, mechanism that has recently come to the forefront of the industry is AI.
Can you explain AI in terms of its practical application in preventing money laundering and terrorist financing activities?
At a macro level, AI enables financial institutions to veer away from traditional rules-based AML monitoring systems and implement sophisticated technology that can analyze far more extensive data in real time to identify patterns and anomalies and flag transactions for further investigation. AI introduces enhanced capabilities, including searching public records in seconds, not only to find additional patterns but to do so with far greater efficiency and consistency.
Currently, most financial institutions perform transaction monitoring with basic statistical programs driven by industry red flags, expert judgment, and simple statistical indicators. However, these rules-based programs often fail to incorporate the latest trends and regulations that apply to banks’ AML programs. AI, on the other hand, can leverage behavioral and predictive analytics to build and deploy sophisticated algorithms that quickly adjust to new trends and patterns. These advanced programs can significantly improve TM by reducing both false-negative and false-positive rates, and by sending higher-quality alerts up the chain for investigation.
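As a minimal sketch of the contrast described above, the Python example below compares a static rules-based threshold with a simple behavioral check that flags amounts far outside a customer’s own historical pattern. All amounts, the $10k rule, and the z-score cutoff are illustrative assumptions invented for this example, not any institution’s actual parameters; real behavioral models are far richer than a single z-score.

```python
from statistics import mean, stdev

# Illustrative per-customer history of transaction amounts (hypothetical data).
CUSTOMER_HISTORY = [120.0, 95.0, 140.0, 110.0, 130.0, 105.0]

RULE_THRESHOLD = 10_000.0  # classic static rule: flag anything at or above $10k


def rules_based_flag(amount: float) -> bool:
    """Traditional rules-based check: a fixed dollar threshold."""
    return amount >= RULE_THRESHOLD


def behavioral_flag(amount: float, history: list[float], z_cutoff: float = 3.0) -> bool:
    """Behavioral check: flag amounts far outside this customer's own pattern."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False
    return abs(amount - mu) / sigma > z_cutoff


# A $9,800 transfer slips under the static rule but is a glaring outlier
# against this customer's usual behavior.
print(rules_based_flag(9_800.0))                   # False
print(behavioral_flag(9_800.0, CUSTOMER_HISTORY))  # True
```

The design point is the one made above: the static rule misses structured amounts kept just below a threshold, while a model anchored to each customer’s behavior catches the anomaly.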
In addition to transaction monitoring, AI algorithms can automate the myriad manual tasks involved in verifying customer identities, including politically exposed person (PEP) screening. This is a particularly important application, as PEPs carry higher money laundering risks on their accounts. By employing AI to screen for and identify PEPs, banks can more efficiently assess the risks posed by those individuals. Furthermore, AI’s ability to identify patterns that may not be detectable by humans enables financial institutions to prioritize alerts, identify related entities and individuals, and spot suspicious transactions more readily. This ultimately helps institutions reduce the costs associated with compliance reviews and respond to potential threats at unparalleled speeds.
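As a hedged illustration of the screening automation described above, here is a minimal Python sketch of fuzzy name matching against a watchlist using the standard library. The watchlist names and the similarity threshold are invented for the example; production screening relies on far richer data (aliases, dates of birth, sanctions lists) and purpose-built matching engines.

```python
from difflib import SequenceMatcher

# Hypothetical watchlist entries -- illustrative names only.
PEP_LIST = ["Ivan Petrov", "Maria Gonzalez", "John A. Smith"]


def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace for comparison."""
    return " ".join(name.lower().replace(".", "").split())


def pep_matches(customer_name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watchlist names whose similarity to the customer exceeds threshold."""
    cand = normalize(customer_name)
    hits = []
    for entry in PEP_LIST:
        score = SequenceMatcher(None, cand, normalize(entry)).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits


# Fuzzy matching catches a deliberate one-letter misspelling.
print(pep_matches("Iwan Petrov"))  # [('Ivan Petrov', 0.91)]
```

The value for an analyst is that near-misses and transliteration variants surface automatically instead of depending on exact string equality.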
How can AI help an institution that utilizes complex technology become more efficient while ensuring data privacy and security?
A common concern when incorporating AI into institutions’ compliance programs is how to safeguard sensitive client information, such as personally identifiable information (PII), and critical information regarding escalations and investigations. The compliance team must work closely with the applicable cyber/data teams to ensure data privacy is considered when implementing any AI program. This is critical to minimizing the risk of a data breach, vulnerability to other abuses, and wrongful applications of the technology. To further reduce privacy concerns, the organization should add AI to its data governance policy to ensure proper consideration has been given to AI security and monitoring.
It is equally important for institutions to remember that technology, be it AI algorithms or any other software, can never truly replace human review. Technology should not and cannot stand in for, or override, the supervisory judgment of seasoned professionals who have the industry knowledge and experience to make more informed decisions regarding escalation or further review. Management must keep this guiding principle in mind as they establish AI policies and train employees on new AI platforms. That said, data analytics and AI can and should be leveraged to create efficiency and improve quality, especially in transaction monitoring.
A further key to AI implementation is the anticipation of the regulatory inquiries and concerns that are likely to arise. Institutions must balance governance with innovation when it comes to adopting AI. Institutions should ensure they avoid the “black box” approach, meaning management and employees should be able to explain how AI is applied to their AML compliance program and validate its accuracy through appropriate testing. These mechanisms need to be fully transparent as institutions bring regulatory and audit bodies along for the ride.
One way to ensure the transparent use of AI is for management and employees to be actively involved in the development and implementation of the software to ensure familiarity with the algorithm parameters and capabilities. This can help safeguard against data confidentiality and inconsistency risks as well as help employees better understand the decision-making processes.
Do you believe financial institutions will fall behind if they do not adopt AI into their TM and SAR processes?
There is not a one-size-fits-all answer to this question, nor is it a zero-sum game. It is possible for institutions to forgo AI and still have a mature and adequate KYC, TM, and Suspicious Activity Report (SAR) decision-making process. However, institutions that choose not to incorporate AI into these processes are unlikely to run as efficiently, or analyze as much information, as institutions that do.
That said, adopting and integrating AI requires time and investment; it is not a turnkey process. Bottlenecks exist, namely system development time, integration time, and the human challenge of understanding the programs in practice. Institutions that decide not to adopt AI will avoid the strenuous time and resources that adopting institutions must spend to vet AI software, train employees and management on the programs, and evaluate and tune them. However, once an institution develops and fully implements an AI system, it can process and decide on data at a scale and speed that traditional methods cannot match, allowing the institution to run at substantially lower cost and with greater efficiency. Regarding adoption timing, the December 2018 Harvard Business Review article "Why Companies That Wait to Adopt AI May Never Catch Up" by Vikram Mahidhar and Thomas H. Davenport1 warned:
“the winners may take all and late adopters may never catch up.”
I tend to agree: adopting AI for transaction monitoring activities, SAR escalation decisions, and Enhanced Due Diligence (EDD) will give an institution a distinct competitive advantage if used properly. In other words, the AI program must be tuned to the institution’s needs and risk profile, not adopted for the sake of rapid adoption. It is one thing to adopt AI; it is another to deploy it successfully for analysis that was historically human-led. In short, I would advise financial institutions to expeditiously evaluate where AI can cut down on time-consuming human tasks and strongly consider adoption, viewing the decision through the lens of long-term benefits to the institution rather than short-term costs.
Case Study: The Application of AI to Detect Crowdfunding Fraud
In recent years, the abuse of crowdfunding platforms by terrorists and other bad actors has garnered national attention. In December 2022, a news story, "Terrorist Financing Through Social Media and Cryptocurrency,"2 about an FBI investigation revealed that ISIS raised money through crowdfunding campaigns, siphoning the funds back to the organization while masquerading as humanitarians. Now more than ever, terrorists have learned to exploit crowdfunding platforms for their anonymity and ease of manipulation. Given this global threat, it is worth exploring how crowdfunding platforms can use AI to ensure terrorist financing campaigns are not conducted on their applications.
How can AI be employed to detect AML risks associated with crowdfunding platforms?
AI can play a critical role in the due diligence process that crowdfunding platforms must perform. AI can be programmed to evaluate the identity of the individuals running a campaign by monitoring related social media accounts and campaign communications to detect red flags, determine how many steps removed the owners are from sanctioned individuals or organizations, and prevent campaigns that may be raising funds for illicit purposes. AI can also be programmed to conduct EDD reviews of factors indicative of potential terrorism financing, such as campaigns that do not impose transaction limits, allow individuals to withdraw cash, accept virtual currency payments, allow account holders to transfer funds to each other, or allow delayed contributions to unspecified projects.
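The EDD factors listed above lend themselves to a simple additive risk score before any machine learning is involved. The Python sketch below is a toy illustration: the factor names, weights, and escalation threshold are assumptions invented for this example, not regulatory guidance or any platform’s actual scoring model.

```python
# Risk factors drawn from the EDD indicators discussed above.
# Weights and the escalation threshold are illustrative assumptions.
RISK_WEIGHTS = {
    "no_transaction_limits": 2,
    "cash_withdrawals_allowed": 3,
    "virtual_currency_accepted": 2,
    "peer_to_peer_transfers": 2,
    "delayed_unspecified_contributions": 3,
}


def campaign_risk_score(campaign: dict) -> int:
    """Sum the weights of every risk factor present on the campaign."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if campaign.get(factor))


def needs_edd_review(campaign: dict, threshold: int = 5) -> bool:
    """Escalate for enhanced due diligence once the score crosses the threshold."""
    return campaign_risk_score(campaign) >= threshold


risky = {"cash_withdrawals_allowed": True, "delayed_unspecified_contributions": True}
print(campaign_risk_score(risky))  # 6
print(needs_edd_review(risky))     # True
```

A scored escalation like this also gives examiners a transparent, explainable trail, which supports the anti-"black box" point made earlier.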
Additionally, AI can be “taught” to pick out fraudulent campaigns using factors such as “evidence of contradictory information, a lack of engagement on the part of donors, and participation of the creator in other campaigns.” According to a July 2020 VentureBeat article by Kyle Wiggers, "Researchers propose AI for detecting fraudulent crowdfunding campaigns,"3 the authors of a study by researchers at University College London, Telefonica Research, and the London School of Economics found that “fraudulent campaigns are generally more desperate in their appeals, in contrast with legitimate campaigns’ descriptiveness and openness about circumstances.” Hence, research has already identified significant indicators that a crowdfunding platform may have been exploited by terrorists or bad actors, and existing platforms can put those indicators to use.
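To make the “desperate appeals” finding concrete, here is one crude, hypothetical text feature in Python: the density of urgency cues in a campaign description. The cue list is invented for illustration only; a production classifier would learn its features from labeled campaign data rather than a hand-written word list.

```python
# Illustrative "desperation" cues; a real model would learn these from data.
URGENCY_TERMS = {"urgent", "immediately", "desperate", "last chance", "please help"}


def desperation_score(description: str) -> float:
    """Crude text feature: urgency-cue hits relative to description length."""
    text = description.lower()
    hits = sum(text.count(term) for term in URGENCY_TERMS)
    words = max(len(text.split()), 1)
    return hits / words


print(round(desperation_score("Please help, this is urgent, donate immediately!"), 2))  # 0.43
print(desperation_score("We are building a community garden with a detailed budget."))  # 0.0
```

Features like this would be only one column in a larger feature matrix alongside donor-engagement and creator-history signals of the kind the study describes.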
Of course, building an AI mechanism to detect fraudulent campaigns is not a one-time, quick-fix implementation. A preemptive and accurate mechanism must be continuously updated for new findings, regulations, and patterns as they are uncovered. Even so, it is a significant step toward thwarting the misuse of crowdfunding platforms by terrorists and other bad actors and safeguarding the integrity of our financial system.
What advice would you offer to financial institutions as it pertains to evaluating the many AI platforms out there, and training employees regarding these complex AI technologies and procedures?
The bottom line is that there are copious AI platforms in the marketplace; the first step before diving into that sea of options is to define the institution’s needs and goals. What tasks are crucial for AI to perform? What are the prime objectives in implementing AI over existing systems? In evaluating platforms, an institution should consider how well each will integrate into existing systems and whether it can grow with the business’s needs. Most crucially, institutions should choose a platform that is scalable and can be implemented flexibly as the organization evolves. Before committing to a platform, the software should be evaluated against a sample of current data and customers to gauge its effectiveness, user-friendliness, and compatibility with existing systems. To understand the complexities and challenges of AI, it is recommended to seek the advice of an expert in AI implementations, such as our forensics experts at Ankura. We will walk you through the steps to identify and prioritize critical tasks to start your journey. We also understand what regulators are focused on and can assist in walking them through these conversations so that there are no surprises at your next exam.
Equally important to choosing the right AI software is ensuring that employees and senior management are properly educated on how to deploy it and work through the inevitable bumps in using the system. AI has a notoriously steep learning curve, so setting an expectation of exploration, rather than perfection, is recommended. It is crucial to give employees a chance to test the software and interact with the application, allowing them to gradually understand its relevance to their day-to-day work. In setting up an employee training program, a comprehensive educational rollout alongside the software should be top-of-mind, including a range of introductory courses and on-demand resources such as training videos, which support continuous learning and make the program accessible to employees who cannot attend scheduled in-person sessions. Above all, management should understand that it will take time for employees to fully grasp the AI platform and that patience is key to the long-term success of the AI initiative.
Ankura has built data analytics and AI tools that sit on top of institutions’ existing infrastructure and perform these time-consuming tasks, customized to your policies and procedures. Our data analytics tool brings in both segregated and narrative data from publicly available records to provide value-added analysis of institutions’ alerts, unusual activity, and EDD reviews. Ankura has flipped the 80/20 equation so that your organization spends 80% of its time on decision-making instead of 80% of its time pulling data for analysis. This allows your investigators to make timely, well-educated decisions to secure the organization and the financial system. Our AI aids the organization by drafting both EDD reports and SAR narratives based on your organization’s requirements for quicker review. It also provides your investigators with the consistent analytical data and reporting that regulators like to see during exams. Contact us today via email firstname.lastname@example.org to discuss how Ankura can help.
© Copyright 2023. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.