While there are many benefits to using robotic process automation (RPA), it can also increase exposure to financial crime, litigation, and regulatory breaches. Ankura and global law firm DLA Piper's financial crime team look at how to stay ahead and keep safe.
By automating complex yet routine tasks, RPA offers its users greater productivity, better accuracy, and improved efficiency. RPA technology is also relatively easy to configure and implement, well suited to working across multiple back-end systems, and typically brings a relatively quick return on investment. These factors have helped RPA to grow its footprint within corporates. According to Gartner, a world-leading research company, global spending on RPA software is estimated to reach $2.4 billion in 2022, up from $680 million in 2018. Powered by artificial intelligence (AI), RPA is set to tackle increasingly complex tasks that require greater cognitive skills. However, as with any rapid adoption, the benefits can sometimes obscure the risks, particularly financial crime and cybercrime. Ankura and DLA Piper have teamed up to highlight some of the 'back door' risks associated with RPA and how best to address them.
Taking Humans Out of Processes Can Result in Lower Issue Detection
While a human carrying out a routine task may spot an outlier and flag it up as a potential fraud or compliance issue, a robot will only carry out the specific task it is programmed to perform. Companies should therefore be cautious about removing human oversight unless they are confident they have configured their RPA processes to identify all potential risk events.
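One mitigation is to build the outlier check a human would have performed into the automated flow itself, routing anomalies to a reviewer rather than processing them silently. A minimal sketch, assuming a simple z-score rule over historical invoice amounts (the data, field meaning, and threshold are illustrative, not a prescribed control):

```python
from statistics import mean, stdev

def flag_for_review(history, new_amount, z_threshold=3.0):
    """Flag a transaction for human review if it deviates sharply
    from the historical distribution (simple z-score check)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > z_threshold

# Illustrative invoice amounts processed by the bot to date.
history = [102.0, 98.5, 101.2, 99.8, 100.5]
print(flag_for_review(history, 100.7))   # within normal range -> False
print(flag_for_review(history, 5000.0))  # extreme value -> True
```

In practice the check would be tailored to each process, but the principle is the same: the bot should escalate what it cannot explain, not simply execute it.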
Insecure Coding and Authorization Can Open the Door to Cyber Attack
From coding that allows the unencrypted storage and transmission of sensitive information to weak security credentials that fail to prevent unauthorised access to the RPA system, organisations may inadvertently be putting themselves at risk. The centralised design and the nature of tasks that are likely to be automated mean that RPA systems have inbuilt access to business-critical infrastructure, from enterprise resource planning (ERP) and finance systems to email accounts. This could enable criminals to develop RPA-based malware and ransomware or conduct targeted attacks to extract sensitive information.
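On the credentials side, one basic hygiene step is keeping secrets out of bot scripts entirely. A minimal Python sketch, assuming credentials are injected via the environment or a dedicated secrets vault at runtime (the `ERP_API_TOKEN` name is hypothetical):

```python
import os

def get_rpa_credential(name: str) -> str:
    """Fetch a credential from the environment (or, in production, a
    secrets vault) rather than hard-coding it in the bot's source."""
    value = os.environ.get(name)
    if not value:
        # Fail fast and loudly; never fall back to a default secret.
        raise RuntimeError(f"Credential {name!r} is not configured")
    return value

# Set here only so the demo is self-contained; in a real deployment the
# orchestrator or vault integration would provide this value.
os.environ["ERP_API_TOKEN"] = "example-token"
token = get_rpa_credential("ERP_API_TOKEN")
assert token  # use it to authenticate; never write it to logs
```

The same principle applies to data at rest and in transit: sensitive values the bot handles should be encrypted and excluded from logs and screenshots.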
As AI-Empowered RPA Plays a Larger Role in Decision Making, Who Is Responsible?
The trend towards AI-empowered RPA is helping organisations to automate ever more complex workflows, but it is also exposing them to higher levels of risk. Examples of affected processes include the onboarding of customers and suppliers as well as compliance activities. So, as bots play a greater role in the decision-making process, who is held responsible for those decisions? It is currently unclear how this area will be formally regulated, but the key term being discussed is explainable artificial intelligence (XAI): the need to clarify how machines make their decisions and the accuracy with which these tasks have been achieved.

The UK Information Commissioner's Office (ICO) has offered some indication of its evolving thinking on this subject. The first point of note is the need to avoid automation bias, or automation-induced complacency, which arises when human users rely solely on a computer decision-support system and stop using their own judgment. This ties in with Article 22 of the EU General Data Protection Regulation (GDPR), which states, "the data subject shall have the right not to be subject to a decision based solely on automated processing."

The second area of note is inbuilt bias introduced into an RPA system through poor initial data. The need to test for these issues is critical and is driving the push for XAI, but there are obstacles on the road to 'explainability'. So-called black box systems make it impossible to discern the basis upon which decisions are being made, and they are therefore coming under increasing scrutiny from regulators. Even a relatively transparent process may be difficult to explain simply in terms of its decision making and accountability. Finding the correct balance between 'completeness' and 'interpretability' of an AI decision-making process is a difficult task and, with regulation still evolving, needs to be carefully navigated.
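One lightweight pattern in the direction of explainability is to have an automated decision return its reasons alongside the outcome, so a reviewer can inspect the basis for each result. A toy sketch of an onboarding check built this way (the field names, rules, and decision labels are hypothetical, not a regulatory standard):

```python
def onboarding_decision(applicant: dict) -> tuple:
    """Return a decision plus the human-readable reasons behind it,
    so the outcome can be explained, audited, and challenged."""
    reasons = []
    if applicant.get("sanctions_hit"):
        reasons.append("matched a sanctions list")
    if applicant.get("documents_verified") is False:
        reasons.append("identity documents not verified")
    # Anything with a recorded reason is escalated to a human,
    # keeping the overall process outside "solely automated" territory.
    decision = "refer to human reviewer" if reasons else "approve"
    return decision, reasons

decision, reasons = onboarding_decision(
    {"sanctions_hit": False, "documents_verified": False}
)
print(decision, reasons)  # refer to human reviewer ['identity documents not verified']
```

Rule-based traces like this do not scale to opaque machine-learned models, which is precisely why black box systems attract the scrutiny described above, but the pattern illustrates what regulators mean by an explainable, reviewable decision.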
Litigation and Regulatory Risks: Keeping Your Eye on a Moving Ball
Any business activity carries risk, and litigation is always a danger. However, when technology, law, and regulation are all still evolving, the risks are higher and more difficult to predict. This is very much the case with RPA and AI. If an ineffective RPA process mismanages data or fails to satisfy compliance requirements, clients may use legal proceedings to recover any associated losses. With concerns about the misuse of data and the ethical aspects of automated processes heightened, it is likely that regulators will increasingly be on the lookout for failings in RPA programmes. The consequences may not only be financial penalties but also possible criminal liability for executives.

To be more specific about regulators, RPA programmes are likely to attract increased interest from agencies such as the Financial Conduct Authority (FCA) and the Information Commissioner's Office (ICO). As mentioned earlier, Article 22 of the GDPR establishes stricter conditions for AI systems which make fully automated decisions that have significant effects on individuals. AI systems which only support or enhance human decision-making are not subject to these conditions, but the human input needs to be meaningful: a decision will not fall outside the scope of Article 22 just because a human has rubber-stamped it. Given these ambiguities, it is important to seek advice to clarify which side of the human/automated decision-making line your processes stand.

Detecting 'red flags' and reporting lapses to the relevant regulator are likely to be more complex when using sophisticated RPA and AI technology and therefore need careful attention. A failure to address these problems could be costly in terms of fines and reputational damage.
There are several recent examples of fines by the FCA and ICO for failures in systems and controls, skill and diligence in business conduct, protecting customers' interests, and being aware of failings and issues:
- The FCA fined Tesco Bank £16.4 million in 2018 for failing to exercise due skill, care, and diligence in protecting its personal current account holders during a 2016 cyber attack.
- The ICO fined Equifax £500,000 for failing to protect customers' personal information in a 2017 cyber attack.
- Pregnancy and parenting club Bounty was fined £400,000 by the ICO after breaching the Data Protection Act by illegally sharing personal information belonging to more than 14 million people.
These cases pre-dated the GDPR, so it remains unclear how the new rules would be applied in similar circumstances now that they are in force.
What Next? Minimizing the Risks to Your Business
To fully enjoy the benefits of RPA and AI, it is crucial that you thoroughly identify, monitor, and mitigate the accompanying risks. Key to that is rigorous robotic governance and process auditing within your organisation, conducted by experts in RPA, cyber and financial crime, data, and AI design. Once you have the protections and compliance procedures in place, make sure they have buy-in from legal, compliance, management, and technology teams. This issue is too big to be the responsibility of any one department, so make sure it receives airtime and action at a senior level.
In collaboration with Sam Millar and Harriet Campbell of DLA Piper
© Copyright 2019. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.