How did we get here?
Since 9/11, financial institutions have made enormous investments in their anti-money-laundering (AML) compliance programmes, but they still face manual processes and complexity in complying with financial crime regulations. “One would be hard-pressed to suggest that banks are ignoring the need for better customer due diligence,” KPMG reported in March 2019. “Indeed, according to a recent Forbes article, some banks spend up to US$ 500 million each year in an effort to improve and manage their KYC and AML processes. The average bank spends around US$ 48 million per year. In the US alone, banks are spending more than US$25 billion a year on AML compliance.”
Much of the growth in costs is driven, on the one hand, by fear of fines for non-compliance (US$24 billion in non-compliance fines since 2008, and the cost to reputation can be higher) and, on the other, by the need to address the high false-positive rates that rules-based systems generate, which result in significant remediation efforts.
Even though financial institutions are the ones being fined, they are by no means in this alone. There is an ecosystem of consultants, research companies, and software vendors who also need to be held accountable to their clients.
The reality is that few of the first generation AML systems were built to fight crime! These systems were mostly the by-product of business intelligence (BI) software products (as opposed to AML or financial crime solutions) that were originally created to provide insights into corporations’ past performance. Many of these systems are programmed using code that dates back to the eighties!
After 9/11, many vendors seized the opportunity to position their BI software as AML compliance solutions. Consequently, financial institutions find themselves combatting sophisticated fraud, money laundering, and terrorist financing schemes with outdated BI software built on that same decades-old code.
This is analogous to competing in this year’s Formula One driving Jackie Stewart’s Tyrrell or Alain Prost’s Renault.
No wonder the expectations of many financial institutions’ compliance departments haven’t been met, with exceptionally high false-positive rates: somewhere between 75 and 90 percent of alerts. In fact, many financial institutions’ compliance programs have continuously felt under-resourced compared to the volume of alerts and reports they must review while trying not to disrupt good business. Thus, financial institutions (at least those that could afford to do so) felt compelled to build offshore entities, taking advantage of lower labour costs to maintain their systems and manage their KYC alerts.
With all the attention on false positives, false negatives have a much better chance of slipping through the workflow.
The risk-based approach was supposed to help financial institutions get a grip on the high rate of false positives by profiling customers and segmenting them based on their risk exposure. However, clustering customers into segments/categories based merely on products, channels, transactions, geographies, and industries and then building generic rules with arbitrarily selected thresholds was a simplistic approach, to say the least, and somewhat naïve on the part of regulators.
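To make the problem concrete, here is a minimal sketch of the kind of generic, threshold-based rule described above. The segment names and cash limits are invented for illustration; the point is that a single arbitrary cut-off per segment flags every legitimate customer just over the line while missing anyone who deliberately stays just under it.

```python
# Minimal sketch of a generic rules-based AML check.
# Segment names and thresholds are hypothetical, for illustration only.

SEGMENT_THRESHOLDS = {        # arbitrary per-segment cash limits
    "retail": 10_000,
    "small_business": 50_000,
    "correspondent": 250_000,
}

def flag_transaction(segment: str, amount: float) -> bool:
    """Alert whenever a transaction exceeds the segment's fixed threshold."""
    return amount > SEGMENT_THRESHOLDS.get(segment, 10_000)

# A legitimate retail customer moving 10,001 triggers an alert,
# while a launderer structuring payments of 9,900 never does:
alerts = [flag_transaction("retail", amt) for amt in (9_900, 10_001, 25_000)]
```

The structural weakness is visible even in this toy: the rule has no notion of a customer's own history or peer group, so tightening the threshold multiplies false positives and loosening it multiplies false negatives.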
Moreover, all this is happening at a time when legislation has made it easier for customers to switch from one financial institution to another. Rather than risk annoying good customers by constantly asking intrusive questions as a result of false alerts, banks have often further refined the number of segments … to the point that they become unmanageable.
The light at the end of the AML tunnel
Everyone in the AML ecosystem has come to realize that a better approach is needed, especially when it comes to verifying false positives. Basically, the whole industry is banking (pun intended) on big data, artificial intelligence (AI) and machine learning (ML) to help simplify complex processes and automate repetitive tasks. Expectations for AI are accordingly overwhelming: PricewaterhouseCoopers estimates that AI could add around 14 percent to the gross world product by 2030, roughly US$15.7 trillion.
AI has the potential to remove the chains from AML compliance staff, allowing them more time to deal with non-routine events and complex cases, as well as giving them better information, through a cleaner, more traceable process, with which to make objective decisions.
Of course, machine learning models can process tremendous amounts of data, but ML systems still need to learn the difference between a false positive and a false negative, and to do so in real time. And there are challenges that need to be sorted out first.
As with every new technology wave (CRM, business intelligence, big data, predictive analytics, artificial intelligence), technology companies can’t resist the temptation to sprinkle the latest buzzwords on every bit of their software like fairy dust. This gold-rush atmosphere also has its downsides. In March the Financial Times reported that 40 percent of European AI start-ups do not actually use AI programs in their products, according to an investigation by the investment company MMC Ventures.
First, most companies simply don’t have enough well-structured data to train these ML/AI models. IBM’s Watson, named after the company’s first CEO, learned this the hard way. Originally developed to answer questions using natural language processing (NLP) and later repositioned as a system to assist in the diagnosis of cancer, the Watson artificial intelligence platform had, as of June 2017, been trained on six types of cancer, which took years and input from a thousand medical doctors.
A second challenge is that criminals are always adjusting and trying new schemes, and a third is that the financial services landscape is itself changing continuously, leaving ML and AI platforms with a real-time knowledge gap.
So, caveat emptor: these systems require months, and in many cases years, of laborious training, and a lot of support from expensive compliance experts and data scientists. The experts must feed vast quantities of well-structured data into the platform for it to be able to draw meaningful conclusions, and those conclusions are only as good as the data the platform has been trained on, i.e. what happened in the past. An implementation project will involve:
- Learning the transaction behaviour of similar customers
- Pinpointing customers with similar transactions behaviour
- Discovering the transaction activity of customers with similar traits (business type, geographic location, age, etc.)
- Identifying outlier transactions and outlier customers
- Learning money laundering, fraud, and terrorist financing typologies and identifying typology-specific risks
- Dynamically learning correlations between the alerts that produced verified suspicious activity reports and those that generated false positives
- Continuously analysing false-positive alerts and learning common predictors
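As one illustration of the tasks listed above, the sketch below flags outlier customers against a peer group of customers with similar traits, using a simple z-score on average transaction amounts. The data, customer names, and cut-off are all hypothetical; a production system would use far richer features, larger peer groups, and learned models rather than a hand-picked threshold.

```python
import statistics

def peer_group_outliers(amounts_by_customer: dict, z_cutoff: float = 1.0) -> set:
    """Flag customers whose average transaction amount deviates markedly
    (z-score above z_cutoff) from the mean of their peer group.
    Note: with a tiny illustrative peer group like the one below, the
    maximum attainable z-score is small, hence the low default cutoff;
    real deployments use larger groups and tuned thresholds."""
    averages = {c: statistics.mean(a) for c, a in amounts_by_customer.items()}
    group_mean = statistics.mean(averages.values())
    group_stdev = statistics.stdev(averages.values())
    return {c for c, avg in averages.items()
            if group_stdev and abs(avg - group_mean) / group_stdev > z_cutoff}

# Hypothetical peer group of "similar" retail customers:
peers = {
    "cust_a": [120.0, 80.0, 95.0],
    "cust_b": [110.0, 130.0, 90.0],
    "cust_c": [100.0, 85.0, 105.0],
    "cust_d": [9_500.0, 11_000.0, 10_250.0],  # clear outlier
}
flagged = peer_group_outliers(peers)
```

Unlike a fixed per-segment threshold, this kind of peer comparison adapts as the group's behaviour shifts, which is precisely what the dynamic-learning steps in the list above aim at on a much larger scale.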
For the most part, financial crime will be driven by advances in technology, and this marriage of regulation and technology is not new in itself. However, the continual increase in regulatory expectations, the staggering levels of cyber-attacks against financial institutions, and the disruption of new instant-payment initiatives will not make life easier for people working in compliance.
In brief, these innovations address many gaps in today’s financial crime programmes by improving automation in the detection of suspicious activity, which would be a significant move from monitoring to preventing financial crime while being more cost-effective and agile. Financial institutions that have been down this path before and been disappointed would be wise to start with small-scale pilot projects of limited scope, using agile software-delivery methodologies.
And above all, they need to invest in data quality, as it is a key component of any successful financial crime programme. High-quality data leads to better analytics and insights, which are important not only for accurately training ML and AI models but also for driving better decisions.
Paul Allen Hamilton and Volha Miniuk
Visit us at: AML Knowledge Centre (LinkedIn)