Rising APP fraud highlights need for stronger detection of mules 

Perpetrators of authorised push payment (APP) fraud – where victims are conned into sending their money to criminals – need multiple bank accounts to launder the proceeds of their crimes. One way to stop them is to focus more attention on the onboarding stage and fight fire with fire, using artificial intelligence to detect the signs of mule accounts.

By Kathy Gormley, AML Product Manager at Resistant AI 

Authorised push payment (APP) scams – where victims are fooled into sending money to fraudsters – are on the rise, and financial institutions are under pressure to stem the flow of ill-gotten gains from these crimes. Perpetrators of APP fraud need multiple accounts to launder their proceeds, and banks – particularly the digital-only challenger firms – would do well to be more vigilant at the customer onboarding stage, rather than waiting and relying on transaction monitoring to spot money mules.

APP fraud is a significant problem: the Payment Systems Regulator notes that APP scams accounted for more than 40% of fraud losses in the UK in 2022. It’s a worrying trend, particularly in light of the upcoming APP fraud reimbursement requirements being introduced in 2024.

All banks face an onslaught from organised criminal gangs attempting to open accounts so they can launder money. However, the problem is more noticeable at new digital banks, because new accounts make up a higher proportion of their overall customer base. Digital banks have also pioneered convenient, low-friction digital onboarding.

The slick and seamless customer experience that challengers offer makes it easy to open accounts – and that convenience is attractive to fraudsters.

These fraudulent account-opening attempts are being made with the latest artificial intelligence (AI) tools, and at scale. Mass serial fraud is a serious threat, and criminals are bombarding banks' onboarding processes to probe for even the smallest vulnerabilities and opportunities to exploit. At Resistant AI, we are certain these attacks are being automated. Across the millions of documents analysed by our engine, we have seen the same document templates used repeatedly – both fraudulently created ones and genuine ones. We have identified, for example, a single passport being used over 2,500 times in a 20-day period.
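To make the pattern concrete, the sketch below shows one simple way such repeat use could be surfaced: fingerprinting each submitted document image with a perceptual hash and alerting when the same underlying image keeps reappearing across applications. This is an illustrative sketch, not Resistant AI's actual detection engine; the imagehash library, threshold and field names are assumptions.

```python
# Minimal sketch: flag the same identity document reappearing across many
# onboarding attempts. Library choice (imagehash), threshold and field names
# are illustrative assumptions, not the vendor's actual implementation.
from collections import defaultdict
from PIL import Image
import imagehash

SEEN = defaultdict(list)          # fingerprint -> list of application IDs
REUSE_ALERT_THRESHOLD = 3         # arbitrary cut-off for this sketch

def fingerprint(path: str) -> str:
    """Perceptual hash: near-identical scans map to the same value."""
    return str(imagehash.phash(Image.open(path)))

def check_document(application_id: str, image_path: str) -> bool:
    """Return True once this document image has been seen too often."""
    fp = fingerprint(image_path)
    SEEN[fp].append(application_id)
    return len(SEEN[fp]) >= REUSE_ALERT_THRESHOLD

# Usage: check_document("app-1042", "passport_scan.jpg") returns True once the
# same passport image has shown up in three or more applications.
```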

Likewise, utility bills and bank statements are being amended from templates, with changes that are not visible to the human eye. If financial institutions can detect and stop these attempts at the onboarding stage by using sophisticated technological countermeasures, they will reduce their financial, operational and reputational risk early on. In terms of the reputational risk, anecdotally we are hearing that the rise in fraudulent account opening, coupled with the upcoming APP fraud reimbursement requirements, is undermining the ability of start-up challenger banks and fintechs to raise new rounds of funding from investors. For non-UK challengers and fintechs, it is also making entry into the UK market less attractive.
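As a rough illustration of one such countermeasure, the sketch below inspects a submitted statement's PDF metadata for traces of editing software or a modification date later than the creation date. It is a deliberately simplified example: the pypdf calls are standard, but the editor watchlist and heuristics are illustrative assumptions, and well-made forgeries may leave no metadata traces at all.

```python
# Simplified sketch of one weak tampering signal: PDF metadata inconsistencies.
# The editor watchlist and heuristics are illustrative assumptions; sophisticated
# forgeries will not necessarily trip any of them.
from pypdf import PdfReader

EDITOR_HINTS = ("photoshop", "gimp", "ilovepdf", "sejda")  # assumed watchlist

def metadata_flags(pdf_path: str) -> list[str]:
    """Return a list of weak tampering indicators found in the PDF metadata."""
    meta = PdfReader(pdf_path).metadata
    flags: list[str] = []
    if meta is None:
        return flags
    producer = " ".join(filter(None, [meta.producer, meta.creator])).lower()
    if any(hint in producer for hint in EDITOR_HINTS):
        flags.append(f"document touched by editing tool: {producer}")
    if (meta.creation_date and meta.modification_date
            and meta.modification_date > meta.creation_date):
        flags.append("modified after creation")
    return flags

# Usage: metadata_flags("bank_statement.pdf") -> e.g. ["modified after creation"]
```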

Money mule accounts can be detected at the onboarding stage, and with effective transaction monitoring once the accounts have been opened. The Financial Conduct Authority (FCA), however, noted in October that some firms were doing relatively few checks at onboarding and relying on transaction monitoring instead. It recommended they take ‘robust steps’ to detect potential red flags at the onboarding stage.

One way to do this is by using AI to check the submitted documentation – authenticity can be assessed in over 500 different ways. Document templates can be purchased online and amended hundreds of times. AI can detect, for example, where the same creases appear in a series of documents that have been photographed. Another clue is that the background in the photo is exactly the same as in other attempts. Or perhaps an image has been taken with an iPhone 11 but sent from another device. Maybe multiple account-opening attempts have been made from the same IP [internet protocol] address, and the submission times are clustered for a particular time zone. Each of these isn't necessarily a red flag on its own, but together – with hundreds of other signals – they create a network of dots that can be connected.
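A minimal sketch of that “connect the dots” idea is shown below: individual weak signals are combined into a single score, and only the accumulation of several signals pushes an application over a review threshold. The signal names, weights and threshold are illustrative assumptions, not the 500-plus checks described above.

```python
# Minimal sketch of combining weak onboarding signals into a single risk score.
# Signal names, weights and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class OnboardingSignals:
    duplicate_creases: bool          # same fold marks as earlier submissions
    duplicate_background: bool       # photo background seen in other attempts
    device_mismatch: bool            # EXIF device differs from submitting device
    shared_ip: bool                  # IP address reused across applications
    clustered_submission_time: bool  # submissions bunched in one time zone

WEIGHTS = {
    "duplicate_creases": 0.25,
    "duplicate_background": 0.25,
    "device_mismatch": 0.15,
    "shared_ip": 0.20,
    "clustered_submission_time": 0.15,
}
REVIEW_THRESHOLD = 0.5  # arbitrary: no single signal is enough on its own

def risk_score(signals: OnboardingSignals) -> float:
    """Sum the weights of every signal that fired for this application."""
    return sum(w for name, w in WEIGHTS.items() if getattr(signals, name))

def needs_review(signals: OnboardingSignals) -> bool:
    return risk_score(signals) >= REVIEW_THRESHOLD
```

In practice such weightings would be learned from data rather than hand-set, which is where the machine-learning approach discussed next comes in.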

In its review of best practice to detect and prevent money mules, the FCA recommended that firms invest in machine learning to reduce the inherent risks of static rules-based systems. At Resistant AI, we believe the traditional rules-based approach is not fit for purpose, especially in an age where AI tools are freely available to fraudsters. Machine learning can respond and adapt as threats change, rather than waiting for new behaviour to emerge and writing new rules after the fact. In tackling rising APP fraud, we need to fight fire with fire and use AI to prevent criminals from using money mule accounts.
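To make that contrast concrete, the sketch below fits an unsupervised anomaly detector (scikit-learn's IsolationForest) to a handful of assumed onboarding features; re-fitting it on fresh data lets the scoring adapt as attacker behaviour shifts, instead of waiting for someone to write a new rule. The feature set, values and parameters are illustrative assumptions, not a production model.

```python
# Minimal sketch: anomaly scoring of onboarding attempts with an unsupervised
# model instead of static rules. Features, values and parameters are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [applications_from_ip_24h, document_template_reuse_count,
#            submission_minute_of_day_utc, device_metadata_mismatch (0/1)]
historical = np.array([
    [1, 0, 540, 0],
    [1, 0, 610, 0],
    [2, 1, 585, 0],
    [1, 0, 660, 0],
    [1, 0, 905, 0],
])

model = IsolationForest(n_estimators=100, contamination=0.05, random_state=0)
model.fit(historical)

new_attempts = np.array([
    [1, 0, 600, 0],   # looks like the historical population
    [9, 4, 185, 1],   # many attempts from one IP, reused template, odd hour
])
# score_samples: lower values indicate more anomalous applications
print(model.score_samples(new_attempts))
# Periodic retraining on fresh data lets the ranking adapt as behaviour shifts.
```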
