
By Katharine Wooller, Chief Strategist, Financial Services, Softcat plc
Financial services businesses are “sitting ducks” for AI. Since time began (the first recorded bank is generally considered to be Banca Monte dei Paschi di Siena in Italy, founded in 1472), banks have been a honeypot for fraudsters. AI provides almost unlimited capacity for misuse, and of course the potential gains are huge.
According to the Bank of England, UK banks hold over £3.2bn of deposits, and the total value of digital payments per year is around £108bn. Fraud is, unsurprisingly, big business, often highly organised, and harnessing cutting-edge technology. Whilst any financial services business will seek to have robust cyber security, the sheer scale and variety of potential attacks is hugely challenging, and firms are often tasked with trying to stay ahead of highly motivated and sophisticated criminals.
To give an idea of the magnitude, Visa, the busiest processor of retail payments, estimates it sees around 71,000 cyber-attacks each day. The stakes are high, and with bad actors often operating from outside the UK, law enforcement is frequently left playing “whack-a-mole”.
The fraudster’s dream toolbox
Let us be clear: there is a plethora of financial crime, and the use of AI for financial fraud can be extremely sophisticated.
The addition of AI significantly increases the effectiveness of cyber-attacks, leveraging advanced technologies to deceive and exploit individuals and institutions. Whilst human beings are the weak link in most attacks, the rise of AI and quantum computing offers a plethora of sophisticated new tools for those intent on doing harm. Deepfake attacks and voice cloning offer huge capacity to impersonate users, particularly senior leaders, to make unauthorised payments. In practice there are numerous industry-specific ways that AI can be exploited, including payment fraud, credit card fraud, identity theft, and transaction laundering.
LLMs can be used to hone attacks, providing highly personalised phishing attempts and analysing and manipulating user behaviour to trick individuals. A fraudster who knows the names of your teenage kids, and when and where they are travelling overseas this month, will do a much better job of convincing you to send that emergency £100 to an account you don’t recognise!
Whilst fraud already occurs on a professional and industrialised basis, AI magnifies it, and can be used to create AI-powered bots that carry out automated fraud, such as account takeovers. Fraudsters can be anywhere in the world, sitting on their sofa or in a warehouse of fellow ill-doers, and AI provides a scalability hitherto unseen.
AI-driven software can autonomously scan, exploit, and infiltrate networks without human intervention, and, even more worryingly, these tools can adapt to the defences they encounter, making them highly effective. A good analogy for this hostile guest in your IT systems is an antibiotic-resistant infection rampaging through a hospital population: unwanted, potentially devastating, and very hard to stamp out.
A smorgasbord of harm: key attack vectors
The AI itself can also be vulnerable, and regulators worldwide are grappling with the unique risks of using AI in financial services.
AI generally relies on openly available data, and with this comes significant opportunity for harm. Data poisoning is the corruption of the data used to train AI models, leading to incorrect or biased outputs that can be exploited for nefarious ends. We are fairly early on in our ability to audit AI models, and much still needs to be done to ensure that AI cannot be manipulated against certain communities, obvious examples being race, gender, sexuality and political affiliation.
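As a purely illustrative sketch (not any particular vendor’s control, and with hypothetical column names), one basic defence is to screen training data for statistical outliers before a model is retrained, so that crudely poisoned records are at least surfaced for human review:

```python
# Minimal sketch: screen a training set for anomalous records before
# retraining, as a basic guard against crude data-poisoning attempts.
# Column names and the contamination rate are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import IsolationForest

def screen_training_data(df: pd.DataFrame, feature_cols: list,
                         contamination: float = 0.01) -> pd.DataFrame:
    """Return the training frame with suspected outliers removed."""
    detector = IsolationForest(contamination=contamination, random_state=0)
    flags = detector.fit_predict(df[feature_cols])  # -1 = suspected outlier
    suspicious = df[flags == -1]
    if not suspicious.empty:
        print(f"Flagged {len(suspicious)} record(s) for manual review")
    return df[flags == 1]

# Example with synthetic data: one implausible record slips into the set
data = pd.DataFrame({
    "amount": [25, 30, 28, 27, 10_000],
    "account_age_days": [400, 380, 410, 395, 2],
})
clean = screen_training_data(data, ["amount", "account_age_days"],
                             contamination=0.2)
print(clean)
```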
All-out adversarial attacks manipulate a model’s inputs to intentionally cause it to make incorrect predictions. Model inversion allows attackers to use an AI model to infer sensitive information about its training data, essentially taking the answer and guessing the question. Taken to its extreme, this would allow a fraudster to steal a model, replicating a proprietary AI model by querying it extensively and using the responses to train a similar one.
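To make the model-theft point concrete, the toy sketch below (entirely synthetic data, no real system) shows how an attacker who can only query a model as a black box could train a look-alike surrogate purely from its responses:

```python
# Toy illustration of model extraction: a surrogate is trained only on the
# labels returned by a black-box "victim" model. Everything here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# The "proprietary" model an attacker can query but not inspect
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X, y)

# The attacker sends their own probe inputs and records the victim's answers
rng = np.random.RandomState(1)
probes = rng.uniform(X.min(), X.max(), size=(5000, 10))
stolen_labels = victim.predict(probes)

# A surrogate trained on those answers mimics the victim's behaviour
surrogate = LogisticRegression(max_iter=1000).fit(probes, stolen_labels)
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"Surrogate agrees with the victim on {agreement:.0%} of inputs")
```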
As a result, and unsurprisingly, some organisations are very protective of their data being used to train AI and are opting for private cloud solutions.
AI-specific cyber security measures
Luckily, given the apparent omnipresence of AI, and the huge levels of investment in it, significant thought has been given to specialist protective measures. Any firm which uses AI in any form must have AI-specific cyber policies to ensure accuracy, integrity and trustworthiness across all phases of the AI lifecycle, from development and testing to deployment and operation.
Some best practices: AI models and data must be encrypted to prevent unauthorised access and tampering. Restricting access to AI models and data to only those who need it reduces the risk of insider threats and unauthorised access. Implementing multi-factor authentication for systems that interact with AI adds an extra layer of security, making it harder for attackers to gain access, as does employing digital signatures to authenticate trusted revisions, tracking data provenance, and leveraging trusted infrastructure.
Conducting regular and thorough integrity checks on the data used to train a model reduces the risk of data poisoning attacks. Firms should also maintain robust data protection strategies throughout the entire AI system lifecycle and develop incident response plans that include AI-specific scenarios, in case the worst does happen.
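As a simple, hedged illustration of the data provenance and integrity point (file names and the manifest format are placeholders, not any firm’s actual control), approved training files could be fingerprinted with SHA-256 digests that are verified before every retraining run:

```python
# Minimal sketch: verify training-data files against previously recorded
# SHA-256 digests before a retraining run, so silent tampering is caught.
# File paths and the manifest format are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_path: Path) -> bool:
    """The manifest maps file names to the digests recorded at approval time."""
    manifest = json.loads(manifest_path.read_text())
    all_ok = True
    for name, expected in manifest.items():
        if sha256_of(Path(name)) != expected:
            print(f"INTEGRITY FAILURE: {name} has changed since approval")
            all_ok = False
    return all_ok

# Example usage: verify_manifest(Path("training_data_manifest.json"))
```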
AI: A political hot potato
Interestingly, recent elections in the UK and US have shown that AI is now a politicised issue. As with any new technology, there is much head-scratching and hand-wringing over how fast it should be adopted, whether it should be regulated, and by whom.
There are numerous political considerations, particularly in relation to hostile nations deploying AI to develop unconventional weapons, such as nuclear and bioweapons, automated hacking tools, and AI-optimized malware.
There is a danger that nations with advanced AI capabilities can gain a strategic advantage in national security, particularly when AI is used for autonomous warfare, intelligent cyber defence, and surveillance.
There is much neurosis over whether first-world countries are losing out in the AI arms race, whether in terms of potential economic growth or the ability to influence global power dynamics. This is exemplified by some nations imposing restrictions on the export of advanced AI technologies to rival countries, for example the US introducing stringent regulations to control the export of artificial intelligence technology, targeting countries like China, Russia, Iran, and North Korea.
AI apocalypse? A systemic risk
Regulators have the unenviable task of balancing fostering innovation and promoting competitiveness on the one hand against protecting citizens and market integrity on the other. The question must be: what is the worst-case scenario? There is huge potential for harm and, at worst, AI can introduce systemic risk to the financial system, potentially undermining its stability.
There is a concentration risk: if AI adoption is widespread, the financial services industry can create a monoculture, with too much reliance on AI models, and if a small number of models dominates, as is currently the case, a flaw or failure in one model can have widespread ramifications. Alongside the AI-specific cyber risks, there is significant operational risk if a business-critical AI system can be manipulated or damaged.
AI can potentially be used for market manipulation, either through intentional malicious use to create mistrust in the financial system, or unintentionally: if too many firms rely on similar data and models, herding behaviour can emerge, with too many institutions making the same decision simultaneously and artificially amplifying market volatility.
With AI promising savings through automation, there is a potential for over-reliance, reducing human oversight and intervention. This lack of accountability is particularly exacerbated while regulation is still forming; there is arguably a misalignment between the potential of AI technology and current regulatory standards. There is a risk, therefore, that AI can be used to evade regulation in the absence of applicable rules.
These risks highlight how AI can amplify systemic risk in the financial system, and underline the need for robust regulatory frameworks, continuous monitoring, and the development of resilient AI systems and controls.
A new frontier of ethics
It would be remiss not to flag some of the ethical concerns around the responsible use of AI. The ESG agenda, rightly, is front and centre for most large FS firms, and indeed in the UK a whole third of the name of our regulator relates, critically, to the industry’s “conduct”. AI presents several ethical challenges that need to be addressed to ensure its use does not contradict a firm’s moral compass.
Much has been written about bias and discrimination: left unchecked, AI systems can perpetuate, and even amplify, existing biases present in their training data, and at worst can cause human rights abuses.
AI relies on vast amounts of data, raising concerns around privacy and informed consent. This is particularly challenging given the “black box” nature of models, which often makes it difficult to understand how a decision has been reached. This lack of transparency can hinder trust and accountability, particularly if a system operates autonomously without human intervention, and is a legal and moral minefield in terms of regulatory compliance.
Moreover, the amount of data and energy required for AI models is rightly under scrutiny and is often at odds with future environmental goals. There are also societal concerns around job displacement: if AI automates even half of the tasks it has been postulated as having a use case for, there is significant economic disruption on the horizon. All of the above make for complex moral conundrums and risk significant adverse press, and as with any nascent technology, much is in flux as we advance through the cycle of adoption.
AI: Threat or remedy?
Whilst AI clearly carries numerous risks that need to be managed, it would be disingenuous not to appreciate that the technology can have significant application in reducing financial crime. Indeed, some of the most viable use cases, with significant return on investment, relate to AI detecting fraud, and I would expect firms to invest heavily in projects in this area in the near term.
Many payment firms, such as Visa, Mastercard, American Express and PayPal, are on record as leveraging AI to combat fraud. Large banks, too, are enjoying the competitive advantages of AI: JP Morgan uses it for payment validation screening to reduce false positives, HSBC monitors transactions to detect suspicious activity in real time, and Barclays analyses customer behaviour to detect anomalies that may indicate fraud.
A number of insurance firms are also on record as using AI, a perfect fit given how data-rich these businesses are, with MetLife, Swiss Re, AXA and Zurich having live projects.
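As a heavily simplified sketch of the behavioural anomaly detection described above (not how any of the named firms actually implement it; field names and the three-sigma threshold are hypothetical), each customer’s payment history can be used to flag new transactions that sit far outside their normal pattern:

```python
# Simplified sketch of behavioural anomaly detection on card transactions:
# flag new payments that sit far outside a customer's usual spending pattern.
# Field names and the 3-sigma threshold are illustrative assumptions.
import pandas as pd

def build_baseline(history: pd.DataFrame) -> pd.DataFrame:
    """Per-customer mean and standard deviation of past transaction amounts."""
    return history.groupby("customer_id")["amount"].agg(["mean", "std"])

def flag_anomalies(new_txns: pd.DataFrame, baseline: pd.DataFrame,
                   threshold: float = 3.0) -> pd.DataFrame:
    scored = new_txns.join(baseline, on="customer_id")
    scored["z_score"] = (scored["amount"] - scored["mean"]) / scored["std"]
    return scored[scored["z_score"].abs() > threshold]

# Synthetic example: customer A's £950 payment is wildly out of character
history = pd.DataFrame({
    "customer_id": ["A"] * 6 + ["B"] * 6,
    "amount": [20, 25, 22, 18, 24, 21, 300, 310, 290, 305, 295, 315],
})
new_txns = pd.DataFrame({"customer_id": ["A", "A", "B"],
                         "amount": [24, 950, 320]})
print(flag_anomalies(new_txns, build_baseline(history)))
```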
Is the future already here?
It is very hard to tell whether AI is, right now, a net contributor to financial crime; I’d love to hear from anyone who thinks they can reliably measure this. For financial services firms AI is both an opportunity and a risk, and for all firms the stakes are high.
It has been said for a while now that data is any business’s secret ingredient, and we are probably drowning in it somewhat. With 90% of the world’s data being less than two years old, extracting insight and value is the challenge of our age.
It is likely that the next chapter for financial services, and the economy more broadly, will be titled “deus ex machina”. Indeed, on this basis, investment in AI in fintech is expected to rise to $61bn by 2032, and it shows no signs of stopping any time soon.
Author Bio:
Katharine Wooller is Chief Strategist, Financial Services at Softcat plc, a FTSE-listed technology company. Headquartered in Marlow, Buckinghamshire, Softcat is a provider of software licensing, hardware, security and related IT services.