Ethical considerations for AI in financial services

Katharine Wooller

Katharine Wooller, Chief Strategist – Financial Services, Softcat plc, looks at the implications of embedding AI across the financial services spectrum.

As firms look to invest heavily in AI, regulators and legislators worldwide are grappling with creating rules for a truly disruptive technology that comes with novel risks. The foundation stone of good policy is to protect the consumer and prevent systemic risk whilst, at the same time, embracing innovation and competition – priorities which can, at times, make uneasy bedfellows.

Any financial services business should be focused on reducing cost and risk, and whilst AI can drive huge benefits, there are very specific concerns, particularly around the way AI uses data. Indeed, the EU AI Act is explicit in designating financial services uses as “high risk”, particularly lending decisions, insurance pricing, and hiring.

There are a multitude of ethical considerations when deploying AI in financial services. The most cited is the risk of bias and discrimination – AI is only as good as the data it is trained on.

Systems can become racist, homophobic, misogynistic, ageist, or ableist. The potential reputational damage of decision-making that discriminates on the basis of protected characteristics would rightly be gargantuan, and we are only in the early stages of being able to audit, and indeed fix, such models.

Imagine an AI model supporting a hiring process: say it is scanning CVs for a role, has ingested the CVs of those already in this specific team, and finds that most of them share a particular gender, hair colour, or home town. Will the model prioritise hiring more of the same people?

That seems unfair to those outside these characteristics, who may be fabulous candidates – not to mention a disaster from a DEI perspective.
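
To make the risk concrete: the kind of skew described above can be surfaced with even a very simple audit. The Python sketch below is a minimal, hypothetical illustration – the candidate records, the group labels, and the “four-fifths” threshold are all assumptions made for the example, not a prescribed audit methodology.

```python
# Minimal sketch: compare shortlisting rates across candidate groups.
# The records and the four-fifths threshold are illustrative assumptions.
from collections import defaultdict

# Hypothetical output of a CV-screening model: (group, shortlisted?)
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
shortlisted = defaultdict(int)
for group, picked in records:
    totals[group] += 1
    shortlisted[group] += picked

rates = {g: shortlisted[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    # Flag any group whose selection rate falls below 80% of the best rate
    # (the informal "four-fifths" heuristic used in disparate-impact checks).
    flag = "REVIEW" if rate < 0.8 * best else "ok"
    print(f"{group}: shortlist rate {rate:.0%} [{flag}]")
```

Even a check this crude would flag the scenario above; real audits would, of course, need lawful access to the protected attributes and far more rigorous statistics.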

There is also a lack of transparency in AI systems: knowing how or why they reach a decision can be difficult to establish. This is particularly thorny from a culpability point of view – if a system is faulty, it is the designer, or the senior manager ultimately responsible, who will be held accountable. Customers and regulators seek clear, explainable reasoning for high-stakes financial decisions and will rightly take a dim view if we simply blame an algorithm for being a “black box”.
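
One way to avoid the “black box” defence is to keep high-stakes models interpretable enough to emit reason codes alongside each decision. The sketch below illustrates the idea on a hypothetical logistic-style lending score; the feature names, weights, and approval threshold are invented for the example, not any particular firm’s model.

```python
# Minimal sketch of "reason codes" from an interpretable lending score.
# Feature names, weights, and the approval threshold are hypothetical.
import math

WEIGHTS = {"credit_history_years": 0.30, "debt_to_income": -2.50,
           "missed_payments": -0.80}
BIAS = 0.5

def score(applicant: dict) -> tuple[float, list[str]]:
    # Per-feature contributions to the score, so the explanation is
    # derived from the same arithmetic as the decision itself.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    z = BIAS + sum(contributions.values())
    prob = 1 / (1 + math.exp(-z))  # logistic link
    # Reason codes: the features that pulled the score down the most.
    negatives = sorted((c, f) for f, c in contributions.items() if c < 0)
    reasons = [f for c, f in negatives[:2]]
    return prob, reasons

prob, reasons = score({"credit_history_years": 2,
                       "debt_to_income": 0.45,
                       "missed_payments": 1})
decision = "approve" if prob >= 0.5 else "decline"
print(f"{decision} (p={prob:.2f}); key adverse factors: {reasons}")
```

The design point is that the explanation is produced from the same arithmetic as the decision, so it cannot drift from what the model actually did.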

Privacy concerns are often highlighted in relation to AI, especially within financial services, where a large language model may rely on hugely sensitive information. Misuse, unauthorised sharing, or poor data protection practices can lead to privacy violations or identity theft. Without strict safeguards, AI systems can enable invasive surveillance or the re-identification of supposedly anonymous data.

There is also, increasingly, a backlash against what is seen as the over-concentration of power within a few large tech companies, where consent to use personal data for AI is in practice compulsory, and concern that, as a result, AI is at risk of enabling misinformation and manipulation.

AI can run contrary to swathes of ESG initiatives; there is a huge environmental cost to running large language models. Socially, AI is expected to displace millions of jobs, often for those who already have the least financial resilience.

For these reasons, there is already some discomfort with the pace of AI adoption, and AI can be considered somewhat taboo. I can see an environment where it becomes best practice to declare the use of AI, both within the workplace and for consumers engaging with financial products and services.

How do we mitigate these ethical risks whilst allowing firms to exploit this hugely potent technology? Thankfully, there are guardrails that can keep firms away from the ethically grey zones. In my “day job” I have a bird’s-eye view of innovation across 2,500 firms and see a huge variety of AI policies across the industry! Encouragingly, many firms are adopting best practice.

Firms at the forefront of AI innovation have taken the time to develop a sophisticated AI strategy and have an empowered AI committee with senior stakeholders from all parts of the business. They focus on strong AI governance and clear ethical boundaries to oversee AI design and deployment.

They start with strong data governance and focus on fairness and bias mitigation, using diverse and representative datasets from the get-go and conducting regular bias audits. They prioritise transparency and explainability, define clear accountability, and require a “human in the loop” for high-risk activities.
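
In practice, a “human in the loop” requirement can start as a simple routing rule in front of the model. The sketch below shows one possible shape; the confidence floor, the monetary threshold, and the Decision structure are illustrative assumptions rather than a regulatory standard.

```python
# Minimal sketch of a human-in-the-loop gate for model decisions.
# The thresholds and the notion of "impact" are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.90   # below this, a human must review
HIGH_IMPACT_GBP = 25_000  # at or above this, a human must review

@dataclass
class Decision:
    outcome: str       # e.g. "approve" / "decline"
    confidence: float  # model's self-reported confidence
    impact_gbp: int    # monetary stake of the decision

def route(d: Decision) -> str:
    # Automate only when the model is confident AND the stakes are low;
    # everything else is queued for a named, accountable reviewer.
    if d.confidence < CONFIDENCE_FLOOR or d.impact_gbp >= HIGH_IMPACT_GBP:
        return "human_review"
    return "auto"

print(route(Decision("approve", confidence=0.97, impact_gbp=5_000)))   # auto
print(route(Decision("decline", confidence=0.82, impact_gbp=5_000)))   # human_review
print(route(Decision("approve", confidence=0.99, impact_gbp=40_000)))  # human_review
```

The value of even a rule this simple is that the boundary between automated and human decisions is explicit, documented, and auditable.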

They take a broad-ranging view of deploying AI in a socially and environmentally responsible way, for example by redeploying teams facing redundancy, upskilling their workforce to use AI, or managing power usage. Crucially, they have a whistle-blowing process for colleagues to flag potential concerns about the use of AI.

There will always be tension around a new disruptive technology, particularly between regulation and innovation. AI is a once-in-a-generation opportunity to drive efficiencies, and anyone using it will have to navigate the fine line between preventing harm and fostering choice and competition for the consumer.
