What fintechs need to know about the evolution of phishing cyberattacks


By Richard LaTulip, Field Chief Information Security Officer at Recorded Future.

Phishing is a major concern in the UK. It is a form of attack that manipulates people into revealing sensitive information or downloading malware, and it poses a significant risk for fintechs. The UK government’s ‘Cyber security breaches survey 2025’ shows phishing attempts were targeted at 85% of the businesses that experienced a cyber breach or attack in the past 12 months.

According to the government’s survey, phishing is the most prevalent type of cyberattack or breach, and is also recognised as one of the most disruptive. Further data from the survey reveals that 48% of finance or insurance businesses identified a cyber breach or attack in the last 12 months, higher than the 43% average across all businesses.

The data implies that fintechs face a realistic risk of phishing attacks. However, it is not just the prevalence of phishing that should concern businesses; it is how the threats are evolving. Phishing is becoming increasingly personalised, with technologies including artificial intelligence (AI) being exploited by criminals to make attacks more targeted and plausible. This is changing the threat landscape and fuelling ‘spearphishing’.

Personalised and plausible phishing attacks

Fintechs are renowned for embracing digital technologies to drive innovation and transform financial services. The same pioneering ethos that has inspired the development of products and services has also been applied to cybersecurity – and, inadvertently, this is a key reason phishing attacks are favoured by cybercriminals.

Threat actors are well aware of how fintechs utilise the likes of blockchain, AI and big data analytics to enhance security protocols. Businesses have robust systems and processes in place that regularly assess software vulnerabilities to mitigate cyber breaches. They also run effective monitoring that quickly identifies any suspect activity and triggers responsive measures. Attempting to beat technology and software is becoming increasingly complex, and if criminals do manage to gain unauthorised entry, they know they face a race against a fast-ticking clock before they are detected and restricted.

Gaining genuine, trusted access credentials can prove a quicker and more effective route for bypassing robust security defences, which is where phishing attacks come in. Criminals see humans as the weak link in a chain of sophisticated cybersecurity measures, believing staff can be deceived into unwittingly sharing sensitive information including passwords, or downloading malware.

Common phishing attacks involve emails, phone calls and text messages. In such cases, the threat actor poses as a legitimate source – someone the intended victim knows or trusts. This may be an email that appears to originate from a senior colleague requesting sensitive information, which seems logical and believable. Or it could be a message from another business function, such as HR or IT, alerting the recipient to a supposed issue and asking them to verify details. These attacks can be easy to miss, especially when they occur during a busy working day. Cybercriminals realise employees are likely to be contending with multiple tasks and deadlines, and prey on this to exploit people when their focus is elsewhere.

Spearphishing has the same modus operandi as phishing but is even more targeted and sophisticated. A common tactic in spearphishing campaigns is embedding malicious macros in Microsoft Office documents. Attackers often lure employees into enabling macros by claiming the document is protected or requires content to be enabled for proper viewing. Once macros are activated, the document executes code that can install malware, exfiltrate data, or give attackers a foothold into the network.
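
To make the mechanics concrete, the sketch below (Python, standard library only) shows one way a security team might triage inbound Office attachments. Modern macro-enabled formats such as .docm and .xlsm are ZIP archives, and an embedded VBA project is stored in a part named vbaProject.bin. This is a minimal illustrative check under those assumptions, not a substitute for sandboxing or a full malware-analysis pipeline.

```python
import sys
import zipfile


def contains_vba_macros(path: str) -> bool:
    """Return True if an OOXML Office file carries an embedded VBA project.

    Modern Office formats (.docm, .xlsm, .pptm, or a renamed .docx) are ZIP
    archives; VBA macros live in a part called 'vbaProject.bin'.
    """
    try:
        with zipfile.ZipFile(path) as archive:
            return any(name.lower().endswith("vbaproject.bin")
                       for name in archive.namelist())
    except zipfile.BadZipFile:
        # Legacy binary formats (.doc, .xls) are not ZIP-based and need a
        # dedicated parser such as oletools' olevba.
        return False


if __name__ == "__main__":
    # Illustrative usage: pass attachment file paths on the command line.
    for attachment in sys.argv[1:]:
        flag = "macros present" if contains_vba_macros(attachment) else "no macros found"
        print(f"{attachment}: {flag}")
```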

Threat actors are increasingly turning to artificial intelligence to make these attacks even more effective.

AI-powered spearphishing

Cybercriminals are exploiting AI to make spearphishing attacks extremely personalised, making requests for user credentials seem increasingly genuine. AI’s capabilities are also being used to make attacks scalable.

Generative AI is frequently used to produce thousands of unique, native-language lures quickly. In this way, sophisticated technology is exploited to create vast volumes of scam emails that seem legitimate, because the language appears more authentic and less suspicious. For example, AI can deliberately include the typical spelling and grammatical errors a human writer would make. Likewise, colloquial terms and slang can be incorporated into messages, making them seem as though a person has written them.

AI can also be used to harvest and analyse data about the target of the attack, as well as the party that’s requesting information. Insight about individuals can be scraped from social media, as well as other readily available sources of information, including biographies on company websites. Information is used to create emails that feature genuine and relevant references, making an email from a supposed colleague seem even more authentic. In this sense, the AI is enabling the threat actor to more accurately impersonate a trusted source.

The voice-generation and voice-changing capabilities of generative AI are also being used by criminals to convincingly imitate business support services, such as an IT helpdesk that contacts a member of staff and tricks them into divulging confidential and sensitive information. Realistic voices sound human and build trust. It is an evolution of the social engineering scam, creating an eagerness and urgency to resolve a problem that doesn’t exist. IT problems are frustrating at the best of times, and the desire to fix issues can mean employees are less cautious about security risks.

An effective way for fintechs to strengthen defences against AI-powered spearphishing is to educate employees and raise awareness of how threats are being personalised. A simulated, controlled attack exercise can show employees in practical terms how criminals are using AI and build understanding of how risks are evolving.

It is also important to enhance cybersecurity resilience through faster threat identification and sustained intelligence. AI is enabling threat actors to increase the scale of spearphishing, so attacks are more frequent and more varied. Building knowledge of the threat landscape can help to prioritise the attacks that pose the most realistic risk of a breach and stop defences becoming overwhelmed. Informed decisions can be made and proactive steps taken to drive preventative action, helping fintechs stay ahead of potential breaches.

Impersonating trusted brands

Phishing techniques are also evolving to spoof widely trusted and well-known brands. This can involve creating a well-designed website that mimics the real brand, combined with ‘typosquatting’, where criminals register slightly misspelled versions of legitimate domain names.

The typosquatted domain can be used in email addresses during spearphishing to make the message appear to originate from an official, trusted source. The phishing email is also likely to include a link to the misspelled domain, directing the recipient to a site where they will be asked to share sensitive data.

Brand impersonations through domain registration have been a problem for a while, but the opening up of the Top-Level Domain (TLD) space has made it significantly worse. It has created the opportunity for more extensions and variations of genuine website names – an opportunity exploited by criminals, who know it is almost impossible for an organisation to register every possible variation of its brand name. Threat actors often impersonate brands by registering the brand name under an alternative extension, such as [brand name].xyz or [brand name].fun.
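
To illustrate why exhaustive defensive registration is impractical, and how defenders can instead enumerate likely abuse candidates to monitor, the following Python sketch generates simple typosquat variants (dropped, doubled and swapped characters) across a handful of alternative TLDs. The brand name and TLD list are placeholders chosen for the example, not a complete permutation strategy.

```python
# Hypothetical watch list of alternative extensions to monitor.
ALT_TLDS = ["com", "net", "xyz", "fun", "online", "site"]


def typo_variants(name: str) -> set[str]:
    """Generate simple single-edit typo variants of a brand name."""
    variants = set()
    for i in range(len(name)):
        # Missing character, e.g. "docusign" -> "docsign"
        variants.add(name[:i] + name[i + 1:])
        # Doubled character, e.g. "paypal" -> "paypall"
        variants.add(name[:i] + name[i] + name[i:])
        # Adjacent characters swapped, e.g. "docusign" -> "doucsign"
        if i < len(name) - 1:
            variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    variants.discard(name)
    return variants


def candidate_domains(brand: str) -> list[str]:
    """Combine the brand and its typo variants with the TLD watch list."""
    names = {brand} | typo_variants(brand)
    return sorted(f"{n}.{tld}" for n in names for tld in ALT_TLDS)


if __name__ == "__main__":
    # "examplebrand" is an illustrative placeholder for a real brand name.
    for domain in candidate_domains("examplebrand")[:20]:
        print(domain)
```

The output would typically feed a registration watch or takedown workflow rather than an outright blocklist, since some variants may be registered legitimately.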

Criminals are prioritising brand impersonations of genuine organisations like Microsoft and DocuSign, which provide services regularly used by employees. These are also platforms that tend to be strongly associated with the sharing of sensitive data. Staff are familiar with the real brands and often harbour fewer concerns when interacting with them and being asked to input data. A highly personalised, well-worded and well-designed email, combined with a brand impersonation website, can convincingly trick employees into sharing trusted credentials.

These types of attacks are growing and developing, with more sophisticated domain impersonations, including lookalike domains and homoglyph attacks that evade traditional email filters. To protect against this, fintechs should be proactively monitoring domain registrations, ensuring threat intelligence programmes collect and analyse data from across the open, closed, deep and dark webs.
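
One concrete way to catch homoglyph lookalikes is to fold newly observed domains down to a plain-ASCII ‘skeleton’ and compare the result against a watch list of protected brands. The sketch below is a hedged illustration: the confusables table is deliberately tiny, and the watch list and sample domains are placeholders. Real tooling would draw on a full Unicode confusables dataset and a feed of newly registered domains.

```python
# Minimal confusables table: characters (or pairs) that render like Latin letters.
HOMOGLYPHS = {
    "а": "a",   # Cyrillic a
    "е": "e",   # Cyrillic e
    "о": "o",   # Cyrillic o
    "р": "p",   # Cyrillic er
    "с": "c",   # Cyrillic es
    "ѕ": "s",   # Cyrillic dze
    "і": "i",   # Cyrillic i
    "0": "o",
    "1": "l",
    "rn": "m",  # two-character lookalike
}

# Illustrative watch list of brands to protect.
PROTECTED_BRANDS = {"microsoft", "docusign"}


def skeleton(label: str) -> str:
    """Fold a domain label to a plain-ASCII approximation of how it reads."""
    out = label.lower()
    for lookalike, plain in HOMOGLYPHS.items():
        out = out.replace(lookalike, plain)
    return out


def is_lookalike(domain: str) -> bool:
    """Flag domains whose first label reads like a protected brand but is not one."""
    label = domain.split(".")[0]
    return skeleton(label) in PROTECTED_BRANDS and label not in PROTECTED_BRANDS


if __name__ == "__main__":
    for d in ["micr0soft.com", "rnicrosoft.xyz", "docuѕign.net", "microsoft.com"]:
        print(d, "->", "lookalike" if is_lookalike(d) else "ok")
```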

From text-based to image-based phishing

Image-based phishing attacks are an evolution of more traditional, text-based phishing messages. They represent a threat actor’s response to advances in email security filters. Embedded images are used to disguise malicious content or links and slip past filters that scan message text. The image leads unsuspecting employees to credential-harvesting websites or deploys malware that steals authentication credentials.

Attacks of this nature are becoming more complex. In some instances, images are crafted to look like a text-based email, improving their perceived authenticity while still bypassing conventional email filters. Criminals may also continually edit and adapt images, changing colours or dimensions to keep each variant fresh: if an original image is flagged as suspicious, the slightly adapted variations still stand a chance of slipping through email filters.
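
A simple defensive heuristic that complements text-based filtering is to flag messages whose visible body is almost entirely made up of inline images. The sketch below uses Python’s standard email library and assumes raw .eml files as input with arbitrarily chosen thresholds; it is a triage signal for further review, not a verdict.

```python
import sys
from email import policy
from email.parser import BytesParser


def looks_image_heavy(raw: bytes, max_text_chars: int = 200) -> bool:
    """Flag messages that carry images but very little readable text."""
    msg = BytesParser(policy=policy.default).parsebytes(raw)
    text_chars = 0
    image_parts = 0
    for part in msg.walk():
        ctype = part.get_content_type()
        if ctype.startswith("text/"):
            text_chars += len(part.get_content().strip())
        elif ctype.startswith("image/"):
            image_parts += 1
    return image_parts > 0 and text_chars < max_text_chars


if __name__ == "__main__":
    # Illustrative usage: pass .eml file paths on the command line.
    for path in sys.argv[1:]:
        with open(path, "rb") as fh:
            verdict = "review" if looks_image_heavy(fh.read()) else "pass"
        print(f"{path}: {verdict}")
```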

The transition from text-based to image-based phishing messages, as well as the quick adoption and manipulation of AI by criminals, shows how threat actors are advancing phishing techniques. Threats are ever-changing and becoming increasingly personalised to maximise the success of an attack. It’s crucial that fintechs stay ahead of this evolution by building threat intelligence that enables risks to be proactively identified and mitigated. Being defensive and reactive is no longer an option when criminals are increasing the scale and plausibility of phishing attacks.

About Recorded Future:

Recorded Future is the world’s largest threat intelligence company. Recorded Future’s Intelligence Cloud provides end-to-end intelligence across adversaries, infrastructure, and targets. Indexing the internet across the open web, dark web, and technical sources, Recorded Future provides real-time visibility into an expanding attack surface and threat landscape, empowering clients to act with speed and confidence to reduce risk and securely drive business forward. Headquartered in Boston with offices and employees around the world, Recorded Future works with over 1,900 businesses and government organizations across more than 80 countries to provide real-time, unbiased and actionable intelligence. Learn more at recordedfuture.com
