By Jason Duerden, AVP Australia & New Zealand, SentinelOne
In just a few years, deepfakes have gone from a tech novelty to a genuine threat. Anyone can now create lifelike images, video, or audio in seconds, making it appear that someone said or did something they never did. And as the technology advances, it will only get faster, cheaper, and more convincing. For the finance industry, where millions can change hands in an instant, the warning is clear: deepfakes are a serious, global threat, and organisations must act now to safeguard themselves.

Now imagine it’s Friday afternoon and you’re about to clock off for the weekend. A video message pops up from a senior manager asking for an urgent fund transfer or an account change. The video looks real. The voice sounds real. Everything feels legitimate. You act, and minutes later, a significant sum is gone. Not long ago, this would have seemed far-fetched. In 2026, it’s a reality every business could face.
But deepfakes aren’t just about CEO scams or high-value transfers. They fundamentally change the rules of fraud. Criminals no longer need to hack systems or steal passwords. They can impersonate anyone, whether a junior finance officer, a new account manager, or a trusted partner, and convince staff to approve payments or share sensitive information. One video, one audio clip, one request: the risks multiply.
This makes traditional identity verification methods less reliable. Face-to-face checks, phone calls, or video approvals can no longer be treated as proof of authenticity. Even biometric systems like voice or facial recognition can be fooled.
So, how should Australian banks, fintechs, and corporate finance teams respond when familiar faces and voices can no longer be trusted?
Fraud in the age of deepfakes
The finance sector is particularly exposed because so much depends on trust in identity. Transactions, approvals, and account changes often rely on the assumption that the person on the other end is who they appear to be. Deepfakes put that trust at risk in multiple ways.
Executive impersonation and corporate fraud are perhaps the most headline-grabbing risks. Fraudsters can create videos or audio clips of CEOs or CFOs instructing employees to transfer funds, approve loans, or release sensitive information. In 2024, a finance employee at UK-based engineering firm Arup joined a video call in which a deepfake of the company’s CFO instructed a transfer of around US$25 million to accounts in Hong Kong. Believing the call was real, the employee authorised the payments, only to discover later that the firm had fallen victim to sophisticated fraud. The incident shows just how convincing these attacks can be and serves as a warning that similar schemes could target businesses anywhere, including Australia.
Onboarding and identity checks, often called KYC or “Know Your Customer” processes, are a critical line of defence for financial institutions. They make sure the person opening an account or accessing services is who they say they are. But deepfakes are putting that system at risk. Criminals can use face swaps or replicate voices to pass biometric checks, opening accounts under fake identities or taking over existing ones.
Stock manipulation and misinformation are emerging concerns as well. Fake videos of executives announcing acquisitions, earnings changes, or policy updates can trigger panic selling or artificial buying before the real news even emerges, affecting share prices and investor confidence.
Preventing fraud in a deepfake world
Detection alone is not enough. Tools that flag manipulated videos or audio are important, but reactive. Sophisticated fraudsters can refine deepfakes based on system responses, making each attempt harder to catch. In high-risk areas like payments, account changes, and executive approvals, this creates a persistent, evolving threat.
To stay ahead, organisations need a layered approach that goes beyond detection:
1. Out-of-band verification
Critical requests should be confirmed through multiple channels. Don’t rely solely on a video or voice call. High-value transactions, account changes, or sensitive approvals should be verified through secure messaging, in-person sign-off, or cryptographic signatures. Independent verification breaks the chain of trust that deepfakes exploit.
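To make the idea concrete, here is a minimal sketch of out-of-band confirmation using a pre-shared secret and an HMAC over the request details. All names, fields, and the keyed-hash scheme are illustrative assumptions, not a description of any specific product; real systems would typically use hardware-backed keys or signed approval workflows.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: a transfer is only executed once a confirmation
# arrives over an independent channel, proven by an HMAC over the request
# details computed with a secret shared out-of-band (e.g. at onboarding).
CHANNEL_SECRET = secrets.token_bytes(32)

def request_fingerprint(payee: str, amount_cents: int, nonce: str) -> str:
    """Keyed hash binding the approval to exactly this request."""
    msg = f"{payee}|{amount_cents}|{nonce}".encode()
    return hmac.new(CHANNEL_SECRET, msg, hashlib.sha256).hexdigest()

def confirm_out_of_band(payee: str, amount_cents: int,
                        nonce: str, received_tag: str) -> bool:
    """Verify the tag received via the second channel (constant-time)."""
    expected = request_fingerprint(payee, amount_cents, nonce)
    return hmac.compare_digest(expected, received_tag)

# The approver, reached over a separate channel, computes the tag themselves.
nonce = secrets.token_hex(8)
tag = request_fingerprint("ACME Supplies", 2_500_000, nonce)
assert confirm_out_of_band("ACME Supplies", 2_500_000, nonce, tag)
# A request altered in transit (or never approved) fails verification.
assert not confirm_out_of_band("Fraudster Ltd", 2_500_000, nonce, tag)
```

The key property is that a convincing video or voice alone cannot produce a valid tag: approval requires possession of the secret, which travels over a channel the attacker does not control.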
2. Employee training
Staff awareness is key. Surveys suggest Australians correctly distinguish real images from AI-generated ones only 42% of the time, worse than a random guess. Employees should be trained to recognise unusual communication patterns and to verify instructions independently. A culture of caution can stop attacks before they happen.
3. Advanced authentication
Where possible, use technology that goes beyond simple biometrics. Liveness detection, micro-expression analysis, and cryptographically verifiable media make it much harder for fraudsters to succeed. Some organisations are exploring blockchain-based verification to confirm the authenticity of sensitive content at the point of creation.
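As a simplified illustration of verifying media "at the point of creation", the sketch below registers a file's SHA-256 digest in an append-only log when it is produced, then checks any later copy against that record. The function names and the in-memory log are hypothetical; production systems would use signed provenance manifests (e.g. C2PA-style credentials) or a distributed ledger rather than a local list.

```python
import hashlib
import time

# Illustrative only: an append-only provenance log of content digests.
provenance_log = []

def register(content: bytes) -> str:
    """Record a digest for newly created media; returns the hex digest."""
    digest = hashlib.sha256(content).hexdigest()
    provenance_log.append({"ts": time.time(), "sha256": digest})
    return digest

def is_authentic(content: bytes) -> bool:
    """True if this exact content was registered at creation time."""
    digest = hashlib.sha256(content).hexdigest()
    return any(entry["sha256"] == digest for entry in provenance_log)

original = b"CEO all-hands video, raw bytes"
register(original)
assert is_authentic(original)
# Any tampering, however small, changes the digest and fails the check.
assert not is_authentic(b"CEO all-hands video, raw byteZ")
```

The design choice here is tamper evidence, not secrecy: the log proves a given clip existed unaltered at a known time, so a deepfaked variant cannot masquerade as the original.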
4. Clear policies and governance
Define procedures for handling high-risk transactions, verifying communications, and authenticating digital content. These rules should be regularly updated to account for new techniques and evolving threats. When everyone knows the protocol, there’s less room for mistakes.
5. AI-assisted monitoring with human oversight
Detection tools remain useful. Multi-layered AI monitoring can flag anomalies in video, audio, or document submissions. But AI alone is not enough. Human oversight is crucial to catch edge cases and interpret the alerts in context.
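The "flag, don't auto-block" principle can be sketched as a toy triage function that scores payment requests and routes anything suspicious to a human review queue instead of acting automatically. The features, thresholds, and class names are invented for illustration; a real system would score far richer signals.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount: float            # transfer amount in dollars
    new_payee: bool          # first payment to this account?
    requested_via_video: bool  # arrived over a channel deepfakes can spoof

def risk_score(req: PaymentRequest) -> int:
    """Toy additive score; thresholds are illustrative assumptions."""
    score = 0
    if req.amount > 100_000:
        score += 2  # unusually large transfer
    if req.new_payee:
        score += 2  # no payment history with this account
    if req.requested_via_video:
        score += 1  # video/voice requests deserve extra scrutiny
    return score

def triage(req: PaymentRequest, review_queue: list) -> str:
    """Route risky requests to a human; never auto-approve them."""
    if risk_score(req) >= 3:
        review_queue.append(req)  # a person makes the final call
        return "needs_human_review"
    return "auto_processed"

queue = []
assert triage(PaymentRequest(250_000, True, True), queue) == "needs_human_review"
assert triage(PaymentRequest(500, False, False), queue) == "auto_processed"
assert len(queue) == 1
```

The point of the structure is the hand-off: the model narrows attention, but a human interprets the alert in context before money moves.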
No single solution will stop deepfakes. Organisations need a combination of careful verification, staff awareness, advanced authentication, clear policies, and intelligent monitoring. When layered together, these measures create a far more resilient defence against a threat that is constantly evolving.
Staying one step ahead
Deepfakes are evolving fast, and so must the financial industry. Across Australian banks, fintechs, and corporate finance, trust can no longer be assumed; it has to be verified. When a fabricated face or voice can look more believable than the real thing, identity verification becomes the foundation of digital credibility. Detection, strong guidelines, and staff training remain essential: tools can flag manipulated media, policies can define how high-risk transactions are handled, and employees can learn to recognise unusual requests and verify instructions independently.
The battle between creating and detecting illusions is ongoing, which is why a layered approach is crucial, starting with the user and extending to AI-assisted monitoring. Organisations that act now, layering verification, training, and monitoring, will be the ones that protect their money, their people, and their reputation when the next deepfake attack hits.

