By Fred Slikker, Managing Director at Digidentity
The conversation around fraud in Australia is increasingly dominated by the potential of AI-driven detection. Financial institutions, fintech companies, and payment processors are prioritising models designed to monitor transactions as they happen, identifying anomalies to halt questionable transfers immediately. While these advancements are important, they can overlook a more unsettling issue: the possibility that an individual should never have been granted access to the system in the first place.
AI is changing how fraud is created
This is where the financial services sector needs to sharpen its focus. AI is changing fraud detection, but it is also changing fraud creation. Criminals can now generate more convincing documents, build synthetic identities, imitate voices and faces, and automate interactions at a speed that rule-based systems were never designed to handle.

The consequences are already evident across Australia. According to the National Anti-Scam Centre, Australians filed over 481,000 scam reports in 2025, with total losses climbing to $2.18 billion. While the number of reports has levelled off, the 7.8 per cent year-on-year increase in financial losses indicates that criminals are successfully securing larger payments through fewer touchpoints. Investment scams remained the most damaging, totalling $837.7 million in losses, followed by other prevalent methods such as payment redirection, phishing, remote access, and romance scams.
The public is also starting to encounter AI-enabled fraud in more personal ways. CommBank research released earlier this year found that 27 per cent of Australians had witnessed a deepfake scam in the previous year. The most common examples were investment scams, business email compromise and relationship scams. That should concern every financial institution. Deepfakes and synthetic identities attack the foundations of trust. They are designed to make a fake person, fake instruction or fake relationship look legitimate enough to pass through digital systems built for speed and convenience.
Real-time monitoring still starts too late
For years, fraud controls have leaned heavily on rules: an unusual transaction size, a login from a strange location, rapid movement of funds, a new device, or changes to account details. Those rules are still useful. But they are blunt instruments in an environment where criminals can simulate normal behaviour, test controls at scale and use stolen or fabricated identity material to create a more convincing starting point.
AI improves detection by looking across more signals. It can identify anomalies faster than a human team, connect patterns across large datasets and flag behaviour that may otherwise go unnoticed. In banking and payments, real-time monitoring is becoming essential. But real-time monitoring has one important limitation: it assumes you know who you are monitoring.
A synthetic identity may not look suspicious on day one. It may use a mix of real and fabricated details, pass basic data checks, behave conservatively and build credibility over time. By the time unusual behaviour appears, the account may already have access to credit, payment rails, sensitive customer services or business systems.
The first line of defence is identity
That is why the first line of defence needs to move earlier. Before someone opens an account, accesses a financial product or acts on behalf of a business, providers need high confidence that the person is who they say they are.
That means checking a government-issued identity document, confirming the person is physically present through a live biometric check, and using additional signals such as device analysis to assess risk before access is granted. No single control is enough. The answer to synthetic identity fraud is layered defence.
This also changes the way financial institutions should think about false positives. Fraud prevention and customer experience are often treated as opposing forces. The assumption is that stronger checks create more friction, while faster onboarding creates more risk. That trade-off is becoming outdated.
Better identity verification at the front end can reduce friction downstream. When a financial institution has high confidence in a customer’s identity from the start, every later check becomes more accurate. Legitimate customers are less likely to be incorrectly blocked. Risk teams can focus attention where it is actually needed. False positives become easier to reduce because the underlying identity signal is stronger.
Australia’s reforms raise the stakes
This is particularly important as Australia’s regulatory environment tightens. AUSTRAC’s AML/CTF reforms are designed to close gaps in the financial system that organised crime has exploited, with professional services required to fully comply with AML/CTF obligations from July.
That reform will bring more sectors into the financial crime compliance ecosystem, including lawyers, accountants, real estate professionals and others involved in high-value transactions. It will also raise a practical challenge: identity checks will happen across more points in the customer lifecycle, often across organisations with different systems, standards and levels of maturity.
If identity remains fragmented, fraudsters will exploit the weakest entry point. The direction of travel should be toward reusable, high-assurance identity. Verify someone thoroughly once, then allow that verified identity to be reused securely in other settings where trust matters.
We are already seeing this model emerge in adjacent sectors. In Australia’s motor vehicle repair sector, Solera Autodata and Digidentity have launched an MVIS-compliant solution that enables workshops to verify technicians and manage access to hybrid and EV repair data. The model is simple. The technician verifies once through our digital wallet app, then uses that verified identity to access services where proof of identity and credentials matters.
The next fraud battleground is trust
AI will remain essential to fraud detection. It will help financial institutions identify unusual behaviour, respond faster and adapt to new attack patterns. But AI cannot compensate for weak trust at the point of entry. If a criminal can create a convincing synthetic identity and pass onboarding, the rest of the system is already playing catch-up. Australia’s fraud challenge is now a trust challenge. The institutions that respond best will be the ones that combine AI monitoring with stronger identity foundations, so they can stop more fraud without making genuine customers feel like suspects.