When the fraud looks real: AI’s role in detecting what the human eye misses

By Yifei Wang, AI lecturer at the University of Melbourne, and co-founder & CEO of AIBUILD

Australians lost $2.18 billion to scams in 2025 – a 7.8% increase on the year prior, despite total report volumes holding steady. Losses are rising even as awareness grows, and that tells you something important: the scams are getting better.

At AIBUILD, through our work on the BRII Cyber Security Challenge, we’ve had a front row view of how fast these threats are evolving. What used to require criminal networks and technical expertise can now be produced in minutes with a consumer AI tool. The kind of fraud that’s hitting Australians now is sharper, more personalised, and increasingly difficult to distinguish from the real thing.

Old-school fraud detection is no longer enough

Australia’s major banks have used machine learning to detect fraud for over a decade. However, while these existing systems are effective for traditional fraud, emerging AI-generated identity and content fraud requires an additional detection layer.

The next generation of fraud doesn’t trip a transaction wire; it presents a person who doesn’t exist, with documents that look genuine, and a history that checks out. Detecting that requires a fundamentally different layer of AI; one trained to interrogate whether the person behind a transaction is real, not just whether the transaction itself looks normal.

Phishing scam losses in Australia nearly tripled year-on-year, rising from $4.6 million to $13.7 million when comparing the first four months of 2024 against the same period in 2025. Same window, twelve months apart, almost three times the damage.

These weren’t mass spam attacks either, but ones that were targeted, convincing, and increasingly difficult to tell apart from legitimate communications. This is because AI tools are now being used to write the messages and generate the imagery that makes them look real.

AI-powered fraud detection works differently. 

Instead of following a fixed checklist, it learns what normal behaviour looks like for a given customer, account, or device, and flags anything that breaks that pattern. It doesn’t need to have seen that exact type of fraud before; it just needs to notice that something doesn’t quite add up.
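To make that idea concrete, here is a deliberately minimal sketch of per-customer baselining. The function names, the transaction figures, and the three-standard-deviation threshold are all invented for illustration; production systems use far richer features than a single amount.

```python
from statistics import mean, stdev

def fit_baseline(amounts):
    """Learn a simple 'normal' profile from a customer's past transaction amounts."""
    return mean(amounts), stdev(amounts)

def is_anomalous(amount, baseline, threshold=3.0):
    """Flag a transaction that deviates sharply from this customer's own history."""
    mu, sigma = baseline
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# Hypothetical customer: modest, consistent weekly spending.
history = [42.50, 38.00, 55.10, 47.25, 40.80, 51.00]
baseline = fit_baseline(history)

is_anomalous(48.00, baseline)    # an ordinary amount for this customer
is_anomalous(4800.00, baseline)  # a sudden transfer far outside their pattern
```

The point of the sketch is the framing, not the arithmetic: the system never needs a rule that says "transfers over $X are fraud". It only needs to know what is normal for this customer, so the same $4,800 transfer that is routine for one account is a red flag on another.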

Fake faces, fake documents, fake people

The most dangerous fraud happening in Australia right now doesn’t involve breaking into systems or stealing passwords. It involves making entirely fabricated content look real, and look as though it comes from a legitimate company or the government.

This isn’t theoretical for AIBUILD. We were one of five companies selected nationally by the Australian Government under the Business Research and Innovation Initiative (BRII) Cyber Security Challenge – a program built around one very specific problem: how do you verify that information claiming to be from an official source is actually real?

That question sits at the heart of everything happening in fraud right now. AI image generation tools available through mainstream platforms can now produce fake identity documents, realistic profile photos, and official-looking correspondence that is increasingly difficult for human reviewers to identify as false.

This isn’t the blurry, obviously fake content of a few years ago. These are clean, convincing outputs that are passing the checks that banks, government agencies, and financial institutions currently rely on.

Working on the BRII project gave us a direct view into how quickly AI-generated content can undermine trust in communications that people have every reason to believe are genuine: a fake message that looks exactly like one from your bank, a document that mirrors an official government letter, or a face attached to an identity that doesn’t actually exist. The same technology that fraudsters are using to scam Australians out of billions is the technology we are building tools to detect.

Both ASIC and the Australian Cyber Security Centre have publicly flagged AI-generated content as a growing driver of scam activity, with ASIC’s Commissioner warning that deepfake videos are being used to lure Australians into fake investment schemes. Everything we see in our work in 2026 confirms that this is accelerating, not slowing down.

The most effective solution is to use AI to detect AI. 

This doesn’t mean replacing human judgement, but augmenting it with systems capable of detecting signals that human reviewers can’t reliably see and assess. 

At AIBUILD, we build models trained to spot the tiny inconsistencies that AI-generated images and documents leave behind. These giveaways may include visual artefacts, language-pattern anomalies, document-structure irregularities and behavioural signals across different accounts and devices.
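In its simplest possible form, that approach combines the output of several weak detectors into one risk score. The signal names and weights below are illustrative placeholders – each boolean stands in for a whole model (an artefact detector, a language model, a document-structure checker), not a real check:

```python
# Hypothetical weights: how much each detector contributes to the overall score.
SIGNAL_WEIGHTS = {
    "visual_artefact": 0.4,        # stands in for an image-forensics model
    "language_anomaly": 0.3,       # stands in for a text-pattern model
    "structure_irregularity": 0.2, # stands in for a document-layout checker
    "cross_device_mismatch": 0.1,  # stands in for a behavioural signal
}

def fraud_risk(signals):
    """Combine per-detector flags into a single 0-1 risk score."""
    return sum(SIGNAL_WEIGHTS[name] for name, fired in signals.items() if fired)

# A document where two of the four detectors fired.
doc = {"visual_artefact": True, "language_anomaly": True,
       "structure_irregularity": False, "cross_device_mismatch": False}

escalate = fraud_risk(doc) >= 0.5  # above this threshold, route to a human reviewer
```

The value of stacking weak signals is that no single detector has to be decisive. A clean-looking fake might beat any one check, but it is much harder to beat all of them at once.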

The cost of getting it wrong in both directions

There are two ways fraud detection can fail. The obvious one is missing the actual fraud. The less visible one is flagging real customers as suspicious: blocking a legitimate transaction, rejecting a genuine application, or freezing an account that belongs to a real person.

Both carry a cost. AI systems that are built and managed well reduce these unnecessary blocks by building a fuller picture before making a decision. Rather than simply asking “is this unusual?”, a well-designed system asks “is this unusual for this specific person, at this time of day, on this device, given their history?” That extra context means fewer genuine customers get caught in the net.
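That contextual question can be sketched as a rule: only block when several independent checks disagree with the customer’s own profile. The profile fields and thresholds here are hypothetical, invented for this example – the design point is that requiring multiple dissenting signals before blocking is what keeps genuine customers out of the net.

```python
def context_flags(txn, profile):
    """Check the transaction against this customer's own history, not a global rule."""
    flags = []
    if txn["amount"] > 3 * profile["typical_amount"]:
        flags.append("amount")
    if txn["hour"] not in profile["usual_hours"]:
        flags.append("time")
    if txn["device"] not in profile["known_devices"]:
        flags.append("device")
    return flags

def decision(txn, profile):
    """Block only when several independent signals disagree with the profile."""
    n = len(context_flags(txn, profile))
    return "block" if n >= 2 else ("review" if n == 1 else "allow")

# Hypothetical customer profile: daytime spender, one known device.
profile = {"typical_amount": 120.0,
           "usual_hours": set(range(8, 22)),
           "known_devices": {"phone-a1"}}

decision({"amount": 150.0, "hour": 20, "device": "phone-a1"}, profile)  # allow
decision({"amount": 900.0, "hour": 3, "device": "laptop-x9"}, profile)  # block
```

A single odd signal – a new phone, say – earns a review rather than a freeze; it takes an unusual amount, at an unusual hour, from an unknown device before the system blocks outright.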

This is also becoming a legal requirement. The Scams Prevention Framework now rolling out in Australia requires banks, telecommunications companies, and digital platforms to actively show that they are protecting customers from scams, and to explain how their systems work and why decisions are made. Vague assurances are no longer enough.

The time to act is now

Fraud losses in Australia are rising even as awareness and reporting improve. The tools available to criminals are advancing quickly, and the gap between what older detection systems can catch and what AI-enabled fraud can now get away with is widening every month.

AIBUILD’s work across image and message analysis and AI generation detection shows that the technology to meet this threat already exists. The fraudsters are already using it; the question is whether financial institutions are moving fast enough to deploy it on the other side.
