AI is baseline in fraud and AML, but complexity drives higher costs
AI is now standard in fraud and AML operations, with 98% of organizations using it and 95% confident in its effectiveness.
Artificial intelligence is now standard in fraud detection and anti-money laundering. Yet for many teams, the job is not getting easier. A new report from SEON, AI Reality Check: 2026 Fraud & AML Leaders Report, surveyed 1,010 fraud, risk, and compliance leaders across payments, fintech, financial services, retail, eCommerce, and gaming. The results show near-universal AI adoption, paired with rising budgets, expanding teams, and persistent operational strain.
The findings land as banks confront real cases of AI-enabled fraud. In one recent example, Commonwealth Bank of Australia referred a suspected A$1 billion home loan fraud to police, with some loan documents allegedly generated using artificial intelligence. While the loans are secured against property and still being repaid, the scale of the case shows how quickly AI-assisted fraud can grow.
AI adoption is high, but so is risk
98% of respondents said their organizations already use AI in fraud and AML workflows. Only 2% are still planning deployment. Confidence is strong as well. About 95% believe AI can detect and prevent fraud, with more than half saying they are very confident.
Still, that confidence has not reduced concern.
Compared with last year, far fewer leaders disagreed with the statement that fraud losses are growing faster than revenue: the share who pushed back on that idea dropped by nearly 40%. In short, fraud is keeping pace with business growth, if not outstripping it.
The Commonwealth Bank investigation reflects that tension. Even with heavy investment in fraud controls, banks are facing forged documents, synthetic identities, and AI-generated materials that are harder to detect.
Account takeovers remain the top reported threat at 26%. Promo and discount abuse, along with return fraud, each stand at 18%. A quarter of leaders also point to criminals' growing use of AI and obfuscation methods as a rising concern.
PwC's Penny Dunn warned that AI can create "more sophisticated documentation forgery and also synthetic identities and deepfakes," adding that such materials are "very difficult for the human eye to see."
Spending and staffing keep growing
If AI were reducing workload, budgets might be flat. Instead, spending is climbing.
83% of respondents expect fraud and AML budgets to increase in 2026. 94% plan to hire at least one full-time employee, up from 88% the year before. 85% expect to add a new vendor, while nearly half plan to replace one.
Commonwealth Bank said it invested A$900 million last financial year to protect customers from fraud, scams, cyber threats, and financial crime. That level of spending mirrors what many leaders in the report describe: AI adoption is not cutting costs. It is driving further investment.
Most respondents see AI agents as support tools rather than replacements. 85% view them as augmenting staff. Only 12% expect them to eventually replace roles.
"Fraud and financial crime were supposed to become more manageable as AI matured," said Tamas Kadar, CEO and co-founder, SEON.
"Instead, 2026 is the year leaders are confronting a more complicated reality. AI adoption is real, confidence is high, but the scale and pace of fraud — compounded by fragmented systems — continue to drive increased investment rather than reduced overhead. The bottleneck is no longer whether AI works. It's everything around it: disconnected data, siloed teams, slow implementations. The organizations that pull ahead will be the ones that unify fraud and AML intelligence, shorten the distance between threats and controls, and treat integration as strategy, not plumbing," Kadar said.
Integration and trust are the next hurdles
While 95% of leaders say their fraud and AML systems have some level of integration, only 47% run fully connected workflows. The rest rely on partial links between tools. 80% say getting a unified view of data is challenging.
Deployment timelines also remain slow. Only 10% can go live with new systems in under two weeks. For many, implementation takes one to three months or longer. When projects drag on, 52% report higher costs, and 47% say they face longer exposure to fraud.
Fast-growing companies appear to handle this better. Organizations expanding by more than 51% are almost twice as likely as slower peers to say unified visibility is not very challenging. They tend to treat integration as core infrastructure rather than an afterthought.
As AI becomes standard, the debate is shifting. The question is less about whether it works and more about whether it can be trusted and governed properly. 78% believe decentralized digital identity will become central to fraud and AML efforts. A third point to data privacy rules, including GDPR and CCPA, as the biggest outside force shaping AML.
Recent consumer research from Commonwealth Bank adds another layer. While 89% of Australians believe they can spot a deepfake, only 42% could correctly identify one in testing — a gap that suggests overconfidence may be part of the problem.
AI may now be baseline in fraud prevention. But as these findings show, wider adoption has not simplified the fight. For many organizations, it has made the scale of the challenge clearer.