Outthinking the Mules: Evolving AI and Strategy in the Fight Against Financial Crime
- TrustSphere Network

- Jun 9, 2025
- 4 min read

Money mules—once a peripheral threat—have become central players in the architecture of financial crime. As criminals grow bolder and more technologically advanced, the tools used to detect and disrupt these networks must evolve just as quickly.
At the Fraud & Financial Crime 2025 conference hosted by Regulation Asia, experts from across the financial ecosystem offered urgent insight into the limitations of current systems and the transformative potential of a more dynamic, AI-enabled approach.
One theme emerged consistently: traditional AI and static models are no longer sufficient to keep pace with the speed and complexity of modern mule operations.
Anthony Hope, Group Head of AML, CTF and Fraud Risk at National Australia Bank, noted that data from the pre-pandemic era is increasingly irrelevant in the post-COVID landscape. Money mule behaviour has changed drastically—new typologies are fluid, digital-first, and far more nuanced than those seen just a few years ago. As a result, many banks relying on outdated models are missing critical indicators.
David Hardoon, Global Head of AI Enablement at Standard Chartered, underscored the failure of traditional ontologies—AI frameworks that assume structured, predictable behaviours. Financial crime, he argued, is now defined by its unpredictability. Layered, evolving behaviours blur the line between fraud, AML, and cybercrime. Rigid rules and outdated assumptions are simply not fit for purpose in this new environment.
These insights point to the need for AI systems capable of learning, adapting, and retraining in real time. Banks must pivot from relying solely on historic patterns to detecting deviations in behaviour—what Hardoon called the “signal in the noise.” It’s no longer about matching red flags to past cases, but about understanding behavioural drift and context.
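Behavioural-drift detection of this kind can be sketched in miniature: instead of matching each transaction against fixed red flags, score it against the account's own rolling baseline. The class below is an illustrative toy, not any bank's production model; the window size, z-score threshold, and minimum-history values are assumptions chosen for readability.

```python
from collections import deque
import statistics

class DriftMonitor:
    """Flag transactions that deviate sharply from an account's recent baseline."""

    def __init__(self, window: int = 50, threshold: float = 3.0, min_history: int = 10):
        self.history = deque(maxlen=window)   # rolling baseline of recent amounts
        self.threshold = threshold            # z-score beyond which we flag "drift"
        self.min_history = min_history        # observations needed before scoring

    def observe(self, amount: float) -> bool:
        """Score the new amount against the baseline, then fold it into history."""
        flagged = False
        if len(self.history) >= self.min_history:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0  # avoid division by zero
            flagged = abs(amount - mean) / stdev > self.threshold
        self.history.append(amount)
        return flagged

monitor = DriftMonitor()
for amt in [50, 60, 55, 52, 58, 61, 49, 57, 53, 59]:
    monitor.observe(amt)          # build a baseline of routine spending
print(monitor.observe(5000))      # sudden large transfer drifts from baseline: True
```

A real system would score many features at once (velocity, counterparties, device signals), but the principle is the same: the reference point is the customer's own behaviour, not a static rule.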
Chris Huang of DBS painted a compelling picture of the evolution from physical mules—individuals carrying illicit cash across borders—to “crypto mules” executing high-velocity transfers that disappear into the blockchain within minutes. The challenge now lies in real-time intervention. At DBS, this has meant investing in pre-transaction monitoring and early account flagging systems. The bank reports that more than 90% of mule accounts it blocks never follow up, a powerful validation of proactive detection.
Mike Yeardley of Feedzai offered a stark reflection: the industry is playing catch-up. Mule detection has long been underfunded, siloed, and deprioritised in favour of higher-profile fraud types. But the consequences of that oversight are becoming clear. Mule networks are at the heart of scams, synthetic identities, and money laundering pipelines. Focusing on the integrity of customer relationships—understanding who the person behind the transaction really is—must become a foundational pillar of future detection models.
Technology alone, however, is not enough. The panelists called for a strategic shift in how banks engage with customers and respond to risk. Hope argued for a more nuanced model of intervention. Not every anomaly should trigger a full-blown investigation. Sometimes, a phone call, SMS verification, or contextual query can be enough to clarify a transaction—or deter a mule in action. This graduated response model not only reduces false positives but respects the customer experience.
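A graduated response model of the kind Hope describes is, at its core, a mapping from risk score to the least intrusive effective action. The sketch below is purely illustrative; the band edges and action names are assumptions, and a real deployment would calibrate them against false-positive rates and customer-experience data.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"                 # no friction at all
    VERIFY = "sms_verification"     # light-touch check-in with the customer
    CALL = "outbound_call"          # human contact to clarify the transaction
    INVESTIGATE = "case_review"     # full investigation, reserved as a last resort

def graduated_response(risk_score: float) -> Action:
    """Map a 0-1 risk score to the least intrusive effective intervention.

    Band edges are illustrative placeholders, not calibrated thresholds.
    """
    if risk_score < 0.3:
        return Action.ALLOW
    if risk_score < 0.6:
        return Action.VERIFY
    if risk_score < 0.85:
        return Action.CALL
    return Action.INVESTIGATE

print(graduated_response(0.45))   # a mild anomaly earns an SMS, not a case file
```

The design point is that most anomalies resolve at the cheap end of the ladder, which is exactly how false positives stop translating into customer friction.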
Customer education emerged as another vital, yet underleveraged, line of defence. Generic warnings have limited effect—particularly among younger users who may not even recognise their actions as criminal. The growing trend of students selling accounts after graduation, for example, highlights the need for targeted, timely education campaigns. Hardoon proposed taking a page from the playbooks of Uber Eats or Amazon: use hyper-personalised messages based on transaction history, risk signals, and customer profiles to deliver meaningful, timely warnings. This kind of contextual education creates “teachable moments” that increase awareness without disrupting experience.
A broader solution will also require collaboration. Huang described how DBS’s partnership with law enforcement has allowed them to act quickly on intelligence, shut down mule accounts, and use those interactions to improve internal models. The feedback loop between enforcement and detection is critical. Yeardley highlighted the role of technology vendors in facilitating secure, cross-institution intelligence sharing without compromising customer privacy—particularly useful when tracking coordinated mule rings that operate across multiple banks.
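One deliberately simplified way to share intelligence without exposing customer identities is for participating banks to exchange one-way tokens rather than raw account numbers, so that only overlaps are learnable. The sketch below uses a salted hash for illustration only; production consortium schemes typically rely on stronger constructions such as private set intersection, and the salt and account IDs here are invented.

```python
import hashlib

def blind(identifier: str, shared_salt: bytes) -> str:
    """One-way token for an account identifier; banks exchange tokens, not IDs."""
    return hashlib.sha256(shared_salt + identifier.encode()).hexdigest()

# Hypothetical shared secret distributed to consortium members out of band.
SALT = b"consortium-secret"

# Each bank tokenises its flagged accounts locally with the same salt...
bank_a_flags = {blind(acc, SALT) for acc in ["AC-1001", "AC-2002"]}
bank_b_flags = {blind(acc, SALT) for acc in ["AC-2002", "AC-3003"]}

# ...so the overlap reveals coordinated mule accounts without revealing the rest.
overlap = bank_a_flags & bank_b_flags
print(len(overlap))   # 1: only AC-2002 was flagged by both banks
```

The appeal for cross-bank mule rings is that each institution learns only which of its own flagged accounts another bank has also flagged, and nothing about accounts it never held.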
The reality is that money mule networks are agile, decentralised, and increasingly powered by their own AI capabilities. As Huang bluntly noted, “the scammers are using AI better than us.” That imbalance must change. Banks need to treat AI not just as a compliance tool but as a strategic asset. That means investing in explainable, scalable, and interoperable systems that can evolve in tandem with criminal tactics.
Equally important is a shift in mindset. The line between AML and fraud is dissolving. Risk detection must be unified, behaviour-led, and context-aware. The same applies to internal operations—fraud teams, AML units, cybersecurity and data science must collaborate around shared goals, not operate in isolation.
Ultimately, the panel’s message was clear: fighting money mule activity requires a multi-dimensional approach. Technology, education, and cross-sector collaboration must work in concert. Banks must learn faster, intervene earlier, and connect the dots more effectively than the criminals they face.
It’s not just about keeping up—it’s about getting ahead.
This editorial is based on insights shared at the 2025 Regulation Asia Fraud & Financial Crime conference. For more on financial crime trends, AI innovation, and cross-border collaboration, follow Regulation Asia and other trusted industry forums.


