
From deepfakes to artificial intelligence (AI)-powered biometric bypasses, the ways fraudsters are leveraging technology are evolving rapidly. This article explores how cybercrime is increasingly leaning on cutting-edge innovations, and how, in turn, financial institutions can bolster their long-term security and detection measures.
AI has taken the criminal underbelly of financial services by storm, just as it has the legitimate industry. In all too many cases, fraudsters seem to have embraced the technology more successfully than the institutions themselves. An underground network of marketplaces and AI-enabled dark web search engines has even emerged, providing bad actors with access to polymorphic malware creation engines, biometric bypasses, AI chatbots configured for fraud, and more.
The democratisation of generative AI (GenAI) adds yet another layer of complexity, thanks to the technology’s ability to dodge detection during phishing attempts by facilitating, among other things, the cloning of voices and video likenesses via deepfakes. Analysis from VISA shows that these kinds of social engineering attacks are increasingly targeted at alternative payment rails – such as account-to-account payments or crypto – since they are subject to fewer and less mature safeguards. This is of concern to institutions and consumers alike, because today’s highly connected, instant ecosystem depends upon the robustness of its payment rails.
In order to respond to this growing threat, financial institutions must modernise their security strategies, adopt advanced AI-enabled decisioning – across authentication, authorisation and real-time payments – and, ultimately, guarantee that the flow of business remains uninterrupted.
So, what should institutions do in the next 12 months to scale this mountain? How are consumer expectations and new payment rails impacting the landscape? What is the role of merchants? This article seeks to answer these questions and more, while considering how the financial services industry should respond to, and harness, AI in 2025 and beyond.
A tectonic fraud landscape: Cybercrime and AI
Despite what some sensational headlines may suggest, VISA is seeing fraud rates falling in Europe. This context is critical, because the payments ecosystem remains strong, and card payments continue to be one of the safest and most secure ways for customers and merchants to transact.
From a fraud and threat perspective, however, there are of course evolving risks. Most significant is the increasing ability of cybercriminals to share information over the internet – be it to spotlight money-making opportunities, successful fraud practices, or key vulnerabilities in value chains. While information sharing has taken place in the financial crime underworld for decades, the openness of today’s communication is a distinctly contemporary challenge, facilitated by a rising number of channels – be it Telegram, X (formerly Twitter), coding forums, or other dark web arenas. These networks can, in some cases, result in group activity, embroiling career fraudsters as well as individual opportunists.
But the bad actors are now benefitting from better tools as well as better organisation. From automated attacks, to bots, to machine learning models that hone their tactics over time, cybercriminals’ stock of strategies is increasingly rich and varied. In 2025 and beyond, the fraud landscape is likely to be defined by the recent global rollout of GenAI, an innovation that enables bad actors to transcend many of financial institutions’ traditional defences, and target consumers directly.
Perhaps the most challenging issue here is, again, access. While the majority of open-source AI models are bound up with safety frameworks that ensure privacy and the rule of law are observed, cybercriminals have worked out how to make tweaks for nefarious ends. These new, advanced models – with all legal boundaries razed – are perhaps the most challenging developments for both the private and public sectors to answer.
On the ground, the impact of GenAI – notwithstanding its benefits to operations, customer service and product development – is that it has enabled the democratisation of fraud capabilities. In other words, bad actors no longer need to be specialists; individuals, or lone wolves, as they are sometimes referred to, can now rent services via forums on the dark web (at negligible monthly rates) and use them to, for instance, produce effective email phishing content, distribute SMS scam campaigns, or automate the process for creating the command-and-control infrastructures that generate malware-dropper PDFs.
Historically, fraudsters have had no choice but to approach three or four specialists to access such services – from social engineering to coding and intrusion. Cybercriminals therefore once consolidated into organised groups, comprising departments specialising in areas such as malware development, social engineering, or money mule management. These departments would then coalesce around a range of targets, based on identified vulnerabilities.
The future of cybercrime, on the other hand, will be characterised by a more centralised access to a gamut of cutting-edge offensive tools; the automation of activity; and more lone wolf attacks. The potential for damage is steep.
Financial institutions vs fraudsters: A game of cat and mouse
The good news is that the financial sector’s payments ecosystem is conducting robust work, and wields equally sophisticated technologies, to combat the threat of cybercrime. This hyper-vigilance forces bad actors to constantly work around institutions’ defences and has obliged them to focus on the very end, and most vulnerable part, of the value chain: merchants and their employees, merchant terminals, and consumers themselves.
The preferred tactic, on the part of fraudsters, involves monitoring the engagement of campaigns, or click-through rates – be it for target populations, geographies, types of networks, or specific languages used in phishing outreach – and doubling down on the most successful iterations. When a tactic becomes tired or no longer yields adequate returns, the outfit pivots to the next-best strategy. This is a tried-and-tested mechanism that has been practised for decades; only the technologies used to action it have advanced.
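This monitor-and-pivot loop is, at heart, a simple bandit-style optimisation: exploit the variant with the best click-through rate, occasionally try alternatives, and move on when returns fade. A minimal sketch of the mechanic (campaign names and figures are entirely hypothetical, purely to illustrate the pattern):

```python
import random

# Hypothetical phishing campaign variants; [clicks, sends] tallies.
# All names and numbers are illustrative, not real data.
campaigns = {"sms_en": [0, 0], "email_fr": [0, 0], "voice_de": [0, 0]}

def pick_campaign(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best click-through rate, sometimes explore."""
    tried = [c for c, (clicks, sends) in campaigns.items() if sends > 0]
    if random.random() < epsilon or not tried:
        return random.choice(list(campaigns))
    return max(tried, key=lambda c: campaigns[c][0] / campaigns[c][1])

def record(campaign, clicked):
    """Update the tallies after each send."""
    campaigns[campaign][1] += 1
    campaigns[campaign][0] += int(clicked)

# As one variant's engagement outstrips the rest, the loop doubles down on it.
record("sms_en", True)
record("email_fr", False)
best = pick_campaign(epsilon=0.0)  # exploits the highest-CTR variant
```

The same loop, of course, describes legitimate A/B testing; what has changed for fraudsters is the scale and automation that GenAI tooling brings to it.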
The recommended return-serve from VISA is three-pronged. First, all good actors must appreciate that there are more ways to pay, and be paid, globally than ever before. While the multi-party system has boosted competition and service quality, it has also increased the attack surface. As such, multi-rail strategies are imperative – from risk-based authentication and real-time scoring for card payments and other rails, to time-tested machine learning-based monitoring models.
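In practice, risk-based authentication reduces to scoring each transaction against contextual signals in real time and stepping up friction only when the score warrants it. A minimal sketch, with entirely hypothetical signals, weights, and thresholds (a production model would be learned, not hand-weighted):

```python
# Hypothetical risk-based authentication sketch. Signal names, weights,
# and thresholds are illustrative only, not a production model.
WEIGHTS = {
    "new_device": 0.4,
    "unusual_geo": 0.3,
    "high_amount": 0.2,
    "alt_rail": 0.1,   # account-to-account / crypto rails carry fewer safeguards
}

def risk_score(signals: dict) -> float:
    """Sum the weights of all signals that fire, clipped to [0, 1]."""
    return min(1.0, sum(w for name, w in WEIGHTS.items() if signals.get(name)))

def decide(signals: dict, step_up=0.4, decline=0.8) -> str:
    """Approve low-risk traffic friction-free; escalate only above thresholds."""
    score = risk_score(signals)
    if score >= decline:
        return "decline"
    if score >= step_up:
        return "step_up"   # e.g. trigger additional authentication
    return "approve"

decision = decide({"new_device": True, "unusual_geo": True})
```

The point of the pattern is that the vast majority of genuine transactions pass without friction, while the thresholds can be tuned per rail, geography, or business line.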
Second, it is vital that, just as every cybercrime is tailored to the victim, every solution is tailored to the cybercrime. Indeed, banks, merchants and payment service providers must have both global capabilities that run across rails, as well as bespoke measures to respond to and manage their own environments, business lines, customer bases, and vulnerabilities. Mechanisms need to be leveraged that test an organisation’s unique payments security setup.
Finally, and most importantly, these tailored solutions must not stand alone – they should be layered, to cumulative effect. It is useful here to think of security concentrically – from network segmentation measures, to varying trust controls across the employee base, which exist within the perimeter; to all the anti-money laundering (AML), Know-Your-Customer (KYC), and anti-fraud strategies and technologies that extend beyond it. The payment lifecycle should be protected in a similar way, layering security solutions across account-to-account transactions, business-to-business (B2B), business-to-consumer (B2C), card, crypto, and so on. If a successful fraud does take place, compensation measures for merchants and end-users should be simple and automated.
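The concentric, layered approach above can be pictured as a pipeline in which a payment must clear each control in turn, and the defence reports which layer stopped it. A hypothetical sketch (layer names and checks are invented for illustration):

```python
from typing import Callable, List, Tuple

# Each layer is a named check; a payment must clear every one in order.
# Layer names and logic are hypothetical, purely to show the layering pattern.
Layer = Tuple[str, Callable[[dict], bool]]

LAYERS: List[Layer] = [
    ("network_segmentation", lambda p: p.get("origin") in {"branch", "gateway"}),
    ("kyc_check",            lambda p: p.get("kyc_verified", False)),
    ("aml_screening",        lambda p: not p.get("sanctioned", False)),
    ("fraud_model",          lambda p: p.get("fraud_score", 1.0) < 0.8),
]

def screen(payment: dict) -> Tuple[bool, str]:
    """Return (passed, failing_layer_or_'clear'); stop at the first failed layer."""
    for name, check in LAYERS:
        if not check(payment):
            return False, name
    return True, "clear"

ok, layer = screen({"origin": "gateway", "kyc_verified": True, "fraud_score": 0.2})
```

The cumulative effect is the point: no single layer need be perfect, because a fraud that slips past one control still has to defeat every layer behind it.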
As we look toward 2025 and beyond, GenAI must become a key part of institutions’ arsenals, to inform and deepen transaction risk scoring – for both the payee and payer – as well as facilitate real-time authentication models. It may also be deployed for internal operations – enhancing the speed at which development, coding, and the training of account-attack-intelligence models, can take place. All these techniques provide layered security for stakeholders.
Securing the long-term health of payments
Despite the challenges discussed in this article, card payments remain one of the more secure ways to send money globally. Given today’s rapidly shifting, highly sophisticated, AI-enabled fraud landscape, a multi-rail, multi-layered and, most importantly, tailored approach to payments security is a sure-fire underpinning for long-term resilience.
Payment rails will no doubt continue to evolve over the next five or ten years. Whatever form they take, it is the role of each stakeholder to stay one step ahead of bad actors – taking a more holistic, AI-based view of consumer behaviour.