In the evolving landscape of digital transactions, most payment fraud prevention strategies are based on digital identity authentication.
They are designed to provide a certain level of confidence that the individual behind each transaction really is who they claim to be.
Once authenticated, authorized individuals will be approved to use digital payment mechanisms and complete their transactions.
Most payment fraud prevention products collect data to reach that goal, including users' personal attributes, device data, usernames and passwords, payment details, biometric markers, and more.
Fraudsters, however, have consistently found new, innovative ways to circumvent these measures, such as by faking their digital identity, using verified accounts, and breaching defenses.
This ongoing battle has driven many startups to seek the holy grail of an unimpeachable identity verification system.
As many of these startups have discovered, designing an infallible method for authenticating digital identities is not simple.
In the world of scalable fraud, the efficacy of digital identity authentication is close to zero, even when supported by AI and machine learning – technologies that have been used for over 10 years in the field of payment fraud prevention.
Getting Inside Scalable Fraud
Scalable fraud relies on industrialized, automated processes making repeated purchases with stolen payment methods at high velocity.
Speed is key here, as threat actors must work very fast to commit as much fraud as possible while they remain undetected – usually a window of one or two hours at most.
The problem with scalable fraud is not only the seismic effect of highly sophisticated fraud technology concentrated in a very short amount of time.
The issue also comes from the fact that fraudsters invest in verified accounts to mask the fraudulent nature of their transactions.
In such an environment, legacy products are outsmarted: they authenticate the digital identities of the verified account owners, approve most individual transactions, and open the door to massive fraud losses that can destroy merchants in no time.
To prevent scalable payment fraud, a different approach is needed – one where behaviour is the identity. The objective is to capture even the weakest signals indicative of a pattern – the very signature of fraud in formation – and prevent it before it hits.
Imagine 100 unique people purchasing an Apple gift card from a website. Each one would type at their own pace. Some would copy and paste their credit card information, while others would have it auto-fill. Still others would type in the numbers at the time of purchase.
Of those 100 customers, a large number would probably have their email address autofilled as well. If you factor in typing speed, whether a mouse or the tab key is used to move from one field to the next, and the type of device the customer uses to connect to the store, you may get 100 unique purchase behaviours.
By contrast, an automated process will be fairly consistent across all purchases, even with AI mimicking human behaviour.
Fraudsters can try to make each purchase appear unique by building delays into the script they are using, but creating an automated process that makes every transaction appear unique is difficult, time-consuming, and costly for the fraudster.
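To make this concrete, here is a minimal sketch of how such per-purchase behaviour could be reduced to a comparable fingerprint. The event format, field names, and example timings are illustrative assumptions, not a description of any particular product's telemetry.

```python
# Minimal sketch: reduce raw checkout keystroke events into a behavioural fingerprint.
# Event structure and field names are hypothetical; real capture formats will differ.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class KeyEvent:
    field: str         # e.g. "email", "card_number"
    timestamp_ms: int  # time of the keypress

def fingerprint(events: list[KeyEvent], pasted_fields: set[str]) -> dict:
    """Summarise a checkout session as a few behavioural features."""
    gaps = [
        b.timestamp_ms - a.timestamp_ms
        for a, b in zip(events, events[1:])
        if a.field == b.field  # only keystroke gaps within the same field
    ]
    return {
        "mean_keystroke_gap_ms": mean(gaps) if gaps else 0.0,
        "keystroke_gap_stdev_ms": pstdev(gaps) if len(gaps) > 1 else 0.0,
        "fields_pasted": len(pasted_fields),
        "total_fill_time_ms": events[-1].timestamp_ms - events[0].timestamp_ms if events else 0,
    }

# A genuine shopper types unevenly; a script replays nearly identical timings.
human = [KeyEvent("email", t) for t in (0, 180, 420, 610, 930, 1100)]
bot = [KeyEvent("email", t) for t in (0, 50, 100, 150, 200, 250)]
print(fingerprint(human, pasted_fields={"card_number"}))
print(fingerprint(bot, pasted_fields=set()))
```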
The gift-card example is simple. Now, imagine monitoring tens of thousands of these types of behaviours, eliminating irrelevant data elements, then combining and compounding them into hundreds of thousands of data elements – a bit like looking at the transaction across hundreds of thousands of pertinent dimensions.
The more granular the filter, the more accurately it will detect even the weakest and most improbable signal that a behavioural pattern is forming, and it will prevent fraud where, taken individually, each transaction may seem clean and ready for approval.
Instantly comparing hundreds of thousands of behavioural combinations leading up to the purchase will surface anomalies in the form of unexpected similarities, and will tag the fraud.
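One way to picture that comparison step is as a search for transactions that are improbably similar to one another across many behavioural dimensions. The sketch below uses a brute-force pairwise distance over made-up feature vectors with an arbitrary threshold; a production system would work over far more dimensions and use approximate nearest-neighbour search rather than comparing every pair.

```python
# Sketch: flag clusters of suspiciously similar behavioural fingerprints.
# Feature values and the distance threshold are illustrative assumptions.
from itertools import combinations
from math import dist

def near_duplicates(fingerprints: dict[str, list[float]], threshold: float = 0.05) -> set[str]:
    """Return transaction IDs whose feature vectors nearly coincide with another's.

    Genuine customers rarely collide across many dimensions; a script replaying
    the same flow collides constantly."""
    flagged: set[str] = set()
    for (id_a, vec_a), (id_b, vec_b) in combinations(fingerprints.items(), 2):
        if dist(vec_a, vec_b) < threshold:
            flagged.update({id_a, id_b})
    return flagged

# Normalised feature vectors, e.g. outputs of the fingerprinting step above.
transactions = {
    "t1": [0.31, 0.42, 0.10],  # human
    "t2": [0.72, 0.15, 0.55],  # human
    "t3": [0.50, 0.50, 0.20],  # scripted
    "t4": [0.50, 0.49, 0.20],  # same script, different "identity"
}
print(near_duplicates(transactions))  # {'t3', 't4'}
```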
What Types of Behaviour Indicate Scalable Fraud
In the realm of fraud prevention, certain behaviours have long been considered indicators of compromise (IOCs). A single IOC doesn’t usually mean that the user is acting fraudulently. However, as IOCs build upon one another, they can quickly reach a threshold indicating that a scalable fraud scheme is underway.
Most of these IOCs fall into one of three categories: data entry behaviour, navigation and interaction behaviour, and technology and device behaviour.
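The threshold idea can be illustrated with a toy scoring scheme. The IOC names, weights, and cut-off below are arbitrary assumptions chosen for the example; the point is only that one signal stays below the line while several combined cross it.

```python
# Toy IOC scoring: weights and threshold are assumptions for illustration only.
IOC_WEIGHTS = {
    "form_filled_too_fast": 2.0,
    "identical_typing_cadence": 3.0,
    "linear_scripted_navigation": 2.5,
    "outdated_browser": 1.0,
    "virtual_keyboard_on_routine_purchase": 1.5,
}

def fraud_score(observed_iocs: set[str]) -> float:
    return sum(IOC_WEIGHTS.get(ioc, 0.0) for ioc in observed_iocs)

def is_scalable_fraud(observed_iocs: set[str], threshold: float = 5.0) -> bool:
    # A single IOC stays below the threshold; several together cross it.
    return fraud_score(observed_iocs) >= threshold

print(is_scalable_fraud({"outdated_browser"}))                                  # False
print(is_scalable_fraud({"form_filled_too_fast", "identical_typing_cadence"}))  # True
```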
Data Entry Behaviour
Beyond autofill and copy-and-paste behaviours, there are several other data entry signals an AI system can use to detect fraud.
Completing checkout forms at an unusually fast pace may suggest the use of automated tools or scripts that are designed to quickly fill out and submit stolen information.
Genuine users typically take time to review and ensure the accuracy of their input, so a rapid completion time is often a red flag for automated or fraudulent activity.
Additionally, inconsistent typing patterns, such as very rapid or erratic keypresses, may also indicate the use of automation tools or scripts.
These patterns can reveal a lack of familiarity with the data being entered, further pointing to fraudulent behaviour. For example, a user inputting data in a hesitant or unfamiliar manner may be working from stolen information rather than their own personal details.
Consistent data entry behaviour across multiple user accounts indicates a single source or entity behind the transactions, which is most likely fraudulent.
Ultimately, the patterns found across multiple transactions create signals that flag scalable fraud.
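Two of the checks described above lend themselves to a short sketch: an implausibly fast form completion, and a typing cadence that is nearly identical across supposedly unrelated accounts. The thresholds are illustrative assumptions, not calibrated values.

```python
# Sketch of two data-entry checks: abnormally fast completion and shared cadence
# across accounts. Thresholds are illustrative assumptions.
from statistics import mean, pstdev

def suspiciously_fast(fill_time_ms: float, min_plausible_ms: float = 4000) -> bool:
    """Genuine shoppers rarely complete a full checkout form in a few seconds."""
    return fill_time_ms < min_plausible_ms

def shared_cadence(cadence_by_account: dict[str, float], rel_spread: float = 0.02) -> bool:
    """True when different accounts type with nearly the same rhythm (mean
    keystroke gap in ms), hinting at one operator or script behind them all."""
    values = list(cadence_by_account.values())
    if len(values) < 3:
        return False
    return pstdev(values) / mean(values) < rel_spread

print(suspiciously_fast(2500))                                             # True
print(shared_cadence({"acct_a": 101.0, "acct_b": 99.5, "acct_c": 100.2}))  # True
```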
Navigation and Interaction Behaviour
Another strong indicator of fraud is forms that are filled out in a manner that aligns with the sequential, mechanical actions of an automated script. Unlike natural human interactions, which may involve pauses, backtracking, or checking details, scripted navigation is linear and efficient, designed to minimize the time spent on each page and maximize the success rate of fraudulent transactions.
A user repeatedly going back and forth between form fields can indicate uncertainty about the information being entered, which is a common behaviour among fraudsters who are testing various stolen credentials.
This behaviour suggests that the individual may be unsure of the data they possess and is attempting to find the correct combination through trial and error.
Finally, suspicious behaviour can also be detected through inconsistent interactions with the website’s user interface.
For instance, quickly closing pop-ups, avoiding certain buttons, or skipping optional fields are actions that deviate from typical user behaviour.
These inconsistencies can be indicative of someone who is unfamiliar with the website, possibly because they are using it for the first time to commit fraud, or because they are employing automated tools that are not programmed to handle such elements.
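A simple way to quantify that linear, mechanical quality is to count backtracks in the order form fields were visited. The field names and the scripted/not-scripted rule below are assumptions for illustration; on its own, a perfectly linear flow would only be one weak signal among many.

```python
# Sketch: measure how "linear" a checkout flow was. Scripts tend to visit each
# field exactly once, in order; humans revisit fields and occasionally go back.
# Field names and the scripted/not-scripted rule are illustrative assumptions.
EXPECTED_ORDER = ["name", "email", "card_number", "expiry", "cvv", "submit"]

def backtrack_count(visited_fields: list[str]) -> int:
    """Count moves to an earlier field in the expected order."""
    positions = [EXPECTED_ORDER.index(f) for f in visited_fields if f in EXPECTED_ORDER]
    return sum(1 for a, b in zip(positions, positions[1:]) if b < a)

def looks_scripted(visited_fields: list[str]) -> bool:
    # Perfectly linear and no field revisited: a pattern humans rarely produce.
    return backtrack_count(visited_fields) == 0 and len(visited_fields) == len(set(visited_fields))

print(looks_scripted(["name", "email", "card_number", "expiry", "cvv", "submit"]))                 # True
print(looks_scripted(["name", "email", "name", "card_number", "cvv", "expiry", "cvv", "submit"]))  # False
```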
Technology and Device Behaviour
Fraudsters may use less common or outdated browsers to bypass certain security measures or to operate automated tools that are incompatible with modern browsers.
This behaviour can be an indication of an attempt to avoid detection by using technology that is less scrutinized, or to leverage vulnerabilities in older software.
Another behavioural IOC is the use of virtual keyboards or on-screen typing methods. While this behaviour can be legitimate in high-security environments, its use in routine transactions may suggest that the user is taking extra precautions to avoid detection, which is often associated with fraudulent activities.
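A crude version of the outdated-browser signal can be read from the User-Agent header. The version floors below are arbitrary assumptions, and a raw UA string is easily spoofed; real systems lean on much richer device intelligence, so treat this purely as an illustration.

```python
# Illustrative check for an outdated-browser signal from a User-Agent string.
# Version floors are arbitrary assumptions; real device intelligence goes far
# beyond the UA header, which is trivially spoofed.
import re

MIN_MAJOR_VERSION = {"Chrome": 110, "Firefox": 110}  # hypothetical "reasonably current" floors

def outdated_browser(user_agent: str) -> bool:
    for browser, floor in MIN_MAJOR_VERSION.items():
        match = re.search(rf"{browser}/(\d+)", user_agent)
        if match:
            return int(match.group(1)) < floor
    return False  # unknown browsers would be handled by other signals

print(outdated_browser("Mozilla/5.0 ... Chrome/74.0.3729.169 Safari/537.36"))  # True
print(outdated_browser("Mozilla/5.0 ... Chrome/126.0.0.0 Safari/537.36"))      # False
```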
Moving Forward
The above examples are well known and still work relatively well against the simple fraud found in mainstream eCommerce. Fraudsters, however, keep advancing in technological sophistication, and legacy fraud prevention products are already showing alarming signs of efficacy erosion as scalable fraud prepares to hit them like a tsunami.
When it comes to scalable payment fraud prevention, it seems that merchants need to adopt three key solution components:
- Highly granular filters. By compounding vast amounts of data into a representation of the payment landscape across hundreds of thousands of dimensions, these solutions create highly granular filters capable of detecting even the faintest signals of fraudulent behaviour patterns as they emerge.
- Dedicated models. By using a dedicated model for each merchant, focusing exclusively on relevant data, these systems ensure effective analysis and the highest accuracy of results.
- Real-time adaptation. It is also fundamental to stay ahead of the evolving fraud landscape. Highly granular, merchant-dedicated models only make sense if they can be rapidly rebuilt to keep pace with constantly evolving threats. A new flavour of AI, fully automated adaptive AI, enables new models to be developed and deployed in under an hour, allowing real-time adaptation to emerging threats – see the sketch after this list.
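As a rough illustration of the "dedicated model per merchant, rebuilt quickly" idea, the sketch below trains a separate anomaly detector on one merchant's own traffic and rebuilds it whenever fresh data arrives. It uses scikit-learn's IsolationForest as a stand-in for whatever proprietary models such a product would actually run; the features, retraining cadence, and one-hour target are assumptions drawn from the article, not a real pipeline.

```python
# Sketch: one model per merchant, retrained on that merchant's recent traffic.
# IsolationForest stands in for the proprietary models the article alludes to.
import numpy as np
from sklearn.ensemble import IsolationForest

class MerchantModel:
    def __init__(self, merchant_id: str):
        self.merchant_id = merchant_id
        self.model = IsolationForest(random_state=0)

    def retrain(self, recent_behaviour_vectors: np.ndarray) -> None:
        """Rebuild this merchant's model from its own recent traffic only."""
        self.model = IsolationForest(random_state=0).fit(recent_behaviour_vectors)

    def flag(self, behaviour_vector: np.ndarray) -> bool:
        """True when the behaviour looks anomalous for this merchant."""
        return self.model.predict(behaviour_vector.reshape(1, -1))[0] == -1

# Each merchant gets its own model, refreshed whenever new data comes in.
rng = np.random.default_rng(0)
typical_traffic = rng.normal(loc=0.5, scale=0.1, size=(500, 3))
shop = MerchantModel("merchant_42")
shop.retrain(typical_traffic)
print(shop.flag(np.array([0.50, 0.52, 0.48])))  # False: typical behaviour
print(shop.flag(np.array([0.95, 0.05, 0.99])))  # True: far from this merchant's norm
```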
While the perfect payment fraud prevention solution may never exist, treating behaviour as the new identity provides a legitimate path forward in fighting fraud and turning prevention into a profit center. When identity authentication shifts its focus to hundreds of thousands of combined behavioural patterns, more robust payment fraud prevention emerges.
As legacy methods of identity verification become increasingly vulnerable under the seismic pressure of fraud at scale, embracing fully automated adaptive AI offers a more resilient and adaptive approach, providing a safer environment for digital commerce.