In the movie Identity, Pruitt Taylor Vince’s character whispers a verse from the poem Antigonish: “As I was going up the stairs, I met a man who wasn’t there. He wasn’t there again today. I wish, I wish he’d go away”.
How do you detect and identify criminal behaviour? How do cybercriminals masquerade their actions and make themselves invisible to security systems? Some solutions claim the answer is behavioural profiling.
The term ‘behavioural profiling’ is most commonly associated with airport security and CSI episodes. This interpretation fits quite well with the Wikipedia definition: “Offender profiling, also known as criminal profiling, is a behavioural and investigative tool that is intended to help investigators to accurately predict and profile the characteristics of unknown criminal subjects or offenders. Offender profiling is also known as criminal profiling, criminal personality profiling, criminological profiling, behavioural profiling or criminal investigative analysis”. In recent years, the term ‘behavioural profiling’ has emerged in the online security world, but is it really ‘behavioural profiling’ per se? Sadly, the answer is no. Real behavioural profiling is focused on identifying a potential criminal while computer behavioural profiling systems are focused on identifying normal user patterns. Quite a difference!
Behavioural profiling in the online world is a tough task. The suspected cybercriminal cannot be seen visually and/or analysed for a long period of time, which is not the case in the physical world. This means online behaviour profiling is purely based on a limited set of user actions collected by detection systems. That’s why current detection systems have opted to analyse normal user behaviour, define a normal user profile and then raise a red flag if an action outside of that “normal” profile occurs. This approach should sound familiar: it’s called whitelisting.
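The whitelisting approach described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual algorithm: it builds a “normal” profile from a user’s historical transfer amounts and flags anything far outside it, using a simple z-score test with an assumed threshold.

```python
import statistics

def build_profile(amounts):
    """Build a toy 'normal' profile from historical transfer amounts."""
    return {"mean": statistics.mean(amounts), "stdev": statistics.pstdev(amounts)}

def is_anomalous(profile, amount, threshold=3.0):
    """Raise a red flag for an amount outside the 'normal' profile
    (a simple z-score test; the threshold is an illustrative choice)."""
    if profile["stdev"] == 0:
        return amount != profile["mean"]
    z = abs(amount - profile["mean"]) / profile["stdev"]
    return z > threshold

history = [120.0, 95.0, 140.0, 110.0, 130.0]
profile = build_profile(history)
print(is_anomalous(profile, 125.0))   # within the usual range -> False
print(is_anomalous(profile, 5000.0))  # far outside the profile -> True
```

The weakness is already visible here: the system only knows what “normal” looks like, so anything a criminal can make look normal sails straight through.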
The problem with whitelisting, however, is that cybercriminals are aware of its existence and have found clever ways to circumvent it. Following are just three examples of behavioural profiling evasion techniques that cybercriminals have successfully executed.
The first technique, which was the subject of a recent investigation by Trusteer’s innovation team and was covered in Trusteer’s blog, involves the use of stolen credentials in an unusual manner. Instead of logging in with the stolen credentials and committing a fraudulent transaction, the criminals access the compromised accounts multiple times but do not make any fund transfers. What are they up to? This type of behaviour clearly shows that cybercriminals are aware of “behavioural profiling” as well as device profiling systems. Since the criminals are using their own device to fraudulently access an online bank account, they need a technique that does not trip any fraud detection wires. A new device that immediately adds a payee to an account or tries to send funds will undoubtedly be scrutinised. By accessing an account multiple times with the same device and only performing low-risk actions, the criminals are able to “familiarise” behavioural and device profiling systems with their device and actions. When a fraudulent transfer is ultimately initiated during a fifth session, the device and its behaviours are not identified as suspicious.
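The familiarisation trick exploits the way naive device profilers score risk. The following sketch, with an assumed threshold and illustrative risk labels, shows how a few benign-looking sessions are enough to make an attacker’s device look trusted before the transfer is attempted.

```python
# A naive device-familiarity scorer of the kind such systems use:
# each session observed from a device lowers that device's risk.
# The 'familiar_after' threshold is an illustrative assumption.

class DeviceProfiler:
    def __init__(self, familiar_after=3):
        self.seen = {}  # device_id -> number of sessions observed
        self.familiar_after = familiar_after

    def record_session(self, device_id):
        self.seen[device_id] = self.seen.get(device_id, 0) + 1

    def risk(self, device_id):
        """High risk for a new device, low risk once 'familiarised'."""
        count = self.seen.get(device_id, 0)
        return "high" if count < self.familiar_after else "low"

profiler = DeviceProfiler()
# The fraudster logs in four times, performing only low-risk actions.
for _ in range(4):
    profiler.record_session("attacker-laptop")
# By the fifth session the device no longer looks suspicious.
print(profiler.risk("attacker-laptop"))  # -> low
print(profiler.risk("never-seen-box"))   # -> high
```

Nothing about the low-risk sessions is fraudulent in itself, which is precisely why a profile built only on “normal” observations can be trained by the attacker.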
While the above criminal tactic is performed via a new device, the second evasion technique goes one step further and uses the actual victim’s device. The idea, however, is the same. As long as the device is not flagged as suspicious, most fraudulent transactions will go unnoticed. Using malware with RDP/VNC (Remote Desktop Protocol / Virtual Network Computing), the attacker mimics the account holder’s typical transaction amounts, which are easily viewed from a transaction history screen. This allows cybercriminals to commit fraud without raising any suspicion. It is nearly impossible to identify this type of fraud using behavioural and device profiling platforms. Malware developers have also come up with a clever plugin that helps them learn a user’s normal behaviour. Citadel, a popular financial Trojan, enables criminals to capture video from a victim’s device in order to study a user’s behaviour patterns, including transaction sums, clickstreams and more.
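The “mimic typical amounts” idea can be illustrated in a few lines. This is a hedged sketch of the evasion concept only, with invented function names; it simply keeps a transfer within one standard deviation of the amounts visible on the victim’s transaction history screen.

```python
import random
import statistics

def blend_in_amount(history, rng=None):
    """Pick a transfer amount resembling the victim's usual transactions
    (as read from the transaction-history screen). Purely illustrative of
    the evasion idea -- not code from any real attack toolkit."""
    rng = rng or random.Random()
    mean = statistics.mean(history)
    spread = statistics.pstdev(history) or 1.0
    # Stay within one standard deviation of the typical amount so the
    # transfer does not stand out from the victim's normal behaviour.
    return round(rng.uniform(mean - spread, mean + spread), 2)

victim_history = [250.0, 300.0, 275.0, 260.0]
amount = blend_in_amount(victim_history)
print(amount)  # a value close to the victim's usual amounts
```

Because the resulting amount sits squarely inside the victim’s own profile, an amount-based anomaly check has nothing to flag.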
The third technique also relies on malware. However, this approach uses an automatic script to transfer funds to predefined mule accounts from the victim’s device. Financial malware is capable of injecting complete scripts into a user’s session without the victim noticing any change in browser behaviour. These types of scripts run post login and are configured to take over a session and initiate a fraudulent transaction. This eliminates the need to steal credentials and remotely access the victim’s account, as well as the manual work involved in running RDP/VNC malware. Some behavioural profiling systems can identify these scripts by performing various tests on the activity performed on the transaction page. The most common such test is to analyse how fast the transaction page was filled in. For example, humans cannot fill in all the transaction fields in less than a second, which is exactly what a script does. To appear more “human” and evade detection, fraudsters have incorporated a ‘slow fill’ function in their malware. This function inserts a random pause of between 0.1 and 2 seconds after each character input to make the behaviour of the malware appear “normal”.
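Both sides of this cat-and-mouse game can be sketched briefly. Below is an illustrative version of the fill-speed check some profiling systems apply, together with a ‘slow fill’ routine of the kind described above; the function names, the one-second-per-field threshold and the `field_setter` callback are assumptions for the sketch, not real malware code.

```python
import random
import time

def looks_automated(fill_seconds, field_count, min_seconds_per_field=1.0):
    """A fill-speed check: a sub-second fill of several fields is a
    strong script signal, since humans cannot type that fast."""
    return fill_seconds < field_count * min_seconds_per_field

def slow_fill(field_setter, text, rng=None):
    """Type 'text' one character at a time with a random 0.1-2 second
    pause after each keystroke, mimicking human form-filling.
    'field_setter' stands in for whatever injects a character
    into the page."""
    rng = rng or random.Random()
    for ch in text:
        field_setter(ch)
        time.sleep(rng.uniform(0.1, 2.0))

print(looks_automated(0.05, 5))  # instant script fill -> True
print(looks_automated(14.0, 5))  # human-paced fill    -> False
```

With the random pauses in place, the scripted fill takes roughly as long as a human one, and the timing test above no longer fires.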
To accurately and decisively detect these attack techniques, multiple security metrics must be collected, correlated and analysed. These include the presence of malware and RDP access during the online banking session, evidence of stolen credentials, risk factors and account activity patterns. This holistic view is the only way to achieve true behavioural profiling and uncover “the man who wasn’t there”.
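The correlation approach argued for here can be sketched as a simple weighted combination of independent risk signals rather than a single behavioural profile. The signal names and weights below are illustrative assumptions, not any real product’s scoring model.

```python
# Combine several independent risk signals into one session score.
# Weights are illustrative; a real system would tune and evidence them.
WEIGHTS = {
    "malware_detected": 0.5,
    "rdp_vnc_in_session": 0.25,
    "credentials_known_stolen": 0.4,
    "new_payee_added": 0.2,
    "out_of_profile_amount": 0.2,
}

def session_risk(signals):
    """Sum the weights of the signals that fired, capped at 1.0."""
    score = sum(WEIGHTS[name] for name, fired in signals.items() if fired)
    return min(score, 1.0)

session = {
    "malware_detected": True,
    "rdp_vnc_in_session": True,
    "credentials_known_stolen": False,
    "new_payee_added": False,
    "out_of_profile_amount": False,
}
print(session_risk(session))  # 0.5 + 0.25 = 0.75
```

The point of the sketch is that no single signal needs to look abnormal: a session whose amounts, device and timing all appear “normal” can still score high once malware presence and RDP access are correlated with it.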