Are we heading towards a world of AI-driven malware?

We’re no strangers to ‘AI in cybersecurity’, but what about when it’s in the wrong hands?
20 June 2019


Artificial Intelligence (AI) and Machine Learning (ML) technologies have been eagerly embraced across industries, and the cybersecurity sector is no different.

Here, some vendors (or at least their marketing departments) say that AI-driven ‘smart detection’ is the only way to keep pace with the proliferation of new threats.

But what we hear less about is how threat actors on the other side of the barbed-wire fence could be about to deploy the same types of technologies to their own advantage.

Among the more headline-grabbing examples is the use of convincing AI-generated deepfake videos in sophisticated spear-phishing attacks, the next evolutionary stage of impersonation attacks.

To similar ends, cybercriminals could use AI to scan social media sites, identifying prime target individuals; or turn the technology to the rapid creation of personalized spam emails.

Captcha solving, meanwhile, is already proving child’s play for machine learning.

However, according to Malwarebytes’ new white paper When Artificial Intelligence Goes Awry, AI technology could soon bring us into the unwelcome age of ‘malware 2.0’.

While there are currently no examples of AI-powered malware ‘in the wild’, says Malwarebytes, if the technology opened new avenues for profit, “threat actors will be standing in line to buy kits on the dark market or use GitHub open source […]”

Malware that adapts and learns with AI

The biggest concern regarding AI’s use in malware is that new strains would be able to learn from detection events. If a strain of malware could determine what caused its detection, it could avoid that same behavior or characteristic the next time around.

If a worm’s code was the reason it was detected, for example, automated malware authors could rewrite it. If behavioral attributes gave it away, randomness could be added to foil pattern-matching rules.

The use of AI could also refine a technique already used by certain Trojan variants, which create new versions of their own files to fool detection routines.
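A minimal sketch shows why such mutation defeats naive detection. Exact-match signatures (here, a file hash) stop recognizing a sample as soon as any bytes change, so a strain that rewrites or pads itself between generations evades them trivially. All names and the payload bytes below are hypothetical stand-ins for illustration, not real malware behavior.

```python
import hashlib
import os

def signature(data: bytes) -> str:
    """Naive detection signature: a hash of the file contents."""
    return hashlib.sha256(data).hexdigest()

# A harmless stand-in for a known-bad file.
original = b"example payload bytes"
known_signatures = {signature(original)}

# Appending random padding changes the hash on every "generation",
# so an exact-match signature no longer recognizes the file.
variant = original + os.urandom(8)

assert signature(original) in known_signatures
assert signature(variant) not in known_signatures
```

This is why modern engines lean on behavioral and heuristic detection rather than exact signatures, and why the randomization the report describes targets pattern-matching rules next.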

IBM and DeepLocker malware

While some believe AI-driven malware could be years away (if on the horizon at all), IBM has researched how existing AI models can be combined with current malware techniques to create ‘challenging new breeds’ in a project dubbed DeepLocker.

Comparing its potential capabilities to a “sniper attack” as opposed to traditional malware’s “spray and pray” approach, IBM said DeepLocker was designed for stealth: “It flies under the radar, avoiding detection until the precise moment it recognizes a specific target.”

In this case, a deep neural network (DNN) model is used to unlock the attack only when a specific set of trigger conditions is met; otherwise, the payload remains out of sight.

DeepLocker leverages the “black-box” nature of the DNN model to conceal the trigger condition: a simple “if this, then that” check is transformed into a deep convolutional network that is very hard to decipher. In addition, the concealed trigger condition itself is converted into a “password” or “key” that is required to unlock the attack payload.
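The key idea can be sketched with a much simpler one-way function in place of a neural network. In the toy below, the trigger condition (a hypothetical target attribute such as a hostname) is never stored; only a key derived from it is, so the payload decrypts when, and only when, the observed environment reproduces that attribute. DeepLocker itself derives the key from a DNN’s output rather than a plain hash; this is an illustrative assumption, not IBM’s implementation.

```python
import hashlib

def derive_key(attribute: bytes) -> bytes:
    # One-way derivation: an analyst holding the key still cannot
    # read the trigger condition back out of it.
    return hashlib.sha256(attribute).digest()

def xor(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher; applying it twice with the same key
    # restores the original bytes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Build time: lock a (here harmless) payload under a key derived
# from the intended target's attribute.
target_attribute = b"target-hostname"
locked = xor(b"payload unlocked", derive_key(target_attribute))

# Run time: only the matching environment yields the right key.
assert xor(locked, derive_key(b"target-hostname")) == b"payload unlocked"
assert xor(locked, derive_key(b"some-other-host")) != b"payload unlocked"
```

The “sniper” quality follows directly: on any machine that doesn’t match, the locked blob is just opaque bytes, with nothing for a scanner or sandbox to detonate.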

Should you prepare for AI malware?

With DeepLocker demonstrating that the capabilities are there, it’s not hard to imagine that other forms of AI-driven malware could be in production, suggests Malwarebytes.

Newly-acquired traits as a result of the technology could include the ability to change behavior or characteristics based on the environment, to delete itself when it suspects it is being analyzed, and to deploy malicious activity only on specified systems.

While there is no concrete timescale for when, or if, we will start waging war against AI-weaponized malware, the advice is the same as for any other speculative threat (and just as limited): be proactive.

“As long as we’re still only speculating on the exact form and extent of future threats, however, it will be difficult to take specific measures,” reads the report. “But we are seeing some forward-thinking developments by governments and cybersecurity organizations.”