Threat Actors Seen Deploying AI-Written Malware
Written by Karolis Liucveikis
In HP Wolf Security's Threat Insights Report for September 2024, security researchers detailed a targeted attack in which the threat actors used generative artificial intelligence (GenAI) to write malware code. This trend has grown since AI tools like ChatGPT were released to the public.
In June 2024, security researchers discovered an email, written in French, containing a simple HTML file. The file masqueraded as an invoice and, if opened, would prompt the user to enter a password.
Initial analysis of the HTML revealed that it was the vehicle for delivering malicious JavaScript code to the victim in what is commonly known as an HTML smuggling attack. These attacks function as a type of drive-by-download attack in which malicious code is "smuggled" inside HTML attachments or webpages. The malicious payload is decoded and dropped on the victim's machine when the HTML file is opened in a browser.
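To illustrate the general technique (not the attacker's actual code), an HTML smuggling page typically ships its payload as an inert base64 string and reassembles it client-side when the page is opened. The payload, filename, and decode step below are hypothetical:

```javascript
// Hypothetical smuggled payload, shipped as an inert base64 string so that
// email gateways scanning the HTML attachment see only harmless-looking text.
const smuggledB64 = 'aGVsbG8sIHdvcmxk'; // stands in for real payload bytes

// When the page is opened, the script decodes the string back into raw bytes.
const payload = Buffer.from(smuggledB64, 'base64');

// In a browser, the script would then force a download of those bytes,
// roughly like this (commented out so the sketch runs under Node):
//   const blob = new Blob([payload]);
//   const link = document.createElement('a');
//   link.href = URL.createObjectURL(blob);
//   link.download = 'invoice.zip';
//   link.click();

console.log(payload.toString('utf8')); // decodes to "hello, world" here
```

Because the payload only exists as text until the browser decodes it, there is no attachment in a form that traditional gateway scanners can easily inspect.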
This HTML smuggling attack differed in several ways from what has come to be expected in such attacks. Firstly, the payload stored inside the HTML file was not a password-protected archive; instead, the encryption was implemented within the JavaScript code itself. The attacker implemented AES decryption in JavaScript without error, something rarely seen, meaning the payload could only be decrypted once the correct password was entered.
Because researchers did not have access to the email body, where it is assumed the password would have been given to the victim to kick off the infection chain, they brute-forced the password instead.
Researchers described the second notable difference:
The decrypted archive contains a VBScript file. When run, the infection chain starts and ultimately deploys AsyncRAT, a remote access trojan (RAT). The VBScript writes various variables to the Windows Registry, which are reused later in the chain. A JavaScript file dropped into the user directory is then run by a scheduled task. This script reads the first variable, a PowerShell script, from the Registry and injects it into a newly started PowerShell process. The PowerShell script then makes use of the other Registry variables and runs two more executables, which start the malware payload after injecting it into a legitimate process.
The malware payload, AsyncRAT, is an open-source remote access trojan that is easy for threat actors to get their hands on. Initially, it was used non-maliciously for educational purposes in training red and blue cybersecurity teams. Soon, malicious threat actors began using it to gain remote access to victims' machines, either to steal sensitive data or to deliver other malware payloads.
The final difference that makes this attack stand out emerged when the scripts were analyzed: the code was not obfuscated. The attacker even left comments throughout, describing what each line does, even for simple functions.
Genuine code comments in malware are rare because attackers want to make their malware as challenging to understand as possible. This led researchers to believe the malicious code was generated by an AI tool.
Researchers stated,
Based on the scripts’ structure, consistent comments for each function and the choice of function names and variables, we think it’s highly likely that the attacker used GenAI to develop these scripts. The activity shows how GenAI is accelerating attacks and lowering the bar for cybercriminals to infect endpoints.
Growing Usage of AI to Develop Malware
The above attack is by no means an isolated incident. Despite safeguards implemented by tech companies to prevent such abuse, threat actors are increasingly using AI tools to develop malware. In some instances, to circumvent these safeguards, threat actors have developed their own AI tools based on open-source large language models that ship with no safeguards at all.
In January 2024, the United Kingdom's National Cyber Security Centre (NCSC) warned that:
- AI will likely intensify cyberattacks in the next two years, particularly through the evolution of current tactics.
- Both skilled and less skilled cyber threat actors, including state and non-state entities, are currently utilizing AI.
- AI enhances reconnaissance and social engineering, making them more effective and difficult to detect.
- Sophisticated AI use in cyber operations will mainly be limited to actors with access to quality data, expertise, and resources until 2025.
- AI will make cyberattacks against the UK more impactful by enabling faster, more effective data analysis and training of AI models.
- AI lowers entry barriers for novice cybercriminals, contributing to the global ransomware threat.
- By 2025, the commoditization of AI capabilities will likely expand access to advanced tools for both cyber criminals and state actors.
To highlight these concerns, in April 2024, reports emerged of a malicious PowerShell script being used to push info-stealing malware to victims. The script appeared to have been written by an AI tool: researchers were able to reproduce a similar-looking script using AI prompts, despite the guardrails implemented by the AI tool in question.
These are just two instances of many where the specter of AI involvement in malware development has popped up, and it will continue to do so for the near to medium term.