DeepSeek Suffers From The Rise Of LLMJacking
Written by Karolis Liucveikis
According to a recent report by Sysdig, threat actors employing a new hacking technique known as LLMjacking are actively targeting DeepSeek's latest Large Language Model (LLM) and those using the model for their specific GenAI needs.
LLMjacking was first discovered by Sysdig researchers in mid-2024, and they define the attack as follows:
LLMjacking is a term coined by the Sysdig Threat Research Team (TRT) to describe an attacker using stolen credentials to gain access to a victim’s large language model (LLM). In essence, it is the act of hijacking an LLM…Any organization that uses cloud-hosted LLMs is at risk of LLMjacking. Attackers target LLMs for a variety of reasons, spanning from relatively harmless things like using the LLM for personal chats and image generation, to malicious code optimization and tool development and potentially even harmful activities like poisoning models or stealing sensitive information.
LLMjacking is similar to cryptojacking in that the threat actor hijacks a victim's resources, be it access to an LLM or hardware, to carry out their own tasks. In cryptojacking, attackers install malware that abuses hardware like CPUs and GPUs to mine cryptocurrency on the threat actor's behalf.
In LLMjacking, access to cloud-based LLM resources is used to carry out various tasks, but at the end of the day, the victim whose LLM access was hijacked has to pay the usage bill. With Sysdig reporting that consumption fees for a service like Amazon Bedrock can run upwards of $46,000 per day, the victim may have a nasty surprise waiting around the corner in the form of a massive bill.
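To put that figure in context, a rough back-of-the-envelope calculation shows how quickly per-token pricing compounds once an ORP's users start hammering a hijacked account. All prices and volumes below are illustrative assumptions, not figures from the Sysdig report:

```python
# Back-of-the-envelope estimate of a victim's daily LLMjacking bill.
# Every number below is an illustrative assumption, not a measured value.

PRICE_PER_1K_INPUT_TOKENS = 0.015   # USD, assumed rate for a premium hosted model
PRICE_PER_1K_OUTPUT_TOKENS = 0.075  # USD, assumed rate

requests_per_minute = 400           # assumed sustained abuse through a proxy
input_tokens_per_request = 1_500    # assumed average prompt size
output_tokens_per_request = 800     # assumed average completion size

cost_per_request = (
    input_tokens_per_request / 1_000 * PRICE_PER_1K_INPUT_TOKENS
    + output_tokens_per_request / 1_000 * PRICE_PER_1K_OUTPUT_TOKENS
)
daily_cost = cost_per_request * requests_per_minute * 60 * 24

print(f"Cost per request: ${cost_per_request:.4f}")   # -> $0.0825
print(f"Estimated daily bill: ${daily_cost:,.0f}")    # -> $47,520
```

Under these assumed rates and volumes, the hijacked account racks up roughly $47,000 a day, in line with the figure cited above.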
When cryptojacking emerged in 2017, there was considerable debate about the harm it caused, as it was falsely believed at the time that using a small percentage of system resources to mine cryptocurrency did no damage. No such illusion is possible with LLMjacking, given the costs imposed on the victim.
There are other reasons why LLMjacking should be taken seriously; Sysdig posits the following scenarios:
- Poisoning data: An attacker can poison your data by intentionally feeding incorrect information into your model so legitimate requests are given incorrect answers. This has not been reported publicly but is a real risk that could damage your business's operations or reputation.
- Stealing sensitive information: An attacker can steal sensitive or proprietary information. This type of LLM attack has not yet been reported publicly, but it is known that some organizations are using LLMs so employees can query large amounts of internal data faster. By asking the right questions, an attacker can get the sensitive information they desire to engage in further illicit activities.
- Conducting nefarious activities: Sysdig TRT has identified attackers conducting a variety of nefarious activities with access to victim LLMs. The attacker can use their free access to your LLM for many reasons, such as creating social engineering drafts, developing or modifying malicious code or tools, or otherwise engaging in behavior that goes against the ethical codes of conduct that may have gotten their access banned elsewhere.
DeepSeek LLMJacking
DeepSeek security concerns are mounting: just last week, it was reported that a DeepSeek database storing sensitive information had been left easily accessible to potential threat actors due to a misconfiguration. Now, DeepSeek's customers may also be footing the bill for illicit use of the company's latest LLMs.
Sysdig discovered that it took LLMjackers only a few days after the release of the DeepSeek-V3 model to begin abusing stolen access to it. DeepSeek-R1 fared even worse: it was released on January 20, and attackers had their proverbial fingers in the pie the very next day.
To carry out a successful attack, threat actors need more than just stolen credentials; they also need scripts to verify that those credentials do, in fact, grant access to the desired model. The verified credentials are then incorporated into an "OAI" (OpenAI) reverse proxy (ORP), which bridges the user and the LLM and provides the attackers with a layer of operational security.
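As an illustration of that verification step, the sketch below probes an OpenAI-compatible API with a candidate key and reports whether it authenticates. The base URL matches DeepSeek's documented endpoint, but the overall flow is an assumption about how such checker scripts typically work; defenders can apply the same logic to confirm whether one of their own exposed keys is still live:

```python
# Minimal sketch of a credential checker: probe an OpenAI-compatible
# /models endpoint with a candidate key and report whether it is accepted.
# The endpoint and example key are illustrative assumptions.
import requests

API_BASE = "https://api.deepseek.com"  # assumed OpenAI-compatible base URL

def key_is_live(api_key: str) -> bool:
    """Return True if the endpoint accepts the key, False otherwise."""
    resp = requests.get(
        f"{API_BASE}/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    # 200 means the key authenticates; 401/403 mean revoked or invalid.
    return resp.status_code == 200

if __name__ == "__main__":
    print(key_is_live("sk-hypothetical-example-key"))
```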
Regarding ORPs, Dark Reading wrote,
The apparent forefather of ORPs, from which the name derives, was published on April 11, 2023. It has since been forked and configured on numerous occasions to incorporate new stealth features. Newer versions have incorporated password protections and obfuscation mechanisms — like making its website illegible until users disable CSS in their browsers — and eliminated prompt logging, covering up attackers' footsteps as they use the models. Proxies are further protected by Cloudflare tunnels, which generate random and temporary domains to shield the ORPs' actual virtual private server (VPS) or IP addresses.
Sysdig researchers soon found that DeepSeek support had been added to an ORP used in attacks that had previously targeted other cloud-based LLM services. They also discovered that ORPs had been populated with DeepSeek API keys; one such ORP had 55 DeepSeek API keys at the time of discovery.
Since credential theft is the crucial step that grants access to the LLM services, defending against AI service account compromise is the primary form of mitigation. This primarily involves securing access keys, implementing strong identity management, monitoring for threats, and enforcing least-privilege access.
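As one sketch of what the monitoring piece can look like in practice, the toy check below flags any day whose token consumption spikes far above a trailing baseline. The alert threshold and the in-memory usage history are assumptions for illustration; a real deployment would pull these figures from the provider's billing or usage APIs:

```python
# Toy anomaly check for LLM usage monitoring: flag any day whose token
# consumption exceeds a multiple of the trailing average. Threshold and
# sample data are illustrative assumptions.
from statistics import mean

SPIKE_FACTOR = 5.0  # assumed: alert when usage hits 5x the trailing baseline

def find_spikes(daily_tokens: list[int], window: int = 7) -> list[int]:
    """Return the indices of days whose usage dwarfs the trailing average."""
    spikes = []
    for i in range(window, len(daily_tokens)):
        baseline = mean(daily_tokens[i - window : i])
        if baseline and daily_tokens[i] > SPIKE_FACTOR * baseline:
            spikes.append(i)
    return spikes

# Hypothetical history: a quiet week, then a hijacked key on day 7.
usage = [12_000, 9_500, 11_200, 10_800, 13_000, 9_900, 12_500, 2_400_000]
print(find_spikes(usage))  # -> [7]
```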
LLMjacking attacks are evolving, and so are the motivations driving them. Ultimately, attackers will continue to seek access to LLMs and find new malicious uses for them. It is highly recommended that users and organizations be prepared to detect and defend against LLMjacking attacks, as a massive bill is far from the only consequence they may have to contend with.