Cybercrime Cartels in Southeast Asia Leveraging AI
Written by Karolis Liucveikis
According to a recent report by the United Nations Office on Drugs and Crime (UNODC), a large and diverse set of malicious AI tools has been developed across Southeast Asia to meet the needs of the region's cybercrime cartels.
Tools for generating convincing deepfakes appear to be the most popular: UNODC recorded an exponential increase in mentions across Telegram channels, which act as marketplaces for these tools.
Highlighting the problem, UNODC stated,
With the growing public accessibility of generative AI tools, this technology has become a powerful force multiplier for criminal activities such as identity theft, fraud, data privacy violations, and intellectual property breaches, as well as threats to national security. The increased availability of open-source tools further amplifies the risk, enabling a wider range of illicit activities, including biometric identification fraud and the creation of AI-assisted sextortion and other fraudulent content.
The report tracks several popular AI attack vectors: generative AI used to create phishing messages in multiple languages, chatbots that manipulate victims, mass-produced social media disinformation, and fake documents designed to bypass know-your-customer (KYC) checks.
Further, to supplement generative AI tools, emphasis has seemingly been placed on polymorphic malware capable of evading security software and identifying ideal targets. Polymorphic malware constantly changes its identifiable features, such as file hashes and byte signatures, so that traditional signature-based detection methods no longer recognize it.
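To illustrate the underlying principle, the minimal Python sketch below shows why hash-based signature matching fails once even a few inert bytes change; the byte strings are harmless placeholders standing in for payloads, not real malware.

    import hashlib

    # Hypothetical placeholder payloads: identical behavior, different inert padding.
    variant_1 = b"placeholder_payload" + b"\x00" * 4
    variant_2 = b"placeholder_payload" + b"\x00" * 8

    # A scanner matching known-bad SHA-256 hashes sees two unrelated files,
    # so a signature written for variant_1 never fires on variant_2.
    print(hashlib.sha256(variant_1).hexdigest())
    print(hashlib.sha256(variant_2).hexdigest())
    print(hashlib.sha256(variant_1).digest() == hashlib.sha256(variant_2).digest())  # False

This is also why modern defenses lean on behavioral and heuristic analysis rather than static signatures alone.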
In the realm of deepfake attacks, activity and interest have exploded: UNODC's internal research tracked a 600% increase in mentions of deepfakes across cybercriminal Telegram channels and underground forums between February and June 2024.
Research from third parties suggests that 2022 and 2023 were marquee years for deepfake crimes, with such crimes rising more than 1,500% year over year. Further, face swap injection attacks, often used to circumvent online KYC verification checks, rose 704% in the second half of 2023 compared with the first.
To this end, UNODC researchers provided the following evidence in support of their internal research findings,
These findings are consistent with other recent research. For instance, according to iProov’s Threat Intelligence Report 2024, statistics show face swap injection attacks increased by a staggering 704 percent in the second half of 2023 compared to the first half. Another analysis by Point Predictive of over 10 million instant messages from the top 25 Telegram fraud forums between 2020 and 2024 revealed a massive spike in related keyword mentions, surging to over 37,000 messages in March 2024—a 900 percent increase over the previous month.
To make matters worse, there is increasing evidence that jailbroken AI tools built on large language models are being used both to develop malware and to process data in ways that make victim profiling more efficient.
Deepfake technology has also taken a leap forward in the region recently, with illicit products sold via Telegram now advertising several new capabilities, including an integrated audio deepfake, or so-called voice swap, feature. Some "vendors" also offer same-day on-site installation across several Southeast Asian countries.
Deepfake Attacks in the Wild
To illustrate the real-world implications of this rise in deepfake attacks, UNODC detailed several incidents, among them one of the highest-value real-time deepfake attacks reported to date. Arup, a British engineering firm, confirmed in May 2024 that its Hong Kong office lost 25.6 million USD following a coordinated deepfake attack.
In summary, the employee involved received a phishing email in January, supposedly from the company's Chief Financial Officer (CFO) in London, instructing him to facilitate a secret transaction. The employee later joined a video conference in which the CFO and several other participants, believed to be senior management, were in fact deepfake recreations.
In another, more recent incident, police in Lamphun, Thailand, issued a public warning about cybercrime syndicates using deepfakes to impersonate police officers for financial gain. In one such campaign, scammers used manipulated images of a female officer, who ran a popular social media page about her transition from accounting to law enforcement, to create realistic video calls.
The deepfake convincingly mimicked the officer's voice and appearance, tricking victims into believing they were speaking with a legitimate officer from the Lamphun City Police.
Deepfake attacks with political motives also appear to be trending upward. In July 2024, a deepfake video emerged showing a Southeast Asian head of state using what appeared to be an illicit substance, stirring controversy just days before the leader announced a significant policy change to address online gambling and related criminality in that country.
Earlier, in April, an altered audio clip surfaced that allegedly featured the same head of state authorizing the use of force against a neighboring state.
In December 2023, a deepfake video emerged showing Singapore's Prime Minister Lee Hsien Loong and Deputy Prime Minister Lawrence Wong falsely promoting cryptocurrency and investment products, misleading the public and illustrating the potential for deepfakes to facilitate complex fraudulent activities.
UNODC has also detailed how deepfakes are used to facilitate sextortion campaigns, particularly in Vietnam and the countries of the Golden Triangle, using two primary techniques.
Researchers went on to state,
The first strategy involves stealing images from victims’ social media profiles and processing the images through deepfake software to create explicit videos or images, which are subsequently used to extort victims via major social media platforms. The second approach involves the creation of fake social media profiles to befriend and build trust with victims prior to convincing them to share explicit images or participate in recorded video calls, during which deepfake technology is used to manipulate the victim. The content captured in these interactions is then used to extort the victim.
Victims often feel that their only option is to comply, incurring significant financial losses in the hope of being spared greater shame. Sadly, many have reported being repeatedly extorted and victimized after the initial transfer of funds, and a growing number of suicides have been connected to such schemes.