Malware generated with Generative AI steals data


In the past, we’ve seen malware wreak havoc by holding data on computers hostage: ransomware, also known as file-encrypting malware, locks up a victim’s files and demands payment for their release. The world of cyber threats is evolving rapidly, however, and the latest generation of malware is far more sophisticated and dangerous. What’s even more concerning is that this new breed of malware operates with the help of Artificial Intelligence (AI), giving it a level of precision previously unheard of.

According to CERT-UA, Ukraine’s national computer emergency response team, a new type of malware known as LameHug has emerged. Written in Python, the malware uses the Hugging Face API to communicate with a Large Language Model (LLM), reportedly an open-source Qwen model developed by China’s Alibaba. Models of this kind, typically used for tasks like language generation and understanding, are now being weaponized to launch cyberattacks. The malware itself targets Windows systems, stealing data and, in some cases, completely wiping it from compromised devices.
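To make the mechanism concrete, here is a minimal, benign sketch of how a Python program can assemble a text-generation request for the public Hugging Face Inference API, the same kind of endpoint CERT-UA says the malware abuses. The model name matches public reporting but should be treated as illustrative; the `build_request` helper, the placeholder token, and the harmless prompt are this article's own assumptions, and nothing is actually sent over the network.

```python
import json

# Public Hugging Face Inference API endpoint pattern; the model name
# here reflects public reporting on LameHug but is purely illustrative.
API_URL = "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct"

def build_request(prompt: str, token: str) -> dict:
    """Assemble (but do not send) the HTTP request a client would
    issue to ask the hosted LLM to generate text for a prompt."""
    return {
        "url": API_URL,
        "headers": {
            # The API authenticates with a bearer token (an HF access token).
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        # The Inference API expects a JSON body with an "inputs" field.
        "body": json.dumps({"inputs": prompt}),
    }

# Hypothetical, harmless usage: the token value is a placeholder.
request = build_request("Summarize what a Windows user folder is.", "hf_xxx")
```

Because the traffic goes to a well-known, legitimate AI hosting service over ordinary HTTPS, requests like this blend in with normal developer activity, which is part of what makes the technique hard to flag.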

To simplify, the criminals behind LameHug are using an LLM to generate attack commands on the fly rather than hard-coding them into the malware, which lets them refine their tactics continuously and makes the threat harder for signature-based security tools to detect and neutralize. Once the malware is deployed on a victim’s Windows device, it executes its payload, either exfiltrating sensitive data or wiping it clean if the attackers feel their objectives haven’t been fully met.

The LameHug malware campaign reportedly began in 2023, and it’s believed to be the work of APT28, a cyber-espionage group linked to previous high-profile hacks and suspected of ties to the GRU, Russia’s military intelligence agency. While previous iterations of malware spread primarily through email attachments, often concealed within PDF files or other documents, this new method is far more insidious.

What sets LameHug apart from previous cyber threats is its use of LLMs to automate and optimize attacks. Traditionally, cybercriminals relied on manual tactics to distribute malware, typically through phishing emails with malicious attachments. With the advent of AI, they can now leverage large language models to autonomously craft and launch attacks. The result is a much higher success rate, as the AI continually adapts and learns from each engagement, increasing the efficiency of each subsequent attack.

In summary, the evolution of malware in 2023 and beyond signals a new era of cybercrime where AI plays a central role. With tools like LameHug being driven by LLMs, cybercriminals now have access to a level of sophistication that makes traditional methods look outdated. This shift in tactics has serious implications for cybersecurity, as it raises the stakes in the ongoing battle to protect sensitive data from increasingly intelligent adversaries.


Naveen Goud
Naveen Goud is a writer at Cybersecurity Insiders covering topics such as Mergers & Acquisitions, Startups, Cyber Attacks, Cloud Security and Mobile Security.
