
In today’s rapidly evolving cyber landscape, even those with minimal technical expertise can now launch large-scale ransomware attacks. This shift is largely due to the increasing use of Artificial Intelligence (AI) technologies in creating and spreading file-encrypting malware.
A recent analysis by Anthropic, the American AI startup behind the Claude family of Large Language Models (LLMs), has highlighted a concerning trend. Cybercriminals are exploiting AI models such as Claude, run on Kali Linux systems, to conduct sophisticated attacks. In an operation Anthropic tracks as GTG-2002, the attackers used AI-powered tooling to scan VPN endpoints and harvest sensitive data such as user credentials. To conceal their activity, they disguised their malicious tools as legitimate Microsoft software.
Masquerading as software from a trusted technology company adds a layer of complexity for defenders. In this campaign, 17 organizations were compromised, with the cybercriminals demanding Bitcoin payments ranging from $75,000 to $500,000 in exchange for restoring access to the encrypted files.
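For defenders, one practical takeaway is to hunt for binaries that borrow the names of well-known Microsoft or Windows utilities but sit in unexpected locations. The Python sketch below illustrates that idea; it is not drawn from Anthropic's report, and the name watchlist and scan directories are illustrative assumptions that would need tuning for a real environment.

```python
import hashlib
from pathlib import Path

# Illustrative watchlist: executable names commonly spoofed by malware
# posing as legitimate Microsoft tools (an assumption, not exhaustive).
SUSPICIOUS_NAMES = {"svchost.exe", "csrss.exe", "psexec.exe", "procdump.exe"}

# Directories where these tools would not normally live; any hit here
# deserves a closer look (example paths, adjust for your environment).
SCAN_DIRS = [Path(r"C:\Users"), Path("/tmp")]

def sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file for threat-intel lookups."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_masquerading_binaries() -> None:
    """Flag executables whose names mimic trusted Microsoft utilities."""
    for base in SCAN_DIRS:
        if not base.exists():
            continue
        for path in base.rglob("*.exe"):
            if path.name.lower() in SUSPICIOUS_NAMES:
                # Log the hash so it can be compared against known-good
                # vendor hashes or a threat-intelligence feed.
                print(f"[!] {path} sha256={sha256(path)}")

if __name__ == "__main__":
    find_masquerading_binaries()
```

Filename matching alone is a weak signal; on Windows, pairing it with a publisher-signature check (for example, PowerShell's Get-AuthenticodeSignature cmdlet) makes it far harder for renamed malware to pass as a trusted tool.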
The use of AI tools to launch ransomware campaigns marks a significant shift in cybercriminal tactics, and a clear sign that the world of cybercrime is not only evolving but accelerating. Novices, individuals with little or no prior experience in cybersecurity, can now leverage AI to execute devastating attacks, and the ease with which they can do so makes cyber threats harder than ever to defend against.
In addition to ransomware, there is growing concern that the same AI techniques could soon be turned against the vast network of connected devices that make up the Internet of Things (IoT), posing a serious threat to the more than 2.7 billion IoT devices estimated to be vulnerable to attack.
This issue is not unique to Anthropic. Just a few weeks ago, OpenAI, the Microsoft-backed company behind the popular ChatGPT platform, acknowledged that cybercriminals were using its models to develop malware and launch attack campaigns. Anthropic's findings mirror this, and similar cases are likely to surface in the near future.
However, it's important for readers of Cybersecurity Insiders to note that AI technology itself should not be vilified. The fault lies not in the technology but in those who misuse it for malicious purposes. Like any tool, AI can be used for good or ill. Consider a car: it is designed for transportation, yet it can be misused for crimes such as trafficking. In the same way, AI can strengthen cybersecurity or be exploited by criminals to further their objectives.