
Chinese state-linked cyber operations are once again under scrutiny following allegations that the advanced persistent threat (APT) group known as APT31 leveraged Google’s Gemini artificial intelligence platform to conduct cyberattacks against U.S. businesses.
The accusation adds to existing concerns surrounding so-called “LLM distillation attacks,” in which malicious actors repeatedly query large language models (LLMs) to harvest input-output pairs. These harvested responses can then be used to train rival AI systems, effectively extracting knowledge from proprietary models.
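The mechanics of such a distillation attack are simple in outline: an operator scripts bulk queries against a target model and logs each prompt together with the model's reply as training data. A minimal conceptual sketch is below; `query_model` is a hypothetical stand-in for a call to a hosted LLM API, not any real provider's client library.

```python
import json

def query_model(prompt: str) -> str:
    # Hypothetical placeholder: a real harvesting operation would send
    # the prompt to the target model's API and return its completion.
    return f"response to: {prompt}"

def harvest_pairs(prompts, out_path):
    """Log (prompt, completion) pairs in JSONL, a common fine-tuning format."""
    pairs = []
    for p in prompts:
        pairs.append({"prompt": p, "completion": query_model(p)})
    with open(out_path, "w") as f:
        for pair in pairs:
            f.write(json.dumps(pair) + "\n")
    return pairs

pairs = harvest_pairs(["Explain TLS handshakes", "Summarize RFC 791"], "distill.jsonl")
print(len(pairs))  # 2
```

At scale, the resulting corpus can be used to fine-tune a rival model, which is why providers rate-limit and monitor for repetitive, broad-coverage query patterns.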
According to Google Threat Intelligence, many of the operations attributed to APT31 were at least partially successful. The group reportedly carried out what experts describe as “semi-autonomous offensive operations,” meaning that AI tools may have been used to assist with reconnaissance, vulnerability identification, and payload generation, while human operators maintained strategic oversight. This hybrid model of automation and human direction reflects a growing trend in cyber warfare, where AI augments traditional hacking techniques rather than fully replacing them.
A report first published by The Register indicated that the Beijing-based group has been targeting large U.S. enterprises since 2024. APT31 is also known by several other aliases, including Violet Typhoon, Zirconium, and Judgment Panda—names assigned by various cybersecurity firms tracking its activities. The group has long been associated with espionage campaigns targeting political institutions, corporations, and critical infrastructure.
The recent allegations further claim that Chinese operators employed a red-teaming framework known as HexStrike to identify and exploit weaknesses in American organizations. Reported tactics include remote code execution exploits, web application firewall bypass techniques, and SQL injection attacks. These are well-established methods in cyber intrusion campaigns, but the integration of AI tools may have accelerated vulnerability discovery and exploitation processes.
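Of the tactics listed, SQL injection is the most readily illustrated: it abuses queries built by string concatenation, where attacker-supplied input is interpreted as SQL rather than data. The sketch below uses Python's built-in sqlite3 module to show the flaw and its standard mitigation, parameterized queries; it is a generic textbook example, not a reconstruction of the reported attacks.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# Vulnerable pattern: user input concatenated directly into the query,
# so a crafted value rewrites the WHERE clause.
user_input = "' OR '1'='1"
unsafe = f"SELECT name FROM users WHERE name = '{user_input}'"
print(len(conn.execute(unsafe).fetchall()))  # 2, every row leaks

# Safe pattern: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(safe))  # 0, no user has that literal name
```

Automated tooling shortens the loop of finding endpoints where the first pattern survives, which is the acceleration effect the report describes.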
Notably, Mandiant—the cybersecurity firm owned by Google—previously accused Beijing-linked actors of using Anthropic’s Claude AI system to facilitate automated cyber operations. Now, similar claims are being directed at Gemini. In both instances, major U.S.-based AI platforms have become focal points in broader geopolitical tensions over emerging technologies and their potential misuse.
In a related development, Google has issued warnings that certain China-linked groups are attempting to recruit employees within Western companies. These efforts allegedly involve offering lucrative financial incentives or creating so-called “honeytrap” scenarios to cultivate insider threats. In some cases, trained operatives may even seek employment directly within targeted organizations to gain internal access.
Together, these allegations underscore the evolving intersection of artificial intelligence and cyber warfare. As AI systems become more capable and widely accessible, governments and corporations face mounting pressure to strengthen safeguards, monitor misuse, and address insider vulnerabilities in an increasingly complex threat landscape.