
In a significant and largely unprecedented move, both the United States and China have reportedly taken steps to restrict the use of Anthropic's artificial intelligence technology within their government systems. The decision reflects growing global concern over the security implications of advanced AI tools, particularly when they are deployed in sensitive national infrastructure.
According to emerging reports, the United States government—under the leadership of President Donald Trump—has initiated measures to ban the use of Anthropic’s AI systems across key federal operations. At the same time, China, led by President Xi Jinping, has already begun implementing similar restrictions. While both nations often diverge on technology policy, this parallel action signals a shared anxiety about potential cybersecurity risks linked to third-party AI platforms.
In the United States, the Pentagon has taken a leading role in enforcing this directive. The Department of Defense has issued formal instructions to various national infrastructure agencies, especially those connected to military and defense systems, to discontinue the use of Anthropic technologies within a 180-day timeframe. This includes critical sectors such as power grids, nuclear facilities, and ballistic missile defense systems—areas where even minor vulnerabilities could have severe consequences.
The core concern driving this decision is the possibility of supply chain risks associated with Anthropic’s AI models, particularly its Claude system. Officials worry that integrating such technology into sensitive environments could expose the country to hidden vulnerabilities, whether through unauthorized access, data leaks, or manipulation of automated systems. By removing these tools, the Pentagon aims to strengthen control over cybersecurity operations and reduce reliance on external AI providers in matters of national defense.
Anthropic, for its part, says it is holding to its stated red lines and is prepared to challenge such decisions through legal channels. The company adds, however, that it needs time to assess these developments rigorously and to respond without bias toward any nation or issue.
Meanwhile, China has reportedly already restricted the use of Anthropic tools over the past two months. Although details from Beijing remain limited, the move appears to stem from similar concerns about data security and the potential misuse of AI systems. The Chinese government has been increasingly cautious about foreign-developed technologies operating within its borders, especially in sectors tied to national security and surveillance.
Another major issue raised by U.S. defense officials is the theoretical risk that advanced AI models could be exploited for large-scale surveillance or even to interfere with critical infrastructure. There are fears that such systems, if compromised, might enable unauthorized monitoring of citizens or disrupt automated processes in high-stakes environments, including nuclear systems. While these risks remain largely speculative, they are taken seriously given the rapid evolution and growing capabilities of AI technologies.
Overall, the parallel caution shown by both the United States and China highlights a broader global trend: governments are becoming increasingly wary of integrating powerful AI systems into essential infrastructure without fully understanding the long-term risks. As artificial intelligence continues to advance, ensuring its safe and secure deployment will likely remain a top priority for nations around the world.