
Is AI to be feared? Maybe, but not for the reasons we thought.
At Black Hat Europe in December, Gavin Millard, VP of Product at Tenable, pulled the cover off recent AI-driven attacks, arguing that their danger lies in their speed and scale, not their sophistication.
Depending on how organizations choose to respond, this could be either good news or bad news going into 2026.
Anthropic Attack Was Speed, Not Smarts
Talking about the recent, pivotal attack by Chinese hackers using Anthropic’s Claude, Millard noted that AI was used to do simple things.
“We think AI’s going to get us with this powerful polymorphic malware,” Millard said, “but the AI attack was just automation of existing tactics and techniques.”
Instead of leveraging advanced malware-creating capabilities, the attackers used Claude to automate attacks against their victims, performing simple steps at an unprecedented pace: inspecting target systems, identifying high-value databases, testing vulnerabilities, and harvesting credentials.
As Millard said, “AI may have made the attack far more effective than it would have been previously, but it wasn’t anything new. It wasn’t anything different. It was just automated at scale.”
Notably, this attack leveraged agentic AI; that is, AI with the power to reason, make decisions, and even generate its own code.
But beyond engineering one malicious script in the course of its work, that capability didn’t seem to matter: everything else the AI did was relatively routine.
The Nature of AI-Powered Attacks Today
A quick look at the AI-driven attacks making recent headlines confirms that this is already a common trend.
Although artificial intelligence is feared for its “brain” (the ability to create highly sophisticated malware and other advanced exploits), most real-world attacks still rely on its brawn.
Zach Church of MIT Sloan notes how “AI is being used regularly in cyberattacks to create malware, phishing campaigns, and deepfake-driven social engineering, such as fake customer service calls.”
Agentic AI does this sort of thing on a more sophisticated scale, applying intelligence to automation. As MIT Technology Review notes, “Agents are… significantly smarter than the kinds of bots that are typically used to hack into systems.”
According to Dmitrii Volkov, research lead at Palisade, this means agents “can look at a target and guess the best ways to penetrate it. That kind of thing is out of reach of, like, dumb scripted bots.”
Again: impressive, but not beyond the scope of what defenders were dealing with before. Attackers keep coming back to the same methods, because why change what’s working?
People are always going to use weak passwords, misconfigurations will always be made, and vulnerabilities will always exist.
That’s why AI is still being used to do the simple stuff. And it’s also why we should pay attention to the little things.
Our Best Defense Against Agentic AI Attacks
In his talk, Millard highlighted several ways in which defenders can strategically outsmart agentic AI-based attacks.
Out-Speed Them
Thanks largely to AI-powered tools, the disclosure-to-exploitation gap is now roughly two hours. This is the time between when a vulnerability is made public and when attackers start exploiting it in victim environments.
Meanwhile, compliance standards still only require organizations to patch within 30 to 90 days.
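Taking those two figures at face value, the mismatch is easy to quantify with some back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope comparison of the exploitation window vs. compliance
# patch deadlines, using the figures quoted in the article.
exploitation_window_h = 2                # rough disclosure-to-exploitation gap, in hours
patch_deadline_h = (30 * 24, 90 * 24)    # common 30- and 90-day compliance windows, in hours

# How many times faster attackers move than the compliance clock allows.
ratios = [deadline / exploitation_window_h for deadline in patch_deadline_h]
print(ratios)  # [360.0, 1080.0]: attackers are 360x to 1080x ahead of the deadline
```

In other words, a team patching exactly on the compliance deadline leaves a window hundreds of times longer than attackers need.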
Millard advocated offsetting this by using agentic AI across workflows to automate detection and response, thereby keeping pace.
Select Key Vulnerabilities, as Attackers Do
While compliance mandates remediating every vulnerability with a CVSS score of 7.0 or above, that is far too many to serve as a priority list: according to Millard, nearly 60% of all vulnerabilities fall into that range.
“When I was working as a pen tester 15 years ago, you wouldn’t believe how many environments we just walked into because they were vulnerable,” he recounted. “What’s interesting is that we were working with a trusted set of about 10-15 vulnerabilities we could exploit every time. And today’s attackers still do.”
He advised that companies use agentic AI in the context of exposure management to determine which handful of vulnerabilities has the greatest impact on the organization.
“What I need to know are those things that have been targeted, and how to fix those few things quickly,” Millard stated. “That’s how you become gold stamp now.”
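The difference between a raw CVSS threshold and the exposure-based approach Millard describes can be sketched in a few lines. This is a minimal illustration with invented findings and field names, not any vendor’s API; real exposure-management platforms draw on threat intelligence such as known-exploited-vulnerability feeds and asset criticality data.

```python
# Hypothetical illustration: prioritizing by known exploitation and asset
# criticality rather than raw CVSS score. All data here is invented.
findings = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "known_exploited": True,  "asset_critical": True},
    {"cve": "CVE-2024-0002", "cvss": 7.5, "known_exploited": False, "asset_critical": True},
    {"cve": "CVE-2024-0003", "cvss": 8.1, "known_exploited": True,  "asset_critical": False},
    {"cve": "CVE-2024-0004", "cvss": 7.0, "known_exploited": False, "asset_critical": False},
]

# A pure CVSS-threshold policy flags everything at 7.0 and above.
cvss_policy = [f["cve"] for f in findings if f["cvss"] >= 7.0]

# An exposure-based policy surfaces only the actively exploited handful,
# putting the ones on business-critical assets first.
exposure_policy = sorted(
    (f for f in findings if f["known_exploited"]),
    key=lambda f: (not f["asset_critical"], -f["cvss"]),
)

print(len(cvss_policy))                     # 4: the CVSS policy flags everything
print([f["cve"] for f in exposure_policy])  # ['CVE-2024-0001', 'CVE-2024-0003']
```

On this toy data the threshold policy flags all four findings, while the exposure-based policy surfaces the two that are actually being exploited, critical assets first — the “few things to fix quickly” Millard is pointing at.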
Create Resistance with Agentic AI
Millard also emphasized the importance of increasing friction as AI-enhanced adversaries travel through attack paths.
He cited Ohm’s Law, under which, for a given voltage, the current in an electrical circuit is inversely proportional to its resistance. In other words, the more resistance, the less current flows, and the fewer attacks succeed.
To Millard, this means leveraging AI-powered platforms whose agents increase resistance across the entire attack surface.
Conclusion
Today’s AI-driven attacks may not be daring, but they are still dangerous.
It’s true: Agentic AI, while it has the capacity to perform complex attacks, is mostly being used to do what attackers already do, only faster, better, and stronger.
But that’s good news, Millard argues. Because defenders have been fighting that fight for decades. And agentic AI can be used both ways.
As he sums up, “AI from a cybersecurity perspective is all about amplification and automation. It’s nothing novel. It’s just got to be faster, and at scale.”
____
About the author: An ardent believer in personal data privacy and the technology behind it, Katrina Thompson is a freelance writer leaning into encryption, data privacy legislation, and the intersection of information technology and human rights. She has written for Bora, Venafi, Tripwire, and many other sites.