AI is Already Enhancing the Value of Security Validation

By David Kellerman, Field CTO, Cymulate [ Join Cybersecurity Insiders ]
216

The perception of what an “AI attack” looks like is still evolving. What qualifies as an “attack” might depend on who you are and what you’re most concerned about. Some organizations are primarily concerned with attackers using AI tools to enhance their attack capabilities. Others are worried about internal users leveraging AI services and exposing them to additional risk. Still others might be most worried about an AI tool like a public-facing chatbot being used by attackers to reach sensitive internal resources. There are a wide range of ways attackers can both leverage AI tools for their own benefit and exploit the ability of solutions businesses are already using.

This means today’s organizations face the dual challenge of protecting their own AI systems while also defending against attackers using their own AI-based tools. Although AI introduces significant challenges, there are plenty of steps organizations can take to address them—and it starts with bolstering security fundamentals through consistent testing and validation practices.

Don’t Chase Trends – Focus on Fundamentals

One of the key challenges is the fact that AI involves a substantial amount of infrastructure—and that infrastructure needs to be protected. That means protecting AI solutions from exploitation doesn’t always start with protecting the solutions themselves, but rather the underlying IT infrastructure that supports them. Look at it this way: it doesn’t matter how well protected an AI solution is if an attacker can use a compromised identity to trick the system into thinking their presence is authorized. Likewise, attackers aren’t really using AI to develop “new” attack tactics yet—they’re using it to enhance their existing tactics and make them more effective.

This means protecting your own AI solutions and defending against attackers using AI tools of their own don’t require organizations to reinvent the wheel. In fact, organizations can address both problems not by racing to implement new, AI-specific security solutions, but by doubling down on fundamentals, ensuring that their existing solutions are working as intended. AI tends to get too much credit: generally speaking, an attacker doesn’t need AI to take advantage of an exposure, it just makes their attacks more effective. AI tools are making phishing emails more convincing, but phishing is still phishing. AI is accelerating credential stuffing attacks, but the tactic is still the same. Rather than worrying about what might be coming next, organizations should focus on stopping what’s happening right now.

The Emergence of CTEM Practices

The race to adopt new AI solutions means security is often an afterthought. Look at what happened amid the recent DeepSeek release: organizations that implemented the new AI model quickly found a whole host of security issues that rendered the tool uniquely vulnerable. This is a major contributor to why AI solutions are such an attractive target to attackers—organizations don’t want to risk being left behind by their competitors, which leads them to rush adoption without giving sufficient attention to security. The tactics attackers used to target DeepSeek were not fundamentally different from the attacks they’ve used to target other systems—but DeepSeek was not equipped to defend against them.

Protecting AI starts with making sure attackers can’t get their hands on sensitive data. That might include the datasets the AI model was trained on, or sensitive data being fed into AI solutions. In order to be effective, AI solutions require access to a lot of data, and some of that data will inevitably be sensitive. How can you be sure the AI won’t share your data with someone it shouldn’t? How can you be sure your AI tool can’t be exploited by attackers seeking to trick it into doing something it shouldn’t? The answer is surprisingly obvious: you test it! There is a reason organizations are adopting Continuous Threat Exposure Management (CTEM) practices at an increasing rate—whether you’re dealing with AI solutions or any other system, you need to know whether the protections you have in place are working.

Leveraging Validation and Exposure Management 

First, strong exposure management can help you gain more complete visibility across your digital environments, identifying potential exposures that attackers could exploit. It’s important to remember here that the C in CTEM stands for “continuous,” and that’s especially important when it comes to protecting AI. The AI market is evolving at an astonishing pace, which means organizations can’t rely on testing data from last quarter—they need to know what exposures exist now. Real-time visibility across all systems can help organizations identify where attackers might attempt an incursion.

Of course, it isn’t enough to know that exposures are present—you need to know which ones are actually dangerous. Just because there is a security gap doesn’t automatically mean attackers can leverage it to compromise an AI tool. Often, there are compensating controls in place to address what looks at first like a dangerous vulnerability. Security validation is an important part of CTEM, and it allows organizations to validate which exposures are actually dangerous and prioritize them accordingly. Today’s organizations often receive tens of thousands of security alerts, so the ability to prioritize them in order of their actual threat level gives security teams a significant leg up against attackers. With attackers using AI tools of their own to detect and exploit exposures faster than ever, validation is critical.

Without Validation, AI Is Dangerously Vulnerable 

By combining exposure management practices designed to identify potential exposures and security validation solutions designed to test and prioritize them, organizations can ensure their fundamentals remain strong. Remember, most attackers do not want to engage in a long, protracted, and expensive engagement—they want to be in and out as quickly as possible with as much data as possible. By ensuring that there are no easily exploitable paths to their AI tools (not to mention the rest of their systems), organizations can avoid becoming an easy target. That’s often enough to prompt attackers to move on to greener pastures.

Ad
Join our LinkedIn group Information Security Community!

No posts to display