
Organizations are rapidly expanding their use of artificial intelligence, but many are not testing those systems at the same pace, according to new research from HackerOne. The report identifies what the company calls an "AI security gap": a disconnect between AI adoption and formal security testing.
The study finds that AI use has grown significantly over the past year. Ninety-four percent of respondents report operating more AI or machine learning systems than they did a year ago. Despite that growth, testing coverage remains uneven: only 66% of organizations say they formally test 61% or more of their AI or ML systems. The difference between those two figures is the 28-point AI security gap.
Organizations operating within that gap appear more likely to encounter security issues tied to AI. According to the survey, 89% of security leaders at organizations with limited testing coverage reported AI-related attacks or vulnerabilities during the past year.
The report also highlights the financial impact of inadequate testing. Security leaders working in environments where AI testing coverage is limited report 70% higher annual remediation costs compared with organizations that test nearly all of their AI systems.
“AI systems are dynamic, evolving with every model update, integration, and data connection and the same is true of modern digital systems overall,” said Kara Sprague, CEO of HackerOne. “As systems become more interconnected and adaptive, risk evolves in real time. Periodic testing assumed stability. Today’s reality requires continuous testing so leaders can detect change, identify what’s exploitable, and mitigate risk before it materializes.”
The findings are based on a survey of more than 300 security leaders across six countries and highlight structural trends shaping AI risk exposure:
• AI risk compounds as deployments scale: Organizations that expanded from a small AI footprint of two systems to a larger footprint of eight to 10 systems reported 82% more attack types and 2.4 times higher attack costs. As AI systems integrate with APIs, enterprise applications and internal data sources, exposure can increase significantly when testing practices do not expand alongside deployment.
• Testing coverage is not keeping pace: While 94% of organizations added AI or ML systems in the past year, only 66% say they formally test 61% or more of their systems. Across all respondents, 84% experienced at least one AI-related attack or vulnerability in the past 12 months. Organizations testing 91% or more of their AI systems are 16% less likely to report an AI-related incident than organizations with lower testing coverage.
• Shadow AI remains a material blind spot: Only 55% of organizations report that they fully track unsanctioned or “shadow” AI usage. When employees independently adopt AI tools in their workflows, organizations may lose visibility into how those systems interact with enterprise applications and data. This unmanaged use can expand the attack surface and introduce governance and compliance risks.
“Organizations keep adding AI systems without thinking about the blast radius,” said Luke Stephens, a security researcher. “These aren’t sandboxed toys. They’re hooked into real data, real APIs, real decision-making. When something goes wrong, it doesn’t stay contained. The cost data in this report reflects what I’ve been seeing in the wild: the longer you wait to test, the more expensive it gets to fix.”
As artificial intelligence systems move deeper into production environments, oversight is becoming a growing priority for leadership teams and regulators. Boards and executives are increasingly seeking evidence that AI systems are properly monitored and tested.
The report concludes that addressing the AI security gap will require organizations to embed continuous security testing into how AI systems are developed, deployed and governed. As AI adoption continues to grow, security practices will need to evolve to ensure organizations maintain visibility into emerging risks.