For most organizations, this is the year AI becomes infrastructure. Agents now execute actions autonomously: modifying records, creating accounts, and pushing code through API calls that complete before any human reviews them. That makes every AI deployment a security risk, whether organizations treat it as one or not.
The security stacks found in most organizations today were built for a different world: one where humans were the only actors, processes were deterministic, data stayed in recognizable forms, and trust was verified at the browser. That world no longer exists.
This report is based on a comprehensive survey of 1,253 cybersecurity professionals, exploring how organizations are securing AI across four dimensions: governance, visibility, data protection, and agent control.
AI-driven risk is expanding from human misuse to machine autonomy, while most controls are still catching up to the first challenge. The survey points to four architectural priorities: continuous visibility into all AI activity, including agent and machine-to-machine (M2M) traffic; inline enforcement that avoids adding friction and latency; semantic-aware data controls that evaluate meaning rather than patterns; and zero trust extended to non-human identities (NHIs). The chapters that follow measure how mature most organizations are against these priorities, and how to close the execution gap.
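The gap between pattern-based and semantic-aware data controls can be made concrete. The sketch below is illustrative only and not drawn from the report: the regex detector and the keyword-context scorer are hypothetical stand-ins (a production semantic control would use an ML classifier), but the contrast shows why a control that evaluates meaning catches leaks that format matching misses.

```python
import re

# Pattern-based control: flags only recognizable formats,
# e.g. a US Social Security number written as 123-45-6789.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def pattern_flag(text: str) -> bool:
    """Return True if the text matches a known sensitive-data pattern."""
    return bool(SSN_RE.search(text))

# Semantic-aware control (toy stand-in): scores the text against
# sensitive *topics* rather than exact formats. Real deployments
# would use an ML classifier; this keyword scorer only illustrates
# the architectural difference.
SENSITIVE_CONTEXT = {"social security", "ssn", "salary", "diagnosis"}

def semantic_flag(text: str, threshold: int = 1) -> bool:
    """Return True if the text's subject matter looks sensitive."""
    lowered = text.lower()
    score = sum(1 for phrase in SENSITIVE_CONTEXT if phrase in lowered)
    return score >= threshold

# A paraphrased leak: no digits in the expected format.
leak = "Her social security number, spelled out: one two three ..."
print(pattern_flag(leak))   # the regex misses the paraphrase
print(semantic_flag(leak))  # the meaning-level check still fires
```

The same split applies to prompts and agent outputs: a format-matching filter sees clean text, while a meaning-aware evaluator can still recognize that sensitive content is leaving the boundary.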