
When nearly half the chief information officers in Logicalis’s April 2026 survey said they wish AI had “never been invented,” they were not posing as Luddites. They were describing a queue. TechTarget’s news brief on AI security pulls three threads from the past two weeks (the Logicalis CIO findings, the Q1 2026 bank earnings calls, and a freshly patched Excel-Copilot agent flaw) that all describe the same shape: AI is adding work to security teams faster than those teams are adding capacity to absorb it.
- More than one-third of organizations in the Logicalis sample report reduced breach detection capabilities and slower incident response times since AI rollout accelerated.
- JPMorgan Chase, Morgan Stanley, Goldman Sachs, and BNY all flagged AI risk on Q1 2026 earnings calls; 80% of banking executives now fold cybersecurity into their AI budgets per KPMG’s AI Quarterly Pulse Survey.
- CVE-2026-26144, an Excel cross-site scripting flaw exploitable through Copilot Agent mode, lets attackers exfiltrate data with no user interaction and no visible prompt, reframing what a “low-severity XSS” can mean once an AI agent holds the user’s permissions.
Inside the Logicalis Queue: A Workforce That Can’t Triage Fast Enough
Logicalis, the global managed-services and IT-solutions provider, published its findings as part of the firm’s recurring CIO Research. More than a quarter of the surveyed CIOs identified AI as a significant source of risk, ranking it alongside malware and ransomware. The same respondents named four pressures piling on security teams at once: employee misuse of AI tools, limited governance, shadow AI deployments that bypass procurement, and application sprawl. Bob Bailkoski, Logicalis Group’s chief executive, framed the operational squeeze: “AI is a powerful force in cybersecurity, but without the right skills and governance, it can create more vulnerabilities than protection. CIOs have the challenging task of defending their organizations against AI-driven threats, but also from the risks posed by the very AI tools meant to safeguard them.” The “wish it had never been invented” share is a measurement of how far operational reality has drifted from deployment pace.
The same TechTarget brief catalogs a second pressure point. Big banks treated their Q1 2026 earnings calls as a venue to address AI security worries directly. Anthropic’s Claude Mythos Preview, the frontier model that surfaced earlier this year, has already uncovered thousands of critical flaws in browsers and operating systems, raising the question of who patches all of them. The 80% AI-cybersecurity budget-overlap figure from KPMG is the financial-services answer to the Logicalis attrition signal, with money tracking the load shift while headcount lags.
Why the AI Security Gap Widens Faster Than Patching Closes It
The most operationally consequential item in the brief is the Excel-Copilot CVE-2026-26144, because it illustrates the mechanism the Logicalis numbers describe. The XSS class itself is decades old; the impact profile is new. Researchers documented the flaw as letting an attacker embed a malicious payload in an Excel file that, when opened with Copilot Agent mode active, triggers data exfiltration to attacker-controlled endpoints without any user interaction and without a visible prompt. The agent’s permissions, not the original vulnerability class, set the blast radius. What Logicalis under-emphasizes in its framing of “AI risk” is precisely this point: the legacy taxonomy of XSS, SQL injection, or input-validation flaw no longer predicts what an exploited bug can do once an AI agent inherits the user’s authorization context and the breach window collapses from a phishing chain to a single attachment open.
That is the gap-widening dynamic. A traditional XSS triage at 9:00 AM on Monday assumes a user clicks something, a session token leaks, and the security operations team has hours to spot the anomaly in identity telemetry. A Copilot-amplified XSS at 9:00 AM on Monday assumes the agent executes the exfiltration silently and the team has nothing in identity telemetry to catch because no human interaction occurred. The reduced detection capability the Logicalis CIOs report is not a generic staffing complaint; it is the operationally specific consequence of agent-mediated exploitation outrunning detection logic written for human-mediated exploitation. Every old vulnerability becomes, in this framing, a new AI vulnerability the moment an agent has read-write scope over the affected workflow.
Three Moves to Close the Defender Capacity Gap on AI Security
The sequencing matters: governance frames what is permitted, detection logic narrows what is escalated, and headcount or tooling absorbs what gets escalated. Skipping the first step means the second and third drown.
Audit agent permissions before patching agent-class CVEs. The fix for CVE-2026-26144 is the Microsoft patch; the durable mitigation is a tenant-level inventory of which AI agents hold which Microsoft Graph permissions and which file types those agents can read or modify without explicit user consent. Closing the bug closes one variant; restricting agent scope closes the class.
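As a rough illustration of what that tenant-level inventory check can look like once the grant data is exported, here is a minimal sketch; the agent names, scope strings, and the approved-scope baseline are all illustrative assumptions, not a real Microsoft Graph schema:

```python
# Sketch: flag AI agents whose granted scopes exceed an approved baseline.
# Field names ("agent", "scopes") and the scope strings are illustrative;
# adapt them to however your tenant's permission grants are exported.

APPROVED_SCOPES = {"Files.Read"}  # example baseline for read-only agents

def audit_agent_scopes(grants):
    """Return agents holding any scope outside the approved baseline."""
    findings = []
    for grant in grants:
        excess = set(grant["scopes"]) - APPROVED_SCOPES
        if excess:
            findings.append({"agent": grant["agent"],
                             "excess": sorted(excess)})
    return findings

grants = [
    {"agent": "copilot-excel", "scopes": ["Files.Read", "Files.ReadWrite.All"]},
    {"agent": "summarizer-bot", "scopes": ["Files.Read"]},
]
print(audit_agent_scopes(grants))
```

The point of the sketch is the shape of the control, not the mechanics: a standing baseline, a periodic diff against it, and a findings list that feeds remediation, so restricting scope closes the class while the patch closes the variant.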
Rewrite detection rules to assume zero user interaction. The Logicalis “reduced detection capability” finding traces directly to SIEM and EDR logic that anchors on user-initiated events. Add agent-action correlation rules that fire on data egress originating from an authenticated AI session with no prior keystroke, mouse, or browser-foreground signal in the preceding sixty seconds. Volume will spike before it settles; treat that as the cost of the new baseline.
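A minimal sketch of that correlation logic, assuming a simplified, time-ordered event stream; the field names, the `ai_agent` actor label, and the sixty-second window are illustrative, not any specific SIEM's rule syntax:

```python
# Sketch: flag data egress from an AI-agent session with no human-input
# signal (keystroke, mouse, browser-foreground) in the preceding 60 seconds.
# Real pipelines would run this over normalized telemetry, not dicts.

HUMAN_SIGNALS = {"keystroke", "mouse", "browser_foreground"}
WINDOW_SECONDS = 60

def flag_silent_agent_egress(events):
    """events: dicts sorted by 'ts' (epoch seconds); returns silent egress."""
    alerts = []
    last_human_ts = {}  # session id -> timestamp of last human-input signal
    for ev in events:
        sid = ev["session"]
        if ev["type"] in HUMAN_SIGNALS:
            last_human_ts[sid] = ev["ts"]
        elif ev["type"] == "egress" and ev.get("actor") == "ai_agent":
            last = last_human_ts.get(sid)
            if last is None or ev["ts"] - last > WINDOW_SECONDS:
                alerts.append(ev)
    return alerts

events = [
    {"ts": 100, "session": "s1", "type": "keystroke"},
    {"ts": 130, "session": "s1", "type": "egress", "actor": "ai_agent"},
    {"ts": 500, "session": "s2", "type": "egress", "actor": "ai_agent"},
]
# Only the s2 egress, with no prior human signal in its session, is flagged.
print(flag_silent_agent_egress(events))
```

Note what the rule inverts: instead of anchoring on a user-initiated event and asking what followed, it anchors on the egress and asks whether any human signal preceded it at all.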
Stand up shadow-AI discovery as a continuous control, not a quarterly audit. The Logicalis CIOs name shadow AI and app sprawl as two of the four pressures, and the bank earnings disclosures show the same instinct (the 80% AI-cybersecurity budget overlap is partly a discovery budget). Inventory which AI tools are reaching production data through approved and unapproved paths weekly, and feed that inventory into the access-review cadence so the agent-permissions audit in the first move stays current.

The CIOs who told Logicalis they wish AI had never been invented were not asking for a rollback. They were asking for the time and visibility to actually defend what is being shipped, and AI security as a discipline closes that gap by making agent scope, detection logic, and shadow-AI discovery the operational floor.
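The weekly discovery step in the third move reduces, at its core, to a set difference between AI endpoints observed in traffic and the approved-tool register; a minimal sketch, with all domain names as illustrative placeholders:

```python
# Sketch: weekly shadow-AI discovery as a set difference between observed
# AI endpoints (e.g., from egress or DNS logs) and the approved register.
# Every domain here is an illustrative placeholder, not a real endpoint.

APPROVED = {"copilot.microsoft.com", "api.internal-llm.example"}

def find_shadow_ai(observed_domains):
    """Return AI endpoints seen in traffic but absent from the register."""
    return sorted(set(observed_domains) - APPROVED)

observed = [
    "copilot.microsoft.com",
    "api.unvetted-llm.example",
    "api.internal-llm.example",
]
print(find_shadow_ai(observed))  # unapproved endpoints feed the access review
```

Run weekly, the unapproved list is what keeps the agent-permissions audit current: each new endpoint either gets registered and scoped or gets blocked.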