Fortra 2026 Predictions


The security landscape is accelerating into 2026. AI is reshaping every layer of offense and defense, collapsing long-standing distinctions between insiders and systems, human attackers and automated ones, corporate risk and personal risk. What used to be edge-case speculation is now operational reality: AI agents acting with system-level privileges, criminal marketplaces running like SaaS platforms, token theft eclipsing phishing, and nation-state tactics bleeding directly into commercial targets.

These predictions highlight the shifts security leaders can’t ignore. They’re not incremental trends; they’re the structural breaks that will define the next era of cyber defense.

AI/Insider Threats

Enterprises Will Start Treating AI Systems as Insider Threats

As AI agents gain system-level permissions to act across email, file storage, and identity platforms, companies will need to monitor machine behavior for privilege misuse and data leakage. The shift will happen when organizations realize their AI assistants have broader access than most employees and operate outside traditional user behavior analytics.

AI agents need cross-functional access to be useful, they operate 24/7, and they make thousands of decisions per day that no human reviews. The first time an AI agent is compromised through prompt injection or a supply chain attack and starts quietly exfiltrating customer data under the guise of “helping users,” organizations will realize they built privileged access with no monitoring. Josh Taylor, Lead Security Analyst
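To make the idea concrete, here is a minimal sketch of insider-style monitoring applied to an AI agent. The log schema, scope names, and thresholds are illustrative assumptions, not from any specific product; a real deployment would feed this logic into SIEM or UBA pipelines rather than a standalone script.

```python
# Minimal sketch: treat an AI agent like an insider by checking its action
# log against a least-privilege baseline. Schema and thresholds are
# hypothetical, chosen only to illustrate the two detections described above.
from collections import Counter

# Baseline: scopes this agent is expected to use (least privilege).
AGENT_ALLOWED_SCOPES = {"email.read", "files.read", "calendar.read"}

# Alert if the agent touches more than this many distinct records per window.
EXFIL_VOLUME_THRESHOLD = 500

def review_agent_log(events):
    """Flag privilege misuse and bulk-access patterns in agent activity.

    `events` is an iterable of dicts like:
        {"agent": "assistant-1", "scope": "files.read", "record_id": "..."}
    """
    alerts = []
    records_touched = Counter()
    for ev in events:
        # Detection 1: the agent used a permission outside its baseline.
        if ev["scope"] not in AGENT_ALLOWED_SCOPES:
            alerts.append(f"privilege misuse: {ev['agent']} used {ev['scope']}")
        records_touched[ev["agent"]] += 1
    # Detection 2: bulk access consistent with quiet exfiltration.
    for agent, count in records_touched.items():
        if count > EXFIL_VOLUME_THRESHOLD:
            alerts.append(f"possible exfiltration: {agent} touched {count} records")
    return alerts
```

The point of the sketch is the framing, not the thresholds: once the agent's actions are logged like a user's, existing insider-threat detections (out-of-scope access, abnormal volume) apply directly.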

AI/Legal

The First “AI Liability” Lawsuit

By Q2 2026, we will likely see the first lawsuit over an AI-assisted system, filed after the AI makes a decision that causes measurable business harm: leaking confidential information, violating a regulatory requirement, or making a commitment the company can’t honor.

AI systems are moving from advisory roles to decision-making roles. The lawsuit will likely involve an AI agent that had access to privileged information and disclosed it inappropriately, or an AI assistant that shared proprietary data. This will force the industry to answer questions nobody wants to ask: Who is liable when an AI you gave permission to act on your behalf does something harmful? The vendor? The company? The AI itself? Josh Taylor, Lead Security Analyst

AI/Defenses

AI-Augmented Threats Will Overwhelm Traditional Defenses – Threat actors will deploy AI to craft adaptive, scalable, and personalized attacks that bypass static defenses and exploit human trust. Enterprises must counter this with dynamic, behavior-based security models and prepare for threats that learn and evolve in real time. Josh Taylor, Lead Security Analyst

AI/Extortion Scams

Hyper-personalized extortion scams driven by AI. For example, someone might receive a highly customized email stating that their Tesla has been hacked, and that the attacker will send it over a cliff the next time their loved ones are in the car unless the victim pays $X in bitcoin. The attacker would mention the color and model of the car, as well as the names of the intended victim’s family members. The threat might include details such as a place they often visit or a road the victim often drives on.

AI will help customize the lure based on inputs from the victim’s social media and breach data. The AI would even calculate the right amount to ask for: a high school student might be asked for $300, while a CEO might receive a $250K demand to keep their family safe. The lure would be completely different for someone with an old-school car. For example, I drive a 2001 Porsche, so taking control of my car is off the table. The lure they use on me might instead be a threat to poison my two cats.

In other words, AI will figure out what is dear to the victim, determine a threshold where the victim is more likely to pay vs. contact police, then craft a unique threat tailored to the victim. John Wilson, Senior Fellow, Threat Research

AI/SOCs

Augmented SOCs Become the First Line of Defense

By 2026, AI will evolve into the first responder for cyber defense teams. We’re quickly entering an era where AI will handle the majority of incident triage and containment in seconds, allowing human defenders to focus on strategy, attack forensics and threat hunting. John Grancarich, Chief Strategy Officer

AI/SOCs

AI Will Become a Core Operating Layer in the SOC – By the end of 2026, AI will break into Tier-1 SOC functions including alert triage, correlation, and containment, allowing analysts to focus on strategic threat hunting and tuning. Security teams will gain speed, scale, and precision by embedding autonomous workflows across the entire incident response lifecycle. Josh Taylor, Lead Security Analyst

Breaches

In 2026, we are likely to witness several high-profile breaches where initial access is achieved through the theft and resale of authentication cookies and cloud tokens. This trend is driven by the continued proliferation and professionalization of underground marketplaces, such as Russian Market, that trade in these credentials. Stan Hegt, Manager & Security Specialist

Brand Protection 

Brand Protection Expands the Attack Surface

The attack surface now includes an organization’s brand, its executives, and its online reputation. By 2026, protecting trust beyond the network – across the open web, social platforms and dark web – will become as critical as protecting the network itself. John Grancarich, Chief Strategy Officer

Cyber/Critical Infrastructure

Attacks on Critical Infrastructure Will Accelerate – Nation-state and criminal actors will target energy, healthcare, and transportation systems with cyber-physical impacts, turning outages and disruptions into strategic weapons. Enterprises in these sectors must treat cybersecurity as a safety imperative and plan for worst-case operational scenarios. Josh Taylor, Lead Security Analyst

Cyber

The Line Between APTs and Criminal Gangs Will Disappear – State-backed groups and cybercriminal gangs will blend tactics, share infrastructure, and obscure attribution, creating hybrid threats that defy traditional classifications. Defenders will need to focus on behavior, intent, and impact rather than relying on actor labeling. Josh Taylor, Lead Security Analyst

Cyber

Nation-State Operations Will Expand to Target Commercial Enterprises – Advanced persistent threat actors will increasingly target private-sector companies for economic disruption, IP theft, and espionage aligned with geopolitical goals. Enterprises must adopt nation-state-grade defenses and treat geopolitical risk as part of their cyber threat model. Josh Taylor, Lead Security Analyst

Channel

With the increased complexity of the cybersecurity threat landscape, 2026 will see added emphasis on and reliance upon the channel, specifically the need for Managed Service Providers (MSPs) and Managed Security Service Providers (MSSPs) to meet the needs of resource- and overhead-constrained customers. Expect this segment of the business to grow. Faraz Siraj, Vice President, Global Channels and Alliances

DSPM

Data Becomes the Security Perimeter

Data is often the successful attacker’s prize. In 2026, DSPM will evolve from visibility into real-time enforcement, automatically securing sensitive data no matter where it lives and forming the backbone of Zero Trust architectures. John Grancarich, Chief Strategy Officer

Fraud

A complete end-to-end Fraud-as-a-Service platform. Just as we’ve seen consolidation in the cybersecurity industry, Fraud-as-a-Service operators will consolidate every phase of the fraud chain into a unified platform. The platform will have a fraud “app store”. For example, someone might have a really good spamming engine. Someone else might have lists of potential victims’ email and other details. Another provider might offer money laundering services. Using the platform, a would-be cybercriminal would just sign up, click which features they want, and without any real technical knowledge they’d be able to run any type of scam. It would be like opening up an Etsy store, but for fraudsters. John Wilson, Senior Fellow, Threat Research

Governance

Governments Will Shift From Encouraging AI Innovation to Imposing Guardrails on Corporate Deployment – By 2026, the regulatory narrative around AI will shift from innovation enablement to accountability enforcement, as governments recognize that corporate adoption has outpaced governance. With the novelty of AI worn off and widespread misuse becoming clear, regulators will begin imposing strict requirements for transparency, auditability, and explainability in enterprise AI deployments. Josh Taylor, Lead Security Analyst

State-Sponsored Attacks

Notably, marketplaces that once primarily catered to financially motivated cybercriminals will increasingly attract nation-state actors seeking to purchase initial access rather than develop bespoke intrusion capabilities. This blurring of lines between criminal and state-sponsored activity will make attribution and defense even more complex in the year ahead. Stan Hegt, Manager & Security Specialist

Zero Trust

As organizations accelerate their transition to cloud-centric architectures, many are also implementing “zero trust” models in name only, leaving significant gaps in device and session management. The combination of these flawed implementations and the thriving ecosystem for stolen tokens creates a toxic mix of opportunity and exposure. Stan Hegt, Manager & Security Specialist
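One concrete zero-trust gap described above is that a stolen session cookie or cloud token remains valid when replayed from the attacker's machine. A minimal sketch of a session-binding check follows; the field names and coarse fingerprint are illustrative assumptions (real implementations often bind tokens to TLS channels, device certificates, or richer device signals).

```python
# Minimal sketch: bind a session token to the device context seen at login,
# so a token replayed from elsewhere (the infostealer pattern) is rejected.
# Fingerprint inputs and storage are hypothetical simplifications.
import hashlib

def fingerprint(user_agent: str, client_ip: str) -> str:
    """Coarse device/session fingerprint; real systems use richer signals."""
    return hashlib.sha256(f"{user_agent}|{client_ip}".encode()).hexdigest()

SESSIONS = {}  # token -> fingerprint recorded at login

def record_login(token: str, user_agent: str, client_ip: str) -> None:
    """Store the device context the token was originally issued to."""
    SESSIONS[token] = fingerprint(user_agent, client_ip)

def validate_request(token: str, user_agent: str, client_ip: str) -> bool:
    """Reject a valid token presented from an unfamiliar device context,
    the typical signature of an exfiltrated authentication cookie."""
    expected = SESSIONS.get(token)
    return expected is not None and expected == fingerprint(user_agent, client_ip)
```

A check this naive would break on legitimate IP churn; the design point is that "zero trust in name only" skips exactly this class of continuous session validation, which is what makes stolen tokens so valuable on underground markets.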
