How the AI era has fundamentally altered the cyberthreat landscape

By Matt Lindley, COO and CISO at NINJIO

The AI mania of the past year has been illuminating in many ways. Alongside exciting new technology such as generative AI tools that can produce a vast library of creative content on demand, the dark side of AI has also been on display. Large language models (LLMs) like OpenAI's GPT-4 have a habit of producing falsehoods that users can't distinguish from facts, and the massive data collection required to train generative AI models has subjected their creators to lawsuits. From reliability to transparency, there's no shortage of AI challenges.

Among the most urgent of these challenges is the threat of AI-powered cyberattacks. From AI-driven password cracking to LLM-generated phishing content to rapidly improving deepfake technology that can fool employees more easily than ever, the rise of AI will have sweeping consequences for the state of your company's cybersecurity. And while AI cyberthreats have spurred the creation of AI-based detection solutions and other tools for preventing and mitigating attacks, company leaders shouldn't sit on their hands and wait to see how this digital arms race plays out. The time for action is now.

Cybercriminals are already using the technology to augment one of their most reliable and destructive tactics: social engineering. As we plunge into the AI era, companies need to resist this threat by prioritizing personalized and engaging cybersecurity awareness training (CSAT) across the organization. Only then will their people be up to the challenge.

Assessing the multiplying risks of AI cyberattacks

At a time when cyberattacks are becoming more frequent and financially devastating, AI is a force multiplier that puts companies at even greater risk. Cybercriminals are using AI to develop purpose-built malware, produce more convincing phishing content, and improve their ability to launch other types of social engineering attacks. A 2023 survey found that three-quarters of senior cybersecurity experts have seen attacks rise over the past year, and 85 percent of them attribute the increase to bad actors using generative AI. Nearly half of these experts believe AI will increase their organization's vulnerability to attacks.

There are many other indicators that AI cyberthreats are on the rise, such as dark web discussions about using LLMs to launch social engineering attacks and the deployment of AI to craft more effective phishing messages at scale. It makes sense that the risk of undetectable phishing attacks is one of the top three concerns cited by cybersecurity experts: IBM reports that phishing is the most common initial attack vector and one of the most financially harmful, costing an average of $4.76 million per breach. Even pre-ChatGPT versions of LLMs have proven better than humans at composing phishing messages.

While itā€™s impossible to know exactly what form future AI cyberthreats will take, itā€™s already clear that cybercriminals will use the technology to ramp up social engineering attacks. This is why CSAT needs to be a core strategic focus as the AI revolution gains momentum.

Hackers use AI for psychological manipulation

The latest Verizon Data Breach Investigations Report found that nearly three-quarters of breaches involve a human element. Social engineering attacks exploit several psychological vulnerabilities: fear, obedience, greed, opportunity, sociability, urgency, and curiosity. Cybercriminals are acutely aware of how these vulnerabilities can be leveraged to deceive employees and manipulate their behavior, and AI has the potential to make their psychological tactics even more effective.

One of the most potent weapons in cybercriminals' psychological arsenal is fear. For example, many phishing scams attempt to frighten victims by threatening them with severe consequences if they fail to follow orders. Cybercriminals say they will leak sensitive information, disrupt a company's operations, or launch wider attacks if an employee fails to provide account credentials, transfer funds, or take some other illicit action. But cybercriminals also take advantage of employees' fear through deception. This is why so many phishing attacks involve the impersonation of IRS agents, law enforcement officials, and other authority figures (fear and obedience are often in play at the same time).

Beyond the fact that AI can make phony "official" communications sound more believable and intimidating, deepfake technology enables cybercriminals to adopt even more devious ploys, such as impersonating loved ones or trusted contacts to convince victims that fake emergencies are taking place. Imagine receiving a hyper-realistic call from a family member in distress or a colleague who needs immediate assistance. Employees are already susceptible to psychological manipulation, and AI will make this problem even worse.

Building human intelligence to meet the threat

As cybercriminals continue using AI to make social engineering attacks harder to spot, it has never been more important for companies to adopt a robust cybersecurity awareness training platform that builds the human intelligence they need to stay safe. A key advantage of CSAT is adaptability: employees can be trained to detect new cyberthreats like LLM-driven phishing attacks as they emerge. CSAT can also be personalized to account for each employee's specific psychological risk factors, behavioral patterns, and learning styles.

A recent McKinsey survey found that just 38 percent of companies are mitigating the cybersecurity risks posed by AI. This is an alarmingly low proportion, and companies shouldn't wait to suffer a crippling breach before taking AI cyberthreats seriously. Now is a particularly good time to consider CSAT, as employees recognize that workplace development and education will be essential in the AI era.

Just as employees must adapt to the changing economy, CISOs and other company leaders have to adjust their cybersecurity strategies in response to an ever-shifting cyberthreat landscape. Cybercriminals will never stop inventing new ways to harness technology like AI to manipulate employees and infiltrate companies – from AI password cracking (stolen credentials are second only to phishing in attack vector frequency) to AI-enabled social engineering schemes. Your CSAT program has to evolve to stay one step ahead of these attacks.

This means ensuring that CSAT content is personalized, engaging, and relevant; identifying employees' biggest psychological vulnerabilities; and prioritizing accountability with phishing tests and other assessments. When companies build CSAT programs with these features, they will be equipped to defend themselves in the AI era.
