
Artificial intelligence has evolved from a revolutionary breakthrough into a powerful tool in the hands of cybercriminals. What was once futuristic speculation is now materializing in real time: attackers are using AI to craft more realistic phishing emails, customize social engineering campaigns with uncanny accuracy, and produce deepfake audio and video messages that can fool even the most astute security analyst.
The Emergence of AI-Augmented Social Engineering
Social engineering has long been the backbone of cybercrime. Historically, phishing emails and basic impersonation techniques sufficed. AI has changed the game, equipping attackers with bots that can create convincing personas complete with fabricated histories and credentials. AI can draft phishing messages with flawless grammar tailored to regional dialects, harvest personal and business information through open-source intelligence (OSINT), and manipulate targets with dynamic, context-aware conversations that mimic real-life communication. This level of personalization has produced hyper-realistic social engineering.
In ongoing experiments run since 2023 to evaluate AI’s effectiveness, AI-generated phishing has shown remarkable progress: this year it proved 24% more effective than human-crafted phishing lures, compared with being 31% less effective just two years ago. This reversal signals AI’s growing edge in social engineering and marks a pivotal moment in the arms race between attackers and defenders.
AI-Driven Phishing Campaigns
AI has removed the guesswork from phishing and replaced it with calculated precision. Phishing today is no longer about deceiving everyone; it’s about deceiving the right person, at the right time, with the message they are least likely to suspect. With AI in the criminal toolbox, attackers now run phishing campaigns that are eerily realistic.
Contemporary phishing platforms also automate the process end-to-end: they craft personalized emails that mirror authentic communication patterns, build fraudulent websites that perfectly replicate trusted sites, and manage victim interactions through AI-based chatbots that hold human-like conversations. This sophistication makes phishing highly personalized. And because AI systems continuously learn from interaction patterns, attackers can optimize their techniques at hair-raising speed. As hackers witness the efficiency and outcomes of AI-powered attacks, more phishing services are bound to convert to AI, rendering conventional security filter rules and keyword lists progressively obsolete.
Deepfakes: The New Face of Deception
The most frightening trend may be the emergence of AI-created deepfake audio and video. With just a few seconds of recorded voice and a single still image, attackers can create convincing multimedia content that impersonates executives, co-workers, or family members. Deepfakes are already being used to produce voicemails that sound like your CEO, to join live Zoom meetings where AI overlays a cloned face on a participant in real time, and to send brief video clips in chat apps designed to prompt instant action. The technology is available to anyone and ranges from dirt cheap to free. The implications for fraud, impersonation, and disinformation are staggering.
AI-to-AI Attacks and Disinformation
Algorithms are now fighting the battle for influence. Attackers deploy armies of generative AI bots to flood digital platforms with precisely tailored disinformation, including fake news stories, manipulated social media content, and AI-generated video footage, deliberately steering public perception at scale. These are anything but naive propaganda: the campaigns are algorithmically engineered to press emotional hot buttons, political biases, and cultural fault lines.
Defenders strike back with fact-checking algorithms and network-analysis software, but adversarial AI adapts in real time, contaminating training data and conducting subtle propaganda campaigns. This cat-and-mouse game dissolves the distinction between fact and fiction, turning every piece of content into a potential weapon and making it harder than ever to separate reality from deception.
The Evolving Cybersecurity Environment
The advent of AI-powered attacks poses an acute threat to cybersecurity. Human instincts remain essential, yet they must be complemented by AI-based defenses that can keep pace with the tempo of adversarial innovation. The fight is no longer human vs. human but AI vs. AI. The evolving landscape of AI-driven threats demands a comprehensive response, including:
• Behavioral anomaly detection: Using AI-driven network monitoring to spot subtle changes in user activity and raise real-time alerts on anomalies that signal intrusions or insider threats.
• Adaptive phishing protection: Deploying machine-learning filters that improve continuously, learning the hallmarks of new attacks and blocking them before they reach users or exploit vulnerabilities.
• Continuous awareness training: Empowering employees by simulating real-life scams and hyper-personalized phishing attacks. This proactive approach sharpens awareness and enables effective human risk management, treating users as empowered guardians who can identify and fend off cyber deception.
• Real-time deepfake analysis: Embedding technology that scans facial micro-expressions, voice anomalies, and digital metadata to detect fabricated content in real time, preserving trust on phone calls, in meetings, and in shared media within threat-prone environments.
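To make the behavioral-anomaly idea above concrete, here is a minimal, hypothetical sketch: a simple z-score check over per-user daily login counts stands in for the far richer models a production AI monitoring platform would use. The function name, data, and threshold are illustrative assumptions, not any vendor’s implementation.

```python
from statistics import mean, stdev

def anomaly_scores(baseline, observed, threshold=3.0):
    """Flag users whose current activity deviates sharply from their
    historical baseline (a toy stand-in for AI-driven behavioral monitoring)."""
    flagged = {}
    for user, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        # z-score: how many standard deviations today's count sits from normal
        z = (observed[user] - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:  # common rule of thumb: >3 standard deviations
            flagged[user] = round(z, 1)
    return flagged

# Hypothetical daily login counts over two weeks, then today's counts.
baseline = {
    "alice": [4, 5, 5, 6, 4, 5, 5, 6, 4, 5, 5, 6, 4, 5],
    "bob":   [2, 3, 2, 3, 2, 3, 2, 3, 2, 3, 2, 3, 2, 3],
}
observed = {"alice": 5, "bob": 40}  # bob suddenly logs in 40 times

print(anomaly_scores(baseline, observed))  # only bob is flagged
```

A real deployment would model many signals at once (login times, data volumes, device fingerprints) and learn each user’s baseline automatically, but the principle is the same: alert on behavior that departs sharply from an established norm.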
AI-powered attacks are no longer an abstract possibility; they have surged to the forefront of modern cybercrime. Sophisticated attackers now use hyper-realistic social engineering, real-time deepfakes, and AI-vs.-AI adversarial operations to evade traditional defenses. Organizations must respond in kind by embedding sophisticated AI tools, revising policies, and bolstering human alertness. The era of AI-driven cybercrime is here, and the only way to stay ahead is with a mix of complementary human and AI defenses.
___
About the Author
Perry Carpenter is Chief Human Risk Management Strategist at KnowBe4, the world-renowned cybersecurity platform that comprehensively addresses human risk management. His latest book, “FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions” [Wiley: Oct 2024], explores AI’s role in deception. With over two decades in cybersecurity focusing on how cybercriminals exploit human behavior, Perry hosts the award-winning podcasts 8th Layer Insights and Digital Folklore.
X: @PerryCarpenter
LinkedIn: https://www.linkedin.com/in/perrycarpenter/