The Growing Threat of AI-Powered Social Engineering Scams

By Tamas Kadar, CEO, SEON

A seasoned CISO receives a routine email from IT about a suspicious login. It bears the company’s logo, references an internal ticket and strikes the right tone of casual urgency. At first glance, it looks legitimate. But it’s not.

The message was generated by AI, trained on public content and engineered to mimic internal communications. It is a highly personalized, eerily convincing and dangerously compelling scam. While fictional, this scenario mirrors real-world attacks that now happen far more often than many realize: deepfake-enabled fraud attempts have surged by more than 2,100% since 2022. AI is now embedded in everything from customer service to onboarding flows, and the same technology that drives that innovation is being weaponized for deception.

For fraud leaders, CISOs and compliance officers, this isn’t just a security issue; it’s a growing operational and reputational risk. In one striking case last year, a multinational company reportedly lost $25 million to a deepfake video call impersonating its CFO, proof that these threats are unfolding in real time. The incident is hardly unique. Phishing and spoofing now account for nearly one in four cybercrime complaints the FBI receives in the U.S. At the same time, annual losses to social engineering and business email compromise topped $2.9 billion in 2024, contributing to a nationwide fraud total of more than $12.5 billion that year. In the wrong hands, AI becomes the ultimate con artist, scaling psychological manipulation through deepfakes, impersonation and synthetic identities with a precision that feels disturbingly human.

Behind the Curtain: What Makes AI-Powered Social Engineering So Dangerous?

Social engineering exploits the softest target in any security stack: the human element. Traditionally, this form of attack relies on psychological tactics to trick individuals into revealing sensitive data or granting unauthorized access. But with AI, these manipulations have become faster, more believable and more dangerous.

Today’s fraudsters aren’t sending typo-ridden phishing emails. They’re using AI chatbots that mimic internal tone, generating deepfake audio of executives and creating synthetic identities with credible digital histories. Recent research found that genAI can craft a highly persuasive phishing email in less than five minutes, a task that would have taken attackers hours or even days only a year ago. These scams are increasingly difficult to detect and evolve in real time: bots scrape public data, mimic behavior and adapt quickly, making even savvy professionals vulnerable.

The impact is clear. Business email compromise alone accounts for more than $2.9 billion of those annual losses, deepfake-enabled fraud keeps climbing on the back of accessible AI tools and stolen data, and legacy systems built for static threats are struggling to keep up. The line between reality and imitation is blurring, and in fraud prevention that ambiguity is a risk no business can afford.

Why Complacency Is Complicity

It’s tempting to dismiss AI-powered scams as edge cases, but attacks are growing exponentially in number and evolving faster than most organizations can adapt.

The consequences for individuals are deeply personal: financial loss, emotional fallout and lasting distrust. For businesses, the stakes are broader: customer confidence, brand reputation, internal morale and regulatory exposure. When fraud succeeds, it signals a failure of protection to internal teams and to the market alike.

Many of these attacks now bypass legacy fraud systems entirely. Static rules and rigid compliance workflows weren’t built to counter adaptive, AI-powered threats, and bad actors know it. But outdated technology is only half the issue; the greater risk lies in assuming yesterday’s safeguards are still enough. In an increasingly AI-driven economy, that kind of complacency isn’t just risky, it’s irresponsible.

Defensive Intelligence: Tools to Outthink the Threat

As AI reshapes the tactics of deception, it must also redefine the architecture of defense. Today’s threat landscape demands tools that anticipate, adapt and learn in real time. Innovative fraud prevention doesn’t mean catching every single scam. It means seeing them early, minimizing damage and staying one step ahead of adversaries who never stop adapting.

Digital footprint analysis examines the broader web of traces users leave behind online (email, phone, IP and domain patterns) to assess identity credibility and flag fraud risks missed by traditional checks. By pairing these signals with behavioral indicators (like unusual login times or location mismatches) and device intelligence (such as signs of spoofed devices or mismatched IDs), organizations gain a fuller picture of who’s really behind the screen. A multi-layered approach helps detect manipulation early, often at the first sign a user behaves out of character.
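To make that layering concrete, here is a minimal Python sketch of how footprint, behavioral and device signals might be combined into a single risk score. Every field name, weight and threshold below is an illustrative assumption, not any vendor’s actual model.

```python
# Minimal sketch of multi-layered risk scoring combining digital-footprint,
# behavioral and device signals. All signals, weights and thresholds here
# are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class LoginEvent:
    email_domain_age_days: int   # footprint: how long the email domain has existed
    social_profiles_found: int   # footprint: linked accounts found for the email
    login_hour: int              # behavior: hour of day (0-23) of this login
    usual_login_hours: range     # behavior: hours this user normally logs in
    country: str                 # behavior: geolocation of the request IP
    usual_country: str
    device_id_seen_before: bool  # device: known fingerprint for this account
    user_agent_spoof_signs: bool # device: inconsistencies suggesting spoofing

def risk_score(e: LoginEvent) -> float:
    """Return a 0-1 risk score; each layer contributes independently."""
    score = 0.0
    # Footprint layer: brand-new domains with no social presence are riskier.
    if e.email_domain_age_days < 30:
        score += 0.25
    if e.social_profiles_found == 0:
        score += 0.15
    # Behavioral layer: out-of-pattern time or location.
    if e.login_hour not in e.usual_login_hours:
        score += 0.15
    if e.country != e.usual_country:
        score += 0.20
    # Device layer: unknown or spoofed devices.
    if not e.device_id_seen_before:
        score += 0.10
    if e.user_agent_spoof_signs:
        score += 0.15
    return min(score, 1.0)

event = LoginEvent(email_domain_age_days=12, social_profiles_found=0,
                   login_hour=3, usual_login_hours=range(8, 19),
                   country="RO", usual_country="US",
                   device_id_seen_before=False, user_agent_spoof_signs=True)
print(f"risk: {risk_score(event):.2f}")  # high score -> step-up verification
```

In practice the weights would be learned rather than hand-set, but the structure illustrates the point: no single layer is decisive, and a user can look legitimate on one axis while failing on two others.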

Most importantly, AI can and must be used as a force for good. Machine learning models can process vast amounts of data to surface subtle, often invisible patterns, flagging suspicious activity and enabling real-time risk scoring. Crucially, these systems can evolve alongside the threats they’re built to counter. In a world where fraud moves at machine speed, only machine intelligence can keep pace and turn the tide.
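As one hedged illustration of that idea, an unsupervised anomaly detector can be trained on historically legitimate activity and then score new sessions as they arrive. The sketch below uses scikit-learn’s IsolationForest on a synthetic, assumed feature set; a production system would draw on far richer signals and retrain continuously.

```python
# Illustrative sketch of ML-based anomaly scoring for session data, using
# scikit-learn's IsolationForest as a stand-in for a production model.
# The features and training data are synthetic assumptions for demo only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: [login_hour, km_from_usual_location,
# requests_per_minute]. Train on historically legitimate traffic.
legit = np.column_stack([
    rng.normal(13, 3, 1000),    # mostly daytime logins
    rng.exponential(5, 1000),   # usually close to home
    rng.normal(2, 0.5, 1000),   # modest request rate
])

model = IsolationForest(n_estimators=100, random_state=0).fit(legit)

# Score new sessions in real time: lower score_samples = more anomalous.
sessions = np.array([
    [14.0, 3.0, 2.1],      # ordinary session
    [3.0, 8200.0, 40.0],   # 3 a.m., far away, scripted-looking burst
])
for s, score in zip(sessions, model.score_samples(sessions)):
    print(s, "->", round(float(score), 3))
```

The key property is the one described above: the model learns what normal looks like from data, so it can flag patterns no analyst thought to write a rule for.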

Tomorrow’s Threat, Today’s Responsibility

Fraud doesn’t wait for regulation or roadmaps: as AI advances, so will its misuse. We’re already seeing scams that blend voice, video and text deepfakes, and fraud rings using real-time manipulation to bypass identity checks.

Defenses must keep pace. That means embracing real-time identity verification and dynamic risk scoring, and building environments where trust is continuously earned rather than assumed. This isn’t a problem for IT teams alone; it’s a strategic priority that spans compliance, operations and leadership. Businesses, regulators and technology providers all share the responsibility of staying ahead.

Full Circle: From Predation to Protection

AI isn’t inherently good or bad; it reflects how we choose to use it. When machines learn to manipulate, everyone becomes a target, regardless of vigilance, expertise or role. But when they’re trained to detect, defend and adapt, we can tip the balance in our favor. As the line between human and machine behavior blurs, preserving trust becomes both a technical and a moral imperative.

The future of fraud prevention won’t be defined by the strongest firewalls but by the smartest strategies that harness intelligence, foster collaboration and anticipate what’s next. The question is no longer whether AI will shape the future of trust. It will. A far more urgent consideration remains: who will shape that AI, and to what end?
