Deepfakes become second most common cybersecurity incident


Deepfakes, in which AI algorithms manipulate a person’s voice, image, or video to mimic the original, have emerged as the second most common cybersecurity threat in the UK, closely trailing malware.

An alarming 32% of businesses in Britain have fallen victim to such incidents within the past year, according to a recent online survey conducted by the ISMS web portal.

The survey, which collected responses from over 500 participants across various sectors including technology, manufacturing, education, energy, and healthcare, shed light on the growing prevalence of deepfake attacks.

One particularly concerning trend is the infiltration of deepfakes into the corporate sector through business email. Hackers are using manipulated voice and video files to deceive C-level executives into authorizing fraudulent money transfers. As a result, calls are mounting for government intervention to mandate cybersecurity awareness training for employees at both public and private companies.

Additionally, adequate budget allocation is deemed essential for safeguarding IT assets and investing in technology capable of detecting and mitigating deepfake threats, giving potential victims time to recognize an attack and respond.

Despite the looming threat posed by deepfakes, there exist discernible signs to help identify manipulated content:

a. Discrepancies in skin texture and body proportions often expose deepfakes.
b. Anomalies such as unusual blinking patterns and irregular shadows around the eyes indicate potential image or video manipulation.
c. Non-realistic lip movements and excessive glare in eyewear are common red flags.
d. Unnatural hair styling or inconsistencies between hair and facial features may signal tampering.
e. Aberrations in voice tone, fragmented speech, and irregular word breaks hint at AI-generated content.

It’s worth noting that while premium software tools are available to detect deepfake content, they come at a cost. Meanwhile, governments worldwide are preparing legislation that would require companies generating deepfakes to watermark their creations, spanning images, videos, and audio files. Such measures aim to enhance accountability and curb the proliferation of deceptive digital content.

Naveen Goud
Naveen Goud is a writer at Cybersecurity Insiders covering topics such as Mergers & Acquisitions, Startups, Cyber Attacks, Cloud Security and Mobile Security
