Earlier this month, the popular Hollywood actor Tom Cruise was trending on social media, not because of a new film, but because of viral TikTok videos that had users questioning their authenticity.
In one of the clips, Cruise performs a magic trick with a coin, ending with the line “It’s all the real thing”; in others he can be seen golfing or tripping over a carpet.
While this gimmick may be amusing at first viewing, the use of deepfakes as a means to defraud unsuspecting victims has become an alarming trend since the technology’s emergence in late 2017. In one well-known case, fraudsters impersonated a trusted business partner and manipulated a CEO into transferring $243,000 to the scammers’ account.
So, alongside the levity, these entertainment videos of a deepfaked Tom Cruise delivered an additional benefit: raising awareness around trust and misinformation. Indeed, the creator of the videos said in a recent interview that he made them in order to ‘raise awareness to the continued evolution of the technology that can create incredibly realistic fake videos of people’.
In this blog, we look at the rise of deepfakes and how businesses and consumers alike can protect themselves.
What are deepfakes?
Deepfakes are sophisticated forgeries of images, videos or audio recordings that can be difficult to detect. They are made to look and sound authentic using deep learning technology and AI algorithms.
Deepfakes first came into prominence in 2018, when a developer adapted AI techniques to create software that could swap one person’s face for another. The technology quickly grew in popularity, and there are now a number of applications that let users substitute their face for that of a celebrity using just a single photo.
It’s worth noting that sophisticated deepfaked videos are very difficult to create. The creator of the Tom Cruise videos, for example, spent over two months training an AI programme, feeding it a huge number of images of the actor to produce a realistic-looking replica.
Deepfakes are currently most widespread on social media, where users tend to glance quickly at an image or video and rarely stop to question whether what they’ve seen is real. Experts say deepfake videos are more likely to look realistic on a small screen, such as a smartphone or tablet, than on a TV or larger display.
While deepfakes are still largely restricted to jokes and pranks, fraudsters have begun to wise up and exploit the technology for malicious purposes. For example, synthetic voice impersonation has been used to defraud companies and business leaders, tricking them into giving away sensitive information or transferring money. Fraudsters are also turning to synthetic identities, built with deepfaked images and videos, to open fake accounts with financial institutions. According to McKinsey, synthetic identity fraud is already the fastest-growing type of financial crime in the U.S.
How to protect your business from deepfake fraud
Thanks to this new level of sophistication, deepfakes can bypass traditional anti-fraud measures, which fail to distinguish between what is real and what isn’t. With these attacks continuing to grow in frequency, it’s more important than ever for businesses to take precautionary measures.
When it comes to financial services, businesses need to ensure they have strong, secure authentication processes in place for customers to avoid becoming victims. Furthermore, secure onboarding processes with biometric verification, such as facial recognition, can help reduce the number of fake accounts opened using synthetic identities.
The liveness detection capabilities of biometrics mean the system can detect whether a face or fingerprint is real or fake, using algorithms that analyse data collected from biometric scanners and readers. Combining several biometric modalities, such as face, voice and iris, and even adding a form of behavioural biometrics, brings an inherent element of identification to the authentication process.
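As a rough illustration of how combining modalities strengthens authentication, here is a minimal sketch of score-level fusion. The match scores, weights and threshold are entirely hypothetical, not drawn from any particular vendor’s system; real deployments calibrate these per modality and per device.

```python
# Minimal sketch of score-level fusion across biometric modalities.
# Scores, weights and the acceptance threshold are illustrative only.

def fuse_scores(scores, weights):
    """Weighted average of per-modality match scores in [0, 1]."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

def authenticate(scores, weights, threshold=0.8):
    """Accept only if the fused score clears the threshold."""
    return fuse_scores(scores, weights) >= threshold

# Example: the face matches strongly, but the voice sample is weak --
# fusion lets one spoofed modality drag the combined score down.
scores = {"face": 0.95, "voice": 0.40, "behaviour": 0.70}
weights = {"face": 0.5, "voice": 0.3, "behaviour": 0.2}
print(authenticate(scores, weights))
```

The design point is that an attacker with a convincing deepfake of one modality still has to defeat the others, since no single score decides the outcome on its own.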
Overall, deepfake fraud can be combated by harnessing the very AI and deep learning techniques that make the technology so convincing. Algorithms are currently being trained to recognise minor inconsistencies, such as unexpected shadows, too much glare on glasses, or too much or too little blinking, and major technology companies such as Facebook and Microsoft have committed to fighting this type of fraud. In fact, Microsoft recently announced a tool that analyses photos and videos and gives a confidence score on whether the content has been artificially created.
Hopefully, the TikTok videos of a deepfaked Tom Cruise, while amusing for many, served the purpose they were created for: raising awareness of synthetic media and the misuse of technology.
What do you think of deepfake technology and its (mis)uses? Let us know in the comments below or by tweeting us @ThalesDigiSec.