AI Cybersecurity Risks: Equip Your Employees to Think Like a Hacker

By Eric Jacksch, CPP, CISM, CISSP, ELB Learning Cybersecurity Consultant

The rapid expansion of AI has graced us with what seems like the gift that keeps on giving. We’ve been able to turn our words into works of art, effortlessly produce content, and automate mundane tasks.

We've also learned that some things are too good to be true. According to a report by CyberCatch, five key risks stem from AI: shadow AI, security, bias, inaccuracy, and hallucination. Of these, security is the most significant because of its potential for cascading consequences: a small security issue, or a collection of small issues, can quickly escalate into a major security or privacy breach. And just as legitimate businesses seek to take advantage of AI's benefits, criminals will exploit the surge in interest by compromising AI-related websites, creating fraudulent sites, distributing malware, and more.

Employees must understand that while AI might seem like magic, from a security and privacy perspective, it’s just another way of processing data. And data – especially private data – is extremely attractive to hackers.

Let's take a closer look at a few common AI-related cybersecurity risks your employees may not be aware of, along with ways you can arm them to thwart cyberattacks.

Data and Privacy Concerns

As individuals bring AI into their daily workflows, they often share more information than they realize. Adding AI plug-ins to browsers and other applications increases the potential for data exposure and puts intellectual property rights at risk. Employees may not be aware of how much information is being sent, or where it is going.

Training a machine learning model requires a large amount of data, and the applicable terms of use may allow AI companies to leverage information provided by users to update or train new models. This, in turn, could result in confidential personal or business information being retained much longer than expected.

In addition, the growing popularity of open-source AI projects and APIs makes it increasingly easy for criminals to build their own AI websites or applications and harvest all of the information sent to them.

These scenarios involve exposing company data to third parties, and we're already seeing some of the repercussions. The good news is that they look a lot like the privacy and data risks we're already used to (such as those posed by third-party file-sharing services), so the same policy and governance approaches apply.

OpenAI recently took ChatGPT offline temporarily after discovering a bug in an open-source library used by the chatbot. The bug allowed some users to see content from other active users' chat histories and exposed some ChatGPT Plus subscribers' payment information. And a recent report from Group-IB revealed that more than 101,000 compromised ChatGPT login credentials were for sale on dark web marketplaces.

Targets and Tools

Malware and ransomware have loomed over IT systems for years, and now AI platforms are both a target and a tool for criminals.

AI can be used to automate and improve attacks such as phishing and malware distribution. Previously, many fraudulent emails were easy to spot because of grammatical, spelling, and stylistic mistakes. Now AI can be leveraged to create fake websites, emails, social media posts, and more that lure users into providing confidential information (including login credentials) or downloading hostile content. Generative AI makes these attacks far more believable by offering flawless language, context, and personalization, removing many of the telltale signs of phishing.

The rise of “deepfakes,” media digitally manipulated to convincingly replace one person's likeness with another's, is a major concern as misinformation and identity theft grow. Hackers can manipulate voice, video, or images in hopes of catching users in their traps. Deepfakes will be used to impersonate coworkers, obtain confidential information, and request password resets, and they can also be put to much darker uses, such as blackmail.

AI advancements make it harder to separate the real from the fake, and they also help criminals scale. Believable impersonation increases their chances of exploiting security vulnerabilities, especially through phishing, malware, and social engineering in general.

Training Your Employees to Think Like a Hacker

Your employees are the first line of defense against cyberattacks. As with any other technology, policies, guidelines, and training need to be updated regularly and aligned with employee roles.

Employees need cybersecurity awareness training that teaches them how to recognize and react to threats. This should include a mix of online training and in-person or video sessions with a cybersecurity expert to build rapport and allow questions to be asked live. Building continuous awareness through email, Slack, or Teams updates keeps employees informed about ongoing and evolving cybersecurity concerns.

One of the most effective ways to keep your employees sharp on cybersecurity threats is to train them to think like a hacker. Immersive training technologies help organizations better manage cybersecurity risk by putting employees directly into the experience, where they learn to recognize and report suspicious situations.

HackOps, created by CyberCatch and ELB Learning, is an immersive cybersecurity risk mitigation solution. The gamified, virtual-reality course emulates the behavior of real hackers and common cyberattacks.

Employees assume the identity of one of the “bad guys.” They learn tactics, techniques, and procedures to break through network firewalls, steal or alter data, and install malware and ransomware.

Passively reading documents about cybersecurity isn't as effective. According to Ebbinghaus's forgetting curve, people forget about 50% of new information within a day and roughly 90% within a week when it isn't put to use. When learners are tasked with crafting phishing email campaigns or installing malware to steal data in a simulated environment, they are far better prepared to protect real information themselves.

AI isn’t going away – it’s only going to become more powerful and prevalent. And, along with it comes an increase in security and privacy risks. Fostering a culture of security when using AI tools must be a priority today. Get your employees thinking like a hacker so they can spot and report threats to your business before it’s too late.
