Will AI augment or annex cybersecurity jobs?


By Jamal Elmellas, Chief Operating Officer, Focus-on-Security

Generative AI is expected to impact 60% of jobs in advanced economies such as the UK, according to the International Monetary Fund (IMF). Half of those jobs will gain from enhanced productivity, while in the other half AI will take over tasks previously performed by humans, lowering labour demand, wages and hiring. It’s also proving to be a catalyst for transformation, with the 2023 Global Trends in AI report finding that 69% of organisations have at least one AI project underway. At the same time, it’s hugely disruptive and likely to change the business on a human level too.

Just about everybody with a background in IT has experimented with one of the large language models (LLMs), such as OpenAI’s ChatGPT, Google’s PaLM and Gemini, or Meta’s LLaMA. However, only 28% use them in a work capacity today, finds the Generative AI Snapshot series from Salesforce, although a further 32% plan to do so in the near future. There’s a great deal of excitement over the technology’s capacity to utilise data to augment communication in IT, sales and marketing roles, but how might it impact cybersecurity?

Where will AI help?

According to the AI Cyber 2024: Is the Cybersecurity Profession Ready? study by ISC2, AI is most likely to take over the analysis of user behaviour patterns (81%), the automation of repetitive tasks (75%), the monitoring of network traffic and malware (71%), the prediction of areas of weakness (62%) and the detection and blocking of threats (62%). In other words, it will be applied to the most time-consuming and mundane elements of the job and, while it may annex these particular tasks, this promises to free up skilled personnel to apply their human intuition to more demanding and rewarding activities.
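To make the first of those tasks concrete, below is a minimal sketch of what automated user-behaviour analysis can look like, using scikit-learn’s IsolationForest to flag anomalous sessions. The feature set and figures are purely hypothetical illustrations, not drawn from the ISC2 study; a real deployment would train on rich telemetry from a SIEM or identity provider.

```python
# Minimal, illustrative sketch only: flagging anomalous user sessions with an
# unsupervised model. All feature names and values here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one user session: [logins_per_hour, mb_downloaded, failed_auths]
baseline_sessions = np.array([
    [3, 120, 0],
    [2, 80, 1],
    [4, 150, 0],
    [3, 95, 0],
    [2, 110, 1],
])

# Learn what "normal" looks like from historical sessions.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(baseline_sessions)

# Score new sessions: predict() returns -1 for anomalous, 1 for normal.
new_sessions = np.array([
    [3, 100, 0],     # looks like routine activity
    [40, 9000, 12],  # bulk download plus many failed logins
])
for session, label in zip(new_sessions, detector.predict(new_sessions)):
    status = "ANOMALY - escalate to an analyst" if label == -1 else "ok"
    print(session, status)
```

The point of the sketch is the division of labour it implies: the model does the repetitive scoring, while the human analyst handles only the escalated anomalies.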

In fact, while 56% believe AI will make parts of their jobs obsolete, this is not seen as a negative, with 82% believing it will improve job efficiency. This, in turn, could help to alleviate the workforce gap, which the same industry body estimates currently stands at almost four million. That deficit is placing cybersecurity professionals under tremendous strain, reducing their ability to perform critical, careful risk assessment and to remain agile. The ISC2 Cybersecurity Workforce Study 2023 found that, due to skills shortages, 50% of respondents did not have enough time to conduct proper risk assessments and carry out risk management, 45% said shortages led to oversights in process and procedure, 38% to misconfigured systems and 38% to tardy patching of critical systems.

However, while GenAI has the power to alleviate these stresses and strains, the Snapshot found 73% of the general workforce believe it will also introduce new security risks in-house, from threats to data integrity, to a lack of employee skills in this area, to the inability of GenAI to integrate with the existing tech stack, and the lack of AI data strategies. This demonstrates a clear need for better governance and guardrails, with the ISC2 survey also unearthing concerns over the lack of regulation, ethical application, privacy and the risk of data poisoning.

Only just over a quarter (27%) of those in the ISC2 AI survey said their organisation had a formal policy in place to govern AI use, and only 15% had a policy covering securing and deploying the technology. This potentially represents an interesting opportunity for the sector, as security teams could take the lead in deployments. A host of standards and guidance has already been issued that could assist in this respect, such as ISO/IEC 22989:2022, ISO/IEC 23053:2022, ISO/IEC 23894:2023 and ISO/IEC 42001:2023, as well as NIST’s AI Risk Management Framework.

It’s also worth mentioning that AI is likely to drive an escalation in the sophistication, velocity and volume of attacks. Over half (54%) of those questioned for the ISC2 AI report said they’d seen an increase in cyber attacks over the past six months, and 13% said they could tell these were AI-generated, indicating that worst fears are already being realised. Given the continual arms race between attacker and defender, this lends some urgency to proceedings.

With regard to timescales, the ISC2 AI study found 88% believe AI will significantly impact their job over the next two years. Yet, as of today, 41% said they have minimal or no expertise in securing AI and machine learning technology, which could spell a steep learning curve.

To move adoption forward, security teams therefore need to conduct a skills gap analysis and focus on upskilling in AI and machine learning technologies. Once equipped with this understanding, cybersecurity professionals can provide the security piece in working parties charged with implementing the technology, helping to guard the organisation against threats and to update acceptable use policies to cover ethical use.

As to whether AI will augment or annex job roles, the IMF claims that only in the most extreme cases is AI expected to make jobs disappear. What is certain is that it will bring new ways of working, new threats and new opportunities, making it imperative that we get to grips with the technology today. Ignoring it, which 17% admitted to doing in the ISC2 AI report, banning it (12%) or not knowing what the organisation is doing about it (10%) is not an option. AI is here to stay, making this an adapt-or-die moment for the business.
