Embracing the benefits of LLMs securely

By Gilad Elyashar, Chief Product Officer, Aqua Security

AI is evolving at a rapid pace, and the uptake of Generative AI (GenAI) is revolutionising the way humans interact with and leverage this technology. GenAI is based on large language models (LLMs) that have demonstrated remarkable capabilities for breaking down barriers between humans and machines – from generating human-like text to powering conversational interfaces and automating complex tasks.

Even though we are at the early stages of LLM adoption, businesses are laying the groundwork to build LLM-powered applications. Initial findings from our customers reveal that one in four is building LLM-powered applications, and around 20% of them are using OpenAI as their LLM. And according to a developer survey by Stack Overflow, 70% of developers are using or planning to use AI tools in their development process.

However, while businesses are strongly driven to embrace LLM adoption, in many cases fear of, or a lack of knowledge about, the evolving attack vectors and AI-powered threats that come with it slows innovation.

The Open Worldwide Application Security Project (OWASP) Top 10 list for LLM applications has driven further awareness around the risks of LLM adoption by highlighting the critical need for security tools and processes to confidently manage the rollout of GenAI technology. Three key areas of focus within the OWASP Top 10 for LLMs include Prompt Injection, Insecure LLM Interaction, and Data Access.

But how do these specifically affect cloud native applications, and what is important to know about these attack vector techniques?

Top three LLM risks identified by the OWASP framework

  1. Prompt Injection – a new but serious attack technique specific to LLMs. Here the attacker crafts inputs designed to mislead or manipulate the model, with the intention of generating unintended or harmful responses. Because the model relies on input prompts to generate outputs, attackers can inject malicious instructions or context alongside legitimate prompts. Prompt injection, if not identified, can lead to unauthorised actions or data breaches, compromising system security and integrity.
  2. Insecure LLM Interaction – LLMs interact with other systems, increasing the risk that their outputs can be leveraged for malicious activities, such as executing unauthorised code or initiating cybersecurity attacks. These threats pose significant risks of data leaks and identity theft, and compromise both security and data integrity.
  3. Data access – LLMs can retain the information they consume, heightening the risk of data leakage when sensitive information is unintentionally exposed or accessed by an unauthorised person through the model’s output. The risk associated with improper data access controls is significant, as it can lead to unauthorised data exposure or breaches, jeopardising both privacy and security. Proper controls are essential to mitigate this risk and ensure sensitive information stored within an LLM is processed and stored securely.
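The first two risks above can be illustrated with a minimal sketch. Assuming a hypothetical chatbot that forwards user input to an LLM, a basic line of defence is to treat both the input and the model's output as untrusted: screen input for common injection phrasings, and escape output before it reaches a browser rather than rendering or executing it. The patterns and function names here are illustrative assumptions, not a production filter.

```python
import re

# Illustrative patterns only -- real injection attempts are far more varied.
INJECTION_PATTERNS = [
    r"ignore (all|previous|the above) instructions",
    r"disregard (your|the) system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs that match known prompt-injection phrasings."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)

def handle_model_output(output: str) -> str:
    """Treat LLM output as untrusted data: never pass it to eval/exec
    or a shell, and escape it before rendering it as HTML."""
    return (output.replace("&", "&amp;")
                  .replace("<", "&lt;")
                  .replace(">", "&gt;"))
```

Pattern matching alone will not stop a determined attacker, but it demonstrates the principle behind the first two OWASP items: validate what goes into the model, and constrain what its output is allowed to do.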

Businesses must be able to confidently navigate the complexities of LLM-based application development and deployment, ensuring compliance with regulatory standards and safeguarding against malicious exploits.

Here are the three key steps organisations must take to secure LLM applications from code to cloud:

1. Discovery phase

It is important to remember that as GenAI makes it simpler to build applications, cybercriminals are seeking the same benefits. For example, AI agents can quickly bring productivity and speed to operations, but this evolution must be coupled with a robust security strategy for managing and monitoring agent-based systems.

It starts by asking some crucial questions about who is using GenAI across the organisation, how it is being used, and for what LLM applications. A thorough assessment is needed here that identifies the various LLM applications, or planned applications, and how they interact across the full lifecycle, from code to cloud. The process involves identifying which microservices in the application use or are backed by LLM-generated code, and assessing the most common vulnerabilities associated with the nature of the application.
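As a starting point for that inventory, a simple sketch could walk a repository and flag source files that import well-known LLM SDKs. The package list and helper name below are assumptions for illustration; a real discovery tool would also cover API endpoints, infrastructure-as-code, and runtime telemetry.

```python
import os
import re

# Common LLM client packages to look for -- extend for your own stack.
LLM_PACKAGES = ("openai", "anthropic", "langchain", "transformers")

def find_llm_usage(root: str) -> dict:
    """Map each Python file under `root` to the LLM packages it imports."""
    pattern = re.compile(
        r"^\s*(?:import|from)\s+(" + "|".join(LLM_PACKAGES) + r")\b",
        re.MULTILINE,
    )
    findings = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                hits = sorted(set(pattern.findall(f.read())))
            if hits:  # only record files that actually touch an LLM SDK
                findings[path] = hits
    return findings
```

The resulting map answers the "who and where" question for code, which can then be cross-referenced with the cloud workloads those services run on.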

Understanding the different kinds of threats, and integrating that understanding into the business strategy, will ensure LLM applications securely empower rather than hinder the business.

2. Protecting against vulnerabilities and threats – in code, misconfiguration, or runtime

Then it is about protecting the application that uses AI across the entire cloud application lifecycle. It is essential to employ advanced code scanning technology to identify and mitigate the unsafe use of LLMs in application code, including unauthorised data access, misconfigurations, and vulnerabilities specific to LLM-powered applications.

By actively monitoring the workloads of LLM-powered applications, organisations can prevent unauthorised actions that LLMs might attempt, such as executing malicious code due to prompt injection attacks.
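One way to realise that runtime control, sketched here under assumptions, is to gate every action an LLM agent requests through an explicit allowlist, so that a prompt-injected instruction to run a shell command is rejected rather than executed. The class name, action names, and dispatcher are hypothetical stand-ins, not any particular product's API.

```python
class ActionBlocked(Exception):
    """Raised when an LLM-requested action is not on the allowlist."""

# Only actions the application explicitly trusts the model to trigger.
ALLOWED_ACTIONS = {"search_docs", "summarise_text", "translate_text"}

def dispatch(action: str, payload: str) -> str:
    """Execute an LLM-requested action only if it is allowlisted.

    Anything else -- e.g. a `run_shell` request injected via a
    malicious prompt -- is blocked and can be logged for review.
    """
    if action not in ALLOWED_ACTIONS:
        raise ActionBlocked(f"LLM requested disallowed action: {action!r}")
    # Dispatch to the real handlers here; a stub stands in for them.
    return f"executed {action}"
```

The key design choice is default-deny: the model can only ever trigger behaviour the application has consciously opted into, which bounds the blast radius of a successful prompt injection.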

3. Implementing guardrails 

Employing specific GenAI assurance policies serves as guardrails for developers of LLM-powered applications. These policies, based on practices from the OWASP Top 10 for LLMs, prevent unsafe usage of LLMs.
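As an illustration of what such a policy might look like as code (the rule identifiers and structure below are assumptions, not a specific product's format), a build can be failed whenever scanner findings violate OWASP-derived guardrail rules:

```python
# Hypothetical guardrail rules, keyed by the id a scanner would report.
ASSURANCE_RULES = {
    "llm-output-not-executed": "Model output must never reach eval/exec or a shell",
    "llm-input-sanitised": "User prompts must pass an injection screen",
    "llm-secrets-isolated": "Credentials must not appear in prompt templates",
}

def evaluate_policy(findings: list) -> tuple:
    """Return (passed, violations) for a list of scanner finding ids.

    Unknown ids are ignored, so new scanner checks do not break
    existing policies; any match against a guardrail rule fails the build.
    """
    violations = [f for f in findings if f in ASSURANCE_RULES]
    return (len(violations) == 0, violations)
```

Wired into CI, a failing policy stops an unsafe LLM integration before it reaches production, which is the practical meaning of a guardrail.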

With GenAI assurance policies enforced, alongside holistic protection across the entire cloud native application lifecycle, businesses and industries can truly embrace the transformative potential of GenAI. New standards and comprehensive protection for LLM-powered applications, from code to cloud, bridge the gap between security requirements and development processes, allowing organisations to fully embrace innovation while mitigating potential risks.

