Approaching Agentic AI With a Security-First Mindset

By James Hendergart, Senior Director – Technical Research, F5

Agentic artificial intelligence (AI) will be the next turning point in AI development. Autonomous systems that carry out tasks and make decisions without human intervention will accelerate productivity gains and surface insights beyond human reach. With the advent of agentic AI, however, comes the challenge of determining an appropriate level of access so an agent can run as intended without violating security policy.

Defining security context

Security context governs the parameters around privilege and access control, defining who can do what with a given set of data. Permissions to create, change, or delete data across enterprise systems are defined by the business requirements assigned to specific roles. When employees use software to fulfil a certain role, that software must have a security context to operate within; and that security context is implemented using accounts. Within software, user accounts are connected to an employee’s identity, whereas service accounts are created to manipulate corporate data independently from any specific user’s security level.

Determining least-privileged access to various corporate resources is straightforward for users based on their roles, but service accounts, while still constrained, are configured with most-privileged access so they can perform every action a process requires. For example, a user may have ‘edit’ access to the set of records corresponding to their assigned customer accounts and ‘read only’ access to all other customer accounts. Service accounts, however, need ‘edit’ access to all accounts to carry out their intended function.
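The contrast above can be sketched in a few lines. This is an illustrative model only, with invented names and a deliberately simplified permission scheme, not the API of any real identity product: users get ‘edit’ on their assigned accounts and ‘read’ elsewhere, while the service account gets ‘edit’ everywhere.

```python
from dataclasses import dataclass, field

@dataclass
class Principal:
    name: str
    is_service: bool = False
    assigned_accounts: set = field(default_factory=set)

def permission_for(principal: Principal, account_id: str) -> str:
    """Resolve a principal's permission on one customer account."""
    if principal.is_service:
        return "edit"  # most-privileged: the process must act across all accounts
    if account_id in principal.assigned_accounts:
        return "edit"  # least-privileged: only the user's own accounts
    return "read"

alice = Principal("alice", assigned_accounts={"acct-1"})
svc = Principal("onboarding-svc", is_service=True)

print(permission_for(alice, "acct-1"))  # edit
print(permission_for(alice, "acct-2"))  # read
print(permission_for(svc, "acct-2"))    # edit
```

The asymmetry is the whole point: the service account's breadth is a business requirement of the process, not of any one user.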

Challenges determining security context

When using agentic AI, user activity and corporate business process activity interconnect. Corporate business processes are designed to deliver a result promptly for all customers, employees, and instances. To operate, service accounts commonly need higher permissions than most individual users, yet an individual user, or a team of users, is responsible for initiating, monitoring, and overseeing those processes.

For example, an organisation employs an agentic AI assistant to support the onboarding of new employees. Each new employee has a specific role and, based on that role, the AI needs to access corporate systems at a certain level. The agentic AI assistant runs in the context of the HR personnel doing the onboarding. So, what happens if the AI agent needs to provide a new employee’s user account with permissions which exceed those of the HR team members?

Security violations can occur if the agent’s permissions are based on the process it carries out rather than on the access level of the employee using it. If software acting on a user’s behalf grants access to data beyond that user’s privilege level, a security breach results. On the other hand, if the action is blocked for lack of permissions, the necessary task cannot be completed.
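A minimal guard for this failure mode might look as follows. The role and permission names are hypothetical, chosen to mirror the HR onboarding scenario: before the agent executes an action in the user’s context, it compares the permission the action needs against what the initiating user actually holds.

```python
# Hypothetical permission table for the initiating user's role.
USER_PERMS = {
    "hr_member": {"read_employee", "create_employee"},
}

def agent_may_run(initiating_role: str, required_perm: str) -> bool:
    """True only if the action stays within the initiating user's privileges."""
    return required_perm in USER_PERMS.get(initiating_role, set())

print(agent_may_run("hr_member", "create_employee"))    # True
print(agent_may_run("hr_member", "grant_admin_rights"))  # False
```

When the check fails, the agent must not silently proceed in the user’s context; the action either stops or is handed off, which is exactly the branching technique described next.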

Preventing security breaches

The problem of determining the correct security context can be solved by branching the process. With employees, a sub-task can be assigned to someone who has the authority to grant permissions at the required level; once approved, the process proceeds to the next step. This technique of segmenting actions by authorisation level should also be applied to agentic solutions.
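The branching idea can be sketched as an approval queue. This is a simplified illustration, with invented step and permission names: steps the initiating user is entitled to run execute directly, while steps needing elevated privilege are routed to a queue for someone with the authority to approve them.

```python
def run_process(steps, user_perms, approval_queue):
    """Execute steps within the user's privileges; branch the rest for approval."""
    for step, required_perm in steps:
        if required_perm in user_perms:
            print(f"run: {step}")
        else:
            # Branch: segment this action off to an approver with the
            # authority to grant the elevated permission.
            approval_queue.append((step, required_perm))

queue = []
steps = [
    ("create_mailbox", "create_employee"),
    ("grant_admin_rights", "admin"),
]
run_process(steps, {"create_employee"}, queue)
# queue now holds only the elevated step, awaiting an authorised approver
```

The agent never performs the elevated action itself in the user’s context; it hands that segment to a principal whose security context matches the requirement.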

Deploying agentic AI may increase the likelihood that software used to initiate a process in the context of a user ends up gaining elevated privileges when switching to the context of an agent running with a service account. In this case, it is important for the agentic system to operate in such a way that elevated privilege and data access does not violate security policy.

The design process is fundamental

Designing AI agents with proper security context starts with answering this question: is the process being automated meant to be a personal assistant deployed to one or more users or is this a generalised business process which needs to run at scale on behalf of the business?

If it is personal, focus the process map on verifying that all actions and data access requirements comply with existing corporate security policy for the targeted users. If any exceptions are found, separate the actions requiring elevated privilege and confirm that access and actions are properly mapped to individuals with the various levels required.

If the process is generalised for the business, examine the service account design to determine whether most-privileged access creates security gaps, as in the HR onboarding example above, and double-check that sensitive data is not exposed to users who lack the requisite privileges.
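That double-check can be made concrete with a redaction step. The field and role names below are hypothetical: even though the service account can read every field, results returned to a user are filtered down to the fields that user’s role is entitled to see.

```python
# Hypothetical field-level visibility per role.
FIELD_VISIBILITY = {
    "hr_member": {"name", "start_date"},
    "hr_admin": {"name", "start_date", "salary"},
}

def redact(record: dict, role: str) -> dict:
    """Strip fields the viewing role is not entitled to see."""
    allowed = FIELD_VISIBILITY.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"name": "J. Doe", "start_date": "2025-01-06", "salary": 90000}
print(redact(record, "hr_member"))  # salary removed for the ordinary HR role
```

Filtering on the way out of the service account keeps its most-privileged reads from leaking into a lower-privileged user’s view.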

Organisations should embed security context in the design of personal and corporate agents. By separating tasks that require elevated privilege and being mindful of whether a user or a service account will execute each action, the risk of unintended security gaps decreases.
