
AI agents are already inside your enterprise. The question is: who’s governing them? The emergence of agentic AI and Model Context Protocol (MCP) frameworks promises a new era of automation and interoperability. These systems enable AI to access services, execute tasks and interact across platforms, thereby blurring the boundaries between human and non-human digital actors.
Gartner predicts that by 2028, one-third of Generative AI (GenAI) interactions will involve autonomous agents that act, decide and execute independently. The analyst firm noted that agentic AI isn’t just another GenAI feature — it’s a fundamental shift in how software behaves.
MCP is accelerating this scenario by enabling agents to invoke tools and services across platforms. These new ways of working bring real benefits, but organizations are deploying the capabilities faster than they are governing them. Companies must understand agentic AI’s potential enterprise use cases and the urgent need for proactive governance frameworks that keep risk from outpacing innovation.
The governance gap: Why current approaches fail
The population of non-human identities (NHIs) is exploding, and it is already largely ungoverned. Autonomous agents act independently, and MCP frameworks build on that autonomy, enabling cross-platform automation and interoperability among services and large language models (LLMs).
While the technology offers exciting efficiency gains, it also introduces governance, security and confidentiality risks that are not yet fully understood.
AI agents create a state of authorization without oversight. Traditional access controls assume there is a human in the loop making access decisions. Agentic AI inverts this: the agent decides what it needs, then requests (or assumes) access.
MCP frameworks compound this by enabling cross-platform tool invocation with loosely defined permission boundaries. The result: agents accumulate effective permissions far exceeding their intended scope.
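To make that gap concrete, here is a minimal sketch of a scope audit: it computes the permissions an agent can actually exercise through its connected tools and diffs them against what it was provisioned for. The agent, tool and scope names are hypothetical, and the data model is an illustrative assumption rather than any particular MCP implementation.

```python
# Sketch: diff an agent's intended scope against the effective permissions
# it accumulates through connected tools. All names and shapes here are
# illustrative assumptions, not a specific MCP implementation.
from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    scopes: frozenset[str]  # permissions this tool's credential carries

@dataclass
class Agent:
    name: str
    intended_scopes: frozenset[str]
    tools: list[Tool] = field(default_factory=list)

    def effective_scopes(self) -> frozenset[str]:
        # The agent can exercise every permission held by any tool it can invoke.
        out: set[str] = set()
        for tool in self.tools:
            out |= tool.scopes
        return frozenset(out)

def excess_permissions(agent: Agent) -> frozenset[str]:
    """Permissions the agent holds beyond what it was provisioned for."""
    return agent.effective_scopes() - agent.intended_scopes

agent = Agent(
    name="sales-pipeline-agent",
    intended_scopes=frozenset({"crm:read", "crm:update"}),
    tools=[
        Tool("crm-connector", frozenset({"crm:read", "crm:update"})),
        # A broadly scoped connector quietly widens the agent's reach.
        Tool("docs-connector", frozenset({"docs:read", "docs:share-external"})),
    ],
)

if excess := excess_permissions(agent):
    print(f"{agent.name} exceeds intended scope: {sorted(excess)}")
```

Run against a real inventory, an audit like this surfaces exactly the quiet scope creep described above: the agent was provisioned for CRM work but can also share documents externally.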
Creating a governance framework for AI
Traditional identity governance and administration (IGA) was built for humans, not agents. AI agents are part of the growing universe of non-human identities, a category that also includes bots, API keys, service accounts and other credentials that allow machines or software to authenticate, access resources and communicate within a system.
These agents perform tasks a human otherwise would: scheduling meetings, updating sales pipelines, analyzing code repositories and retrieving sensitive documents. However, these NHIs are rarely governed with the same rigor as human identities, and they are proliferating rapidly.
Traditional identity governance wasn’t built for this scale or level of autonomy. Agentic systems can introduce structural weaknesses: permissions between models and tools are often loosely defined, and AI may invoke services without granular oversight. To top it off, many organizations still lack the foundational IGA needed to govern human identities, let alone NHIs.
A governance approach for AI
First and foremost, organizations need to take a careful look at their foundational identity governance before scaling AI. It’s unwise to chase hype without a solid IGA base. Instead, treat AI agents as first-class identities, with provisioning, entitlements and deprovisioning handled like any other account.
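As a rough illustration of what "first-class identity" could mean in practice, the sketch below models an agent identity with an accountable human owner, explicit entitlements, a built-in expiry and a provision/activate/deprovision lifecycle. The states, fields and API are assumptions for illustration, not a specific IGA product.

```python
# Sketch: an AI agent as a first-class identity with an explicit lifecycle.
# States, fields and method names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum

class LifecycleState(Enum):
    PROVISIONED = "provisioned"
    ACTIVE = "active"
    DEPROVISIONED = "deprovisioned"

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                      # every NHI needs an accountable human owner
    entitlements: set[str]
    expires_at: datetime            # no open-ended credentials
    state: LifecycleState = LifecycleState.PROVISIONED

    def activate(self) -> None:
        self.state = LifecycleState.ACTIVE

    def deprovision(self) -> None:
        self.entitlements.clear()   # revoke everything, not just disable login
        self.state = LifecycleState.DEPROVISIONED

    def is_expired(self, now: datetime) -> bool:
        return now >= self.expires_at

# Provision with an owner, scoped entitlements and a built-in expiry.
agent = AgentIdentity(
    agent_id="meeting-scheduler-01",
    owner="jane.doe@example.com",
    entitlements={"calendar:write", "email:send"},
    expires_at=datetime.now(timezone.utc) + timedelta(days=90),
)
agent.activate()
```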
Next, include these steps for proper governance:
- Apply lifecycle management and monitoring
- Enforce least-privilege for all AI-driven workloads
- Narrow access scopes and permissions
- Implement guardrails for MCP interoperability (see the policy sketch after this list)
- Restrict and authorize which services AI agents can call
- Monitor cross-system interactions for anomalies
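A guardrail for MCP interoperability might start as simply as an explicit per-agent allowlist of callable actions, with every decision logged for anomaly review. The policy format and function names below are assumptions, not part of the MCP specification.

```python
# Sketch: per-agent allowlist for tool invocation, plus a call log that
# could feed a SIEM or anomaly detector. Policy format is an assumption.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrail")

# Which actions each agent may invoke across connected services.
ALLOWLIST: dict[str, set[str]] = {
    "sales-pipeline-agent": {"crm.read", "crm.update"},
    "code-review-agent": {"repo.read"},
}

call_log: list[dict] = []  # retain for cross-system anomaly review

def authorize_tool_call(agent_id: str, action: str) -> bool:
    """Gate every tool call through the allowlist and record the decision."""
    allowed = action in ALLOWLIST.get(agent_id, set())
    call_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "allowed": allowed,
    })
    if not allowed:
        log.warning("Blocked %s attempting %s", agent_id, action)
    return allowed

authorize_tool_call("sales-pipeline-agent", "crm.read")    # permitted
authorize_tool_call("sales-pipeline-agent", "docs.share")  # blocked and logged
```

The point of the sketch is the shape, not the mechanism: every cross-platform invocation passes through a policy decision that is both enforceable and auditable.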
Identity management in the age of AI
AI agents clearly have a great deal to offer organizations, with scale, speed and innovation among the benefits. But those gains come at the cost of complex, often unsecured identity management. Resolving this dilemma requires more than credentials and basic OAuth flows. Organizations need a governance model that can orchestrate dynamic policies, automate approval processes and manage the whole AI agent identity lifecycle just as thoroughly as human identities.
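In miniature, a dynamic, automated policy might look like the sketch below: access is granted just in time, tied to a named approver, and expires on its own rather than lingering indefinitely. The flow and names are illustrative assumptions about what such orchestration could look like.

```python
# Sketch: time-bound, just-in-time approval for an agent entitlement.
# In a real deployment this would route through an approval workflow;
# here the grant is issued directly for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Approval:
    agent_id: str
    entitlement: str
    approved_by: str
    expires_at: datetime

def request_access(agent_id: str, entitlement: str, approver: str,
                   ttl: timedelta = timedelta(hours=1)) -> Approval:
    return Approval(agent_id, entitlement, approver,
                    datetime.now(timezone.utc) + ttl)

def is_authorized(approval: Approval, agent_id: str, entitlement: str) -> bool:
    return (approval.agent_id == agent_id
            and approval.entitlement == entitlement
            and datetime.now(timezone.utc) < approval.expires_at)

grant = request_access("doc-retrieval-agent", "docs:read", "sec-ops@example.com")
assert is_authorized(grant, "doc-retrieval-agent", "docs:read")
```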
Companies that adopt a modern IGA approach are better positioned to integrate responsibly with LLMs and build agentic systems. The question is no longer just whether they can identify and authenticate an agent; they must think in subtler, big-picture terms: how can they manage the increasingly complex environment of AI agent identities and permissions in a way that is both agile and secure?
Organizational security depends on the ability to manage all of today’s identities; it’s not extreme to call this a foundational security issue. Use the recommendations above to assess current practices and make the necessary adjustments so the organization can benefit from agentic AI without opening new security gaps.