The Business Rewards and Identity Risks of Agentic AI - SPONSOR CONTENT FROM CYBERARK


Identity security is the discipline concerned with reducing all aspects of identity-related risk, which requires identifying, governing, and protecting all identities within an organization. The discipline is growing in complexity.
In the past, security teams focused on human identities and ensuring they had the right level of access to the resources they needed to do their jobs. In recent years, this focus has expanded to securing machine identities to protect secrets, certificates, and workloads.
The latest source of identity complexity is agentic AI. Now that businesses are rolling out AI agents, the challenge is securing a class of identity that inherits the security challenges of both humans and machines.
Are AI Agents a New Identity Class?
AI agents are machines by definition, but their abilities to make decisions and to learn are more similar to human capabilities. Agentic AI uses advanced algorithms and machine learning to perform tasks and make decisions on behalf of people.
Agents in complex agentic AI systems can perceive their environment, process information, make decisions, and even learn and improve over time. That makes these agents more than machine identities. They can also work independently with minimal human prompts and oversight.
Challenges of the New Identity Class
Scale and oversight are significant challenges with AI identities, just as they have been with machine identities. Traditional machine identities now outnumber human identities 82:1, and by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, according to Gartner.
Organizations must onboard these identities, give them appropriate access, manage them, and eventually deprovision them. Taking those steps would be challenging enough with human or machine identities. Introducing AI identities adds much more complexity.
AI agents need access to enterprise resources. But how do you manage this access and determine what level of privilege these agents require? And how do you manage these access challenges without adding human intervention to your teams’ workload—potentially negating the productivity you’re trying to gain by using AI agents?
Some organizations that are eager to introduce AI agents may grant broad permissions to speed up implementation. But if the agent is compromised as a result, it could do a lot of damage.
AI agents also lack security awareness, and they do not understand right and wrong. They can be programmed to detect anomalies and flag unusual behavior—which is immensely helpful in preventing fraud. However, they can also harbor vulnerabilities of their own, and they lack the human ability to recognize suspicious scenarios that have not been anticipated.
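To make the anomaly-flagging point concrete, here is a minimal, hypothetical sketch (not any CyberArk product feature): a simple statistical check that flags values far from the norm, the kind of rule an agent could apply to its own activity logs. The function name and threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean.

    A crude stand-in for behavioral anomaly detection: useful for known
    patterns, but blind to suspicious scenarios it was never built to expect.
    """
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Example: an agent's daily request counts; a sudden spike stands out.
counts = [100, 102, 98, 101, 99, 103, 100, 5000]
print(flag_anomalies(counts))  # the 5000-request spike is flagged
```

The limitation mirrors the article's point: a rule like this catches only the deviations someone thought to measure.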
This is also true of machine identities. But because businesses are asking AI agents to take on the functions of both humans and machines, these agents may need broader access to sensitive systems and resources, putting your organization at greater risk if they are compromised.
AI agent use is new and unregulated, which always introduces risk. Because anyone can build an AI agent, many agents lack appropriate security controls.
And as with other types of devices and software, shadow AI can be a big challenge, as employees may use AI agents without informing IT or ensuring that these agents are safe to introduce to the organization’s environment.
Unless there is a secure way to approve and onboard all AI agents, an organization that introduces these agents may be blind to the risks it’s introducing.
These are just a few of the security risks of deploying AI agents. But the unifying theme is clear: organizations need a security strategy that treats AI identities with the same rigor they apply to human and machine identities.
Evolving an Identity Security Strategy
The principles of identity security that apply to humans and machines need to extend to AI identities. A security framework that works for all identities must include full visibility into all AI identities and their activities, strong authentication mechanisms, and a way to enforce least-privilege and just-in-time access controls.
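As a rough illustration of what least-privilege, just-in-time access can mean for an agent identity, here is a hypothetical sketch (all names and the five-minute default are illustrative assumptions, not a real product API): each credential is scoped to specific actions and expires quickly, so a compromised agent holds only narrow, short-lived access.

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class AgentGrant:
    agent_id: str
    scopes: frozenset      # e.g. {"crm:read"} -- never a blanket "*"
    expires_at: float
    token: str = field(default_factory=lambda: secrets.token_hex(16))

    def allows(self, scope: str) -> bool:
        """A grant permits an action only if it is in scope and unexpired."""
        return scope in self.scopes and time.time() < self.expires_at

def issue_grant(agent_id: str, scopes: set, ttl_seconds: int = 300) -> AgentGrant:
    """Issue a short-lived grant (default 5 minutes) for the listed scopes only."""
    return AgentGrant(agent_id, frozenset(scopes), time.time() + ttl_seconds)

grant = issue_grant("invoice-agent-7", {"crm:read"})
print(grant.allows("crm:read"))    # True while the grant is fresh
print(grant.allows("crm:delete"))  # False: that scope was never granted
```

The design choice is deny-by-default: an agent can do nothing it was not explicitly and recently granted, which limits both the blast radius of a compromise and the need for ongoing human intervention.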
The emerging Model Context Protocol (MCP) provides a useful starting framework for agent communication. However, the protocol is not secure by default. Organizations need to implement security policies for AI agents, just as they do for humans and machines.
Start by evaluating your organization’s current approach to identity security. Can you adapt it to meet the needs of AI, such as providing support for a huge increase in identities as well as full visibility into and privilege controls over them?
Having these conversations now will prepare your business for the AI agents that are certainly coming your way.
CyberArk is the global leader in identity security, trusted by organizations around the world to secure human and machine identities in the modern enterprise. CyberArk’s AI-powered Identity Security Platform applies intelligent privilege controls to every identity with continuous threat prevention, detection, and response across the identity life cycle.
Learn more about identity security strategy for agentic AI.