HEALTHTECH: How is AI, especially agentic AI, complicating identity management for organizations?
IYER: Agentic AI, as you can imagine, introduces this notion of autonomous bots and autonomous workloads, the idea being that workloads are spawned automatically by these bots. Traditional IAM strategies don't really work here, because most of these bots are provisioned in such a way that they act autonomously.
They also introduce behaviors driven by what they intend to do, sometimes driven by actions that are very well known and sometimes by actions that are not known at all. Sometimes, it's completely unpredictable. That introduces challenges for IAM strategies. The traditional identity security controls that you typically use to govern and manage how access is granted don't really work for these autonomous agents. That's what agentic AI is doing.
HEALTHTECH: What other threats are making identity management more difficult and are important for organizations to be paying attention to?
IYER: When you talk about what else makes identity more challenging in the enterprise, I think of credential management, stolen credentials, phishing and elevated privileges. All of these are making identity management extremely difficult, especially with AI in the mix.
What really happens is that AI agents typically inherit the controls and the access of the identity from wherever they are called. This means that when these agents execute actions, many times they are actually inheriting privileges. That makes it very difficult, because these agents end up overprivileged, which increases the risk if credentials are stolen.
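To make the inheritance problem concrete, here is a minimal sketch in Python of the alternative Iyer is pointing toward: instead of letting an agent run with the full privileges of the identity that invoked it, mint a separate, narrowly scoped, short-lived credential. All names here (CallerContext, mint_agent_credential, the scope strings) are hypothetical, not any vendor's API.

```python
# Minimal sketch, not CyberArk's implementation: grant an agent only the
# scopes it asks for AND that the caller actually holds, with a short TTL.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class CallerContext:
    user_id: str
    scopes: set[str]          # everything the human caller is allowed to do

@dataclass
class AgentCredential:
    agent_id: str
    scopes: set[str]          # strictly a subset of the caller's scopes
    expires_at: datetime

def mint_agent_credential(caller: CallerContext, agent_id: str,
                          requested_scopes: set[str],
                          ttl: timedelta = timedelta(minutes=15)) -> AgentCredential:
    """Issue a scoped, short-lived credential instead of inheriting privileges."""
    granted = requested_scopes & caller.scopes        # never exceed the caller
    return AgentCredential(
        agent_id=agent_id,
        scopes=granted,
        expires_at=datetime.now(timezone.utc) + ttl,  # short-lived by default
    )

# Usage: the agent gets read-only access for 15 minutes rather than the
# clinician's full privilege set, so a stolen agent credential exposes far less.
caller = CallerContext("clinician-42", {"patients:read", "patients:write", "orders:write"})
cred = mint_agent_credential(caller, "scheduling-agent-7", {"patients:read"})
print(cred.scopes, cred.expires_at)
```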
HEALTHTECH: How can zero-trust principles help organizations protect themselves against these threats?
IYER: The basics of zero trust are “never trust, always verify.” So, there are certain principles people should consider when they start to implement a zero-trust architecture in their environment. For a long time, people have looked at zero trust as a very human-centric thing. When they decide to implement a zero-trust architecture, they’re predominantly looking at it from a human perspective: what information staff get access to, how they get access to it, and ensuring that the organization verifies every access request.
That completely changes when you think about AI, because even though we say never trust, always verify, the verification part becomes challenging. You don’t think about how these agents themselves manifest within an environment. What access controls do they have? What are they really doing in terms of getting access to data and performing other tasks?
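One way to picture “always verify” for an agent is a check that runs on every request rather than once at provisioning time. The sketch below uses hypothetical names (AgentRequest, verify_request, ALLOWED_ACTIONS) purely for illustration.

```python
# Minimal sketch: re-verify identity, scope, expiry and policy on every call
# the agent makes, instead of trusting it because it was provisioned once.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

ALLOWED_ACTIONS = {          # hypothetical policy: agent -> actions it may take
    "scheduling-agent-7": {"patients:read", "appointments:write"},
}

@dataclass
class AgentRequest:
    agent_id: str
    action: str
    credential_scopes: set[str]
    credential_expires_at: datetime

def verify_request(req: AgentRequest) -> bool:
    """Deny anything expired, out of scope, or outside this agent's policy."""
    if datetime.now(timezone.utc) >= req.credential_expires_at:
        return False                                   # expired credential
    if req.action not in req.credential_scopes:
        return False                                   # outside granted scope
    if req.action not in ALLOWED_ACTIONS.get(req.agent_id, set()):
        return False                                   # outside policy for this agent
    return True

# Usage: a request within scope and policy passes; anything else is denied.
req = AgentRequest("scheduling-agent-7", "patients:read",
                   {"patients:read"},
                   datetime.now(timezone.utc) + timedelta(minutes=10))
print(verify_request(req))   # True
```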
In a zero-trust architecture, you still need to do all the things that you’re doing for humans, but a machine-centric view is also important. At CyberArk, we’ve examined many of our customers and their data, and we’ve learned that the ratio of machine to human identities is somewhere around 82 to 1. That means for every human identity, we are seeing 80-plus machine identities out there in the organization.
That makes implementing a zero-trust architecture difficult, because you have to think about not only provisioning access but also ensuring that these agents are governed with very granular controls. You need the right visibility into what they’re getting access to so that the right audit trails and audit mechanisms are built into your processes. Most important, organizations need to continuously evaluate how and what these machine identities are accessing and implement behavior-based identity controls as opposed to just the static controls you typically see in an organization.
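A simple way to illustrate behavior-based control for a machine identity, under assumed names (BehaviorBaseline is hypothetical, not a product feature): keep a baseline of what each identity normally does and flag anything that falls outside it for audit and review.

```python
# Minimal sketch of behavior-based (rather than purely static) control:
# learn each identity's normal (action, resource) pairs, then flag deviations.
from collections import defaultdict

class BehaviorBaseline:
    def __init__(self):
        # identity -> set of (action, resource) pairs observed during baselining
        self._seen: dict[str, set[tuple[str, str]]] = defaultdict(set)

    def learn(self, identity: str, action: str, resource: str) -> None:
        """Record normal behavior during an observation window."""
        self._seen[identity].add((action, resource))

    def is_anomalous(self, identity: str, action: str, resource: str) -> bool:
        """Flag anything this identity has never been seen doing before."""
        return (action, resource) not in self._seen[identity]

baseline = BehaviorBaseline()
baseline.learn("billing-bot", "read", "claims-db")

# A later export of patient records gets flagged for review, even if a
# static rule might technically have allowed it.
print(baseline.is_anomalous("billing-bot", "read", "claims-db"))          # False
print(baseline.is_anomalous("billing-bot", "export", "patient-records"))  # True
```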