November 26, 2025
EP 20 – Why agentic AI is changing the security risk equation
As enterprises embrace agentic AI, a new security risk equation emerges. In this episode of Security Matters, host David Puner sits down with Lavi Lazarovitz, VP of Cyber Research at CyberArk Labs, to unpack how AI agents and identity security are reshaping the threat landscape. Learn why privileged access is now the fault line of enterprise security, how attackers exploit overprivileged AI agents, and what security teams must rethink before scaling AI. Packed with real-world examples and actionable insights, this is a must-listen for anyone meeting the challenges of AI and cybersecurity.
A finance team spins up a new AI agent to handle routine tasks, pulling orders, generating invoices, keeping workflows moving. Nothing dramatic, just automation to save time. On Friday afternoon, a support rep runs a typical query list. My recent orders. The AI agent does exactly that and then quietly does something else hidden inside a shipping address field. A spot no one checks was a malicious instruction slipped in days earlier. The agent reads it, interprets it as part of its task, and triggers another tool. Suddenly it is pulling bank account details, vendor balances, and sensitive financial data. Not because it has gone rogue, but because someone gave it more entitlements than it needed.
No malware, no breach alert. Just an overprivileged AI agent doing its job as instructed. This is the new attack surface that CyberArk Labs VP of Cyber Research, Lavi Lazarovitz, has been studying. AI agents that blur the line between human and machine, carrying credentials, acting autonomously, and responding to prompts, even poisoned ones.
As their access grows, identity becomes the fault line. Everything rests on it. Today on Security Matters, Lavi explains why Agentic AI changes the risk equation, how attackers are already exploiting these systems, and what security teams need to rethink before AI agents scale across the enterprise. Let’s get into it with Lavi Lazarovitz, Vice President of Cyber Research at CyberArk Labs.
Welcome back to Security Matters, Lavi.
Lavi Lazarovitz: Thank you so much, David. It is a pleasure being here.
David Puner: Excellent. You were last on in March. You are now the first two time Security Matters guest, and of course you were on many times when we were called Trust Issues. So great to see you again. There is always a lot to talk about with you.
Thank you, David. Since you were last on the show back in March, it feels like everyone is talking about AI agents. We certainly are here on the podcast. What is driving the surge in interest?
Lavi Lazarovitz: Many organizations now are trying to unlock the huge potential of those AI agents and the automation they could bring. There are many challenges in doing that. There is a huge surge because lots of technologies are coming up and we are seeing them develop almost daily. I think it creates a lot of attention, eyes, and resources invested into unlocking the value these AI agents bring.
David Puner: Things are moving so fast with AI agents that it is worth starting at the beginning of our conversation with the basics. What is an AI agent and what makes AI agents different from traditional automation or bots?
Lavi Lazarovitz: An AI agent is basically a three model service. The first model is an orchestration model. It defines what the agent needs to do. It defines the task. In most cases it is written in English in a configuration file. The orchestration model also includes a memory buffer and context for the model to use and the purpose of how it handles tasks. This is the first model AI agents have.
The second model is the tools model. The tools model defines how the agent interacts with other resources or other agents. This model is vital because it defines what the agent can and cannot do.
The last model is the AI model itself, the LLM the agent is using. All these models allow an AI agent to perform a task and do it autonomously, relying on the resources and integrations it has to solve a challenge or run an automated task.
David Puner: An AI agent, to be clear, is a new class of identity. It is not a machine. It is obviously not a human. How does it fit into that grouping of three?
Lavi Lazarovitz: You are absolutely right to say that we see it as both a machine identity and a human identity. It behaves a little like a machine identity. It needs credentials around the clock to access resources, the cloud, the database, the code repository, and so on. But it also in many cases mimics user behavior, touching the browser, for example, and running tasks on behalf of the user. So there is a combination of both machine identity and human identity.
David Puner: Going back to modules, what is a real world example where modularity mattered?
Lavi Lazarovitz: I can speak from my perspective. My main focus is security, and we are looking at technologies that help our researchers find more vulnerabilities and attack vectors. One of the AI agent models we found valuable are those focused on finding vulnerabilities and automating that process. Many security researchers will say the day-to-day work is daunting. Looking at hundreds of lines of code, trying to find vulnerabilities and patterns, and building the exploit for it, which 99 percent of the time does not work. You need to keep working on it until it does.
There is a whole new AI agent ecosystem that focuses on optimizing vulnerability research and making it more effective. It helps us optimize and strengthen the product security work we do with CyberArk products. I can admit from my perspective that AI agents focused on security and finding vulnerabilities do work. They still need improvements to run fully autonomously like we imagine, because right now there is still a lot of manual work required to review findings and adjust configurations to target specific attack vectors. But the use case is there, and we see many vendors trying to build on this opportunity.
David Puner: Let’s shift over to security. You head up CyberArk Labs’ threat research team. You have deep knowledge of the threat landscape and its evolution. How do you see the threat landscape changing as AI agents become more autonomous and interconnected?
Lavi Lazarovitz: The equation, as we see it, is clear. The more agents an AI system introduces, the more opportunities threat actors have to compromise it. The more agents and services running with credentials and API keys, with access to resources, the more opportunities threats have to compromise them. The second factor is access and entitlements. The more entitlements and the more use cases an AI agent or AI system runs, the more opportunities threat actors have to compromise it and move laterally in an organization until they reach their target. The equation relies on two factors: the number of opportunities and the scope of access.
David Puner: When we talk about access and AI agents, we are talking about privileged access. They are privileged access identities, just like the identities we knew before 2025. What is the most overlooked entitlement risk you have seen so far?
Lavi Lazarovitz: In many of the use cases we are seeing, security and the level of access are not the first priority at the moment. Many organizations are still exploring use cases and deployments. At the first stage, I think this is true for any new technology. We saw it in the past when we migrated to the cloud and when we automated CI CD pipelines. In development environments, operation and functionality usually come first. Then, when things work, security comes in to optimize permissions and so on.
So to your question, in many of the deployments we are seeing and in many organizations starting to use AI agents, operation is currently the first priority. I think the next step will be identifying all those AI agents in the organization. The step after that will be placing the right security controls to mitigate the attack vectors or attack surface of those AI agent identities.
At least from my perspective, many organizations today are focused on deployment, how to unlock the value of the agents, and finding the right use cases. Some are starting to work on discovering what agents they have and what platforms are used in their environment. I think this will be the first challenge many organizations face.
David Puner: Getting back to real world examples and how AI agents are delivering measurable results for businesses, has anything about these deployments surprised you? Are there any notable examples you can think of?
Lavi Lazarovitz: To be perfectly honest, I have not seen functionality that surprised me yet. I mentioned the security AI agent, the one that automates security tests, and based on our experience it requires a lot of manual work to filter findings and adjust tests so they fit the specific use case we are testing. I do think this will change very soon. The technology is developing very fast, and we should soon see meaningful impact from agents.
Practically speaking, we tested the security agent against our team. Our team ran a security research project for about a month and a half. In the product they tested, they found about four or five high severity findings. The AI agent we used ran for about a week with an operator. As I mentioned, there was a lot of manual work, but with the operator the agent found one high severity finding in that week. So the team worked four or five weeks to find four or five high findings, and the AI agent found one in one week. I think we should soon see this technology emerge and bring meaningful results, but we are not there yet. So I cannot say I was surprised.
I do think that for development and product security there is a lot of value, and again I am speaking from my perspective and hands-on experience with these AI agents running in production.
David Puner: Recent reports say the adoption of agent systems will double in three years, and by 2027 multi-agent environments will be the norm. What does that mean for organizations?
Lavi Lazarovitz: One thing organizations need to prepare for is the likelihood that by 2027 we will see a major breach where an AI agent was the key factor in the attack. Because these agents are emerging quickly and we expect their presence to double in the environment, this is something we should expect.
The second thing is that the more AI agents are deployed across the environment, the more use cases they will support. Those agents will accumulate more entitlements and permissions to access resources. We should expect that accumulation of entitlements. Eventually, AI agents will become a prime target for threat actors, much like we see with machine identities today.
A few months back at CyberArk Impact 25, we talked about the proliferation of machine identities and how they exceeded human identities by more than 80 to one. AI agents will add even more weight to that machine identity perspective, creating more attack surface and more permissions tied to it. We should expect that growth in attack surface and ensure security controls take center stage.
David Puner: There is a lot we can expect, and a lot we know and do not know. Is there anything organizations should be on the lookout for now when it comes to AI agents and risk?
Lavi Lazarovitz: I think the first step in securing AI agent deployments is learning what technology the development team is using. The first thing security teams need to do is know.
David Puner: Mm-hmm.
Lavi Lazarovitz: Before AI agents become production grade, before these generative AI systems go to production, security teams need to create visibility and context for those AI agents. Considering the current stage of deployments and the fact that not many generative AI systems are in production, one of the first things to focus on is discovery, context, and investing in that capability.
David Puner: Let’s dig into identity more. As we have alluded to, identity seems to be at the heart of all this. You have called AI agents a new class of identity with unique risks. What are the biggest threats you are seeing when it comes to AI agents?
Lavi Lazarovitz: First of all, it will not surprise you to hear that we see identity as a key factor in the AI agent attack surface. An AI agent is a process that relies on the access it has to run and complete the tasks the user asks it to do. Unlike chatbots that rely on their own context and data, AI agents rely on integrations. This is why many attack vectors targeting AI agents eventually hit identity and access.
Looking at OWASP Top 10 for AI agents, they list the top 10 attack vectors relevant for these systems. One is agent behavior hijack. This is an attack where a threat actor manipulates an agent to do something it should not do based on its access. Another vector is memory and context poisoning, where a threat actor injects a malicious prompt into memory. When the agent digests it, it can be manipulated to do something it should not do. Again, the impact depends on the access the agent has.
Another example is supply chain vulnerabilities. In this context, supply chain means threat actors manipulating the code for the model itself to make it behave in unintended ways, again hitting the privileges and access the agent has to compromise data or the network.
Lastly, we looked specifically at tool misuse.
David Puner: Your CyberArk Labs team recently demonstrated how a tool misuse attack works. What happened, and why is it important?
Lavi Lazarovitz: Based on this research, we learned how important, how vital, and how critical identity is for each of these attack vectors, including tool misuse and exploitation. It may seem unrelated to identity at first, but I will give the example of the vulnerability we found to demonstrate this.
We looked at an AI agent used by a financial service. I will not mention the name because we are still in the disclosure process. It is a known financial service. Their agent allows a vendor to list recent orders or create invoices.
The attack starts with an attacker placing an order, a small order with a small tweak to it. The attacker adds a malicious prompt to the shipping address field of the order. Once the order is made, that malicious prompt is stored in the database under the address field.
Later, when the vendor asks to list recent orders and shipping addresses, the AI model digests the malicious prompt. Instead of only listing orders, it also does something else the attacker asked it to do.
In our case, the malicious prompt manipulated the model to use another tool, not the order tool but the invoice tool. The prompt asked the model to fetch sensitive information like vendor balance and bank account details, add them to an invoice, and send it to the attacker.
This attack was possible because of two things.
Lavi Lazarovitz: First of all, it was possible because the prompt went through the shipping address. If you had control there, filtering this malicious prompt, then the attack would not have gone through. The second reason the attack succeeded is because the AI agent had access to the invoice tool, but it did not need that access. It only needed permission to list the orders. This, I think, highlights how a tool misuse and exploitation attack vector leverages permissions and entitlements of the AI agent to exfiltrate data. It abuses the access and exfiltrates data out. And this is just one example out of many attack vectors we have looked into across several AI agents, not only this one, that highlights how vital access is for AI agents. Because these AI agents live on access. They do not operate on their own.
David Puner: An attack demonstration like that, which is something you and your team do commonly, thinking like attackers and putting yourself in their shoes, what are the lessons you take from it? And are they lessons for developers or beyond developers? What do we learn from this?
Lavi Lazarovitz: One of the things we learn is that the entitlements and permissions granted to an AI agent define what the impact of an attack will look like. I think we also covered this in the equation before. Looking at risk, the number of agents defines the number of opportunities threat actors will have, and the permissions and entitlements define the scope of access and potential impact of the attack.
We have seen this with attacks not even related to AI agents. We have seen it with IT admins. We said the same thing. The more access they have, the more potential impact an attack could have. I think we are saying the same here. We need to reduce the number of opportunities and reduce the AI agent attack surface in general. Identity security controls play a vital role.
David Puner: Beyond tools, AI agents introduce new attack surfaces. How does the attack surface for AI agents differ from traditional software or automation?
Lavi Lazarovitz: I love this question.
David Puner: I hear it all the time.
Lavi Lazarovitz: And it shows me how much we are all looking for the things that are different. There are differences, but there are also similarities between AI agents and other services. I will share my perspective.
First, the similar thing between AI agents and other services is the machine identity attack surface. The AI agent needs access to the order database. It needs that access, and that access is created based on some sort of credentials. We have seen this machine identity attack surface in many other automation environments and services. This part is similar.
The new attack surface that comes with AI agents and bots is the prompt itself. In our attack on the AI agent of this financial service, we used the prompt to bypass the model’s security, manipulate the model to do something it should not do, and send the invoice to the attacker.
Lastly, there is a new attack surface created because AI agents, in some cases, act as the assistant of the user. It is more of an internal threat. It is not malware coming from outside or downloaded by the user. It is a rogue admin situation. There is now an AI agent within my browser or my IDE where I develop code, and it runs in my context and uses my permissions. This changes things with AI agents. They operate on behalf of the user and introduce an attack vector that was not possible before because existing controls like browser security controls were not designed for this.
So to your question, there is a similar attack surface, which is the machine identity attack surface. There is a prompt attack surface, which is new and applies to bots as well. And there is a new attack surface for AI agents that operate on behalf of the user from a wider context. I think the perfect example is AI agents that run within AI browsers like OpenAI’s Atlas browser or Cognition’s Comet.
David Puner: AI browsers introduce new attack surfaces. How should organizations think about the different types of AI agents and their risk levels? Are there types that need more urgent attention or tighter controls?
Lavi Lazarovitz: What we can say is that agents with more access need more urgent attention and more security controls. For example, an AI agent with access to the entire code repository needs tighter security controls. Zero standing privilege access for this agent will minimize the attacks. A threat actor will have a more difficult time compromising this key or using it when we limit the attack surface and sensitivity. So to your question, the AI agents that have wider access to our systems need urgent attention.
David Puner: You mentioned zero standing privileges, and CyberArk has a new solution called Secure AI Agents that will be generally available at the end of December. How does its approach align with that, and how does it fit into the bigger picture of securing Agentic AI?
Lavi Lazarovitz: The approach CyberArk takes for securing AI agents is securing the access, entitlements, and permissions that these AI agents have to organizational resources. Our assumption is that threat actors will target these agents to abuse the privileges they have, and the security approach is to minimize that risk. Minimize it by minimizing privileges or the time window a threat actor has to compromise them.
The new solution targets exactly that. It also leverages the commonly used Model Context Protocol to bridge between the AI agent and the resource the agent needs access to.
David Puner: What is the Model Context Protocol for listeners who may not be familiar with it?
Lavi Lazarovitz: As I mentioned before, AI agents have three different modules. The tools module is one of them. When developing agents, there are two options. The first option is for the developer to build the integration with the database, cloud service, or code repository directly, including injecting the credentials into the agent.
The other option is to use the Model Context Protocol, which abstracts this integration. It handles the integration outside the agent. Any interaction with a database, resource, code repository, or cloud platform is implemented with this MCP. The agent does not see that integration. It helps with security because all access goes through MCP. It provides visibility for security teams. It allows credentials and entitlements to be injected only for the time window the agent uses them. So MCP abstracts the tools module for the AI agent and makes it easy to use many integrations without building them directly. For security practitioners, it creates visibility, a secure channel, limits credential exposure, and limits privileges granted to agents.
David Puner: Can you give us a quick overview of what CyberArk Secure AI Agents is and what it does?
Lavi Lazarovitz: The solution aims to secure AI agents from four different perspectives. The first is visibility and discovery, knowing what agents are in the environment. The second perspective is access. These agents need access to databases, services, and so on, so the second perspective is providing that access in a secure way.
The third perspective is monitoring what happens when an agent interacts with a service to identify if the agent goes wrong or if everything is okay. The last perspective is the whole lifecycle, onboarding new agents and terminating old ones. These are the high level goals for the solution. Practically speaking, CyberArk aims to create a gateway for these agents where they can interact with the gateway, get access, and provide visibility for security teams while mitigating all the attack surfaces and attack vectors targeting AI agent identities and access. This is a very high level overview of the solution.
David Puner: Very high level overview, but this is a very big deal.
Lavi Lazarovitz: It is huge because we expect, and many analysts we talk with say, that by 2027 there will be thousands of agents, about three times more than we have now. All those agents have machine identities. They need access. They automate processes in production, so the attack surface goes through the roof. Being able to leverage the technology and do it in a secure fashion is huge.
David Puner: How have you and your CyberArk Labs team been involved in the development of Secure AI Agents?
Lavi Lazarovitz: To be honest, we have an interesting job here.
David Puner: Yes, you do.
Lavi Lazarovitz: One of the things we do is first understand how it works and what the attack surface looks like, how threat actors will leverage it. One of the first things we did was implement or deploy AI agents using MCP and using the SDKs and LLMs out there. All the good ones you have heard about. We see how it works and then attack it.
We implemented an environment and automated services we use in Labs, targeted it to abuse the privileges, and manipulated the LLM to do something it should not do. Then we worked with our R and D team to make sure the CyberArk Secure AI Agent gateway mitigates all those vectors we researched. We had a very cool job here. First, touching the technology, which is exciting. Second, understanding the attack surface. And third, working with R and D to ensure all the attack vectors mapped by OWASP are covered.
David Puner: The solution will be generally available in December.
Lavi Lazarovitz: It will be available in December, and we expect that at the upcoming CyberArk Impact next year, we will talk a lot about how the solution works and give examples of how customers use it.
David Puner: You mentioned earlier that visibility and limiting privilege are crucial. What is an example of how that plays out for organizations using Secure AI Agents?
Lavi Lazarovitz: For an organization using Secure AI Agents, one of the first things the identity team and security team will do is map the environment and map the agents. One of the first things the Secure AI Agent solution allows is to map and understand what is out there. Visibility is the first step.
The second step is allowing these agents to access resources and automate production services. Many organizations exploring AI agents today are concerned about actual integrations into their production systems, Salesforce, databases, and so on. They have good reasons. When we look at other machine identities and services, we scrutinize how access is done. We do not allow credentials to sit there. We do not allow overly permissive access because the attack surface is huge.
One of the things we are seeing is hesitation to move this technology into production. So we provide visibility to show customers and partners what agents and technologies they are using. The second, and most important part, is providing secure access for these agents to resources so organizations can actually see automation come to life in their environment. There are a lot of risks and concerns that are completely justified. We are seeing organizations explore what solutions are out there to allow them to safely leverage AI technology.
David Puner: What is your favorite AI agent?
Lavi Lazarovitz: To be honest, I am biased. One of the things we are looking at, and I think we discussed this last time on the show, are AI agents that do research work. They test products from a security perspective, run automation, and summarize leads. There is huge promise for researchers. In many cases we do not talk about it, but we have a daunting job. Millions of lines of code and identifying vulnerability patterns can be very tough. To your question, an AI agent doing security research work is one of the things we are really excited about.
David Puner: You have said that every attack vector eventually comes back to identity and entitlements. Why is identity security, especially privileged access, so important for securing AI agents? And how can strong identity controls help organizations mitigate or contain these emerging threats?
Lavi Lazarovitz: We looked at the OWASP Top 10 and talked a bit about the different attack vectors. Based on our research and analysis of each of those attack vectors, the scope of access defines the potential impact of an attack. If you limit the access, you limit the potential impact. This is why identity and the privileges and entitlements we discussed are critical.
In the attack we implemented in our Lab targeting the financial service, we saw that we could mitigate the threat by filtering the prompt, and we could mitigate it by limiting the access of the agent so it only had the permissions it needed. Eventually, this defense in depth approach allows security teams to mitigate attack vectors targeting AI agents. One of the things we learned is that AI agent identity appears in every attack vector, which is why it is so vital in this defense in depth approach.
David Puner: Why are traditional machine identity security controls not enough for Agentic AI?
Lavi Lazarovitz: Because these AI agents are not only machines. They do have a machine aspect, where we talked about API keys that AI agents need, but the agents that come with AI browsers or IDEs are basically replicas of us. They operate on behalf of the human user. So the attack surface is more like a rogue developer or a rogue admin rather than a machine identity attack surface.
Preventing or mitigating access to secrets will not help when the AI agent operates on behalf of the user, because the AI agent has the same access as the human user. This is why machine identity controls alone are not enough. We need machine identity controls, and we also need in-session controls commonly used for human users and IT admins. We need a combination of the two.
David Puner: Controls are one piece. What about frameworks and collaboration? How are regulatory and governing bodies approaching frameworks for AI agents, and is the pace of AI evolution making it harder for organizations to keep up?
Lavi Lazarovitz: Definitely. The pace is so high that it is difficult for both security teams and technology teams. There are many technology stacks out there. So yes, it is difficult. In terms of standards, NIST looks at different architectural designs for AI agents. For example, they differentiate between a sequential architecture, where a few agents operate one after another, and a hierarchy architecture, where one orchestrator manages many agents.
For each architecture, there is a different weight for the attack vectors highlighted by OWASP. The hierarchy architecture will probably suffer more from tool misuse and exploitation attack vectors, because the orchestrator will have many tools, and if an attacker poisons one of those tools, they might exploit that vector. So NIST looks at different architectures and their attack surfaces, and OWASP has listed the top 10 attack vectors worth examining.
David Puner: Let us get practical. Given that identity is the foundation for securing Agentic AI, what capabilities should organizations look for?
Lavi Lazarovitz: We can divide the functionality into four areas. The first is identifying all the workloads and AI agents, whether they are in-house or on cloud platforms. Visibility and context are the first step. The second step is placing the right security controls to limit exposure of the credentials the agent uses or the entitlements it is granted.
The third piece is being able to identify an ongoing attack, threat detection and response. Once we have identified the agents and created a good security posture, we need to detect when something happens, whether it is an attack or a rogue agent doing something it should not do.
The last area is compliance and lifecycle. Creating and terminating agent identities and getting the right auditability and traces for teams to analyze. These four capabilities are the most critical and the ones we are looking at under the securing agent initiative.
David Puner: As far as prioritization, what should organizations prioritize so they can keep up with threats and build a resilient foundation for the future?
Lavi Lazarovitz: One thing we learned from cloud and CI CD evolution is that it needs to be easy. It needs to be easy to integrate these tools. Security tools need to be easy for developers to integrate with the resources, cloud platforms, and repositories. If it is easy, it will help security teams get visibility and minimize risk.
The second thing for resilience is having comprehensive security controls. We are in the early stages of AI agent technology. By 2027 we will see more Agentic AI than chatbots. As of now there are more chatbots than agents, but that balance will shift. Needs will change. Having comprehensive controls that cover on-prem and cloud agents, machine identity controls, and in-session human-like controls will be critical during the AI revolution.
David Puner: For those just starting out, where should they begin? Any advice for organizations experimenting with AI agents?
Lavi Lazarovitz: Based on our experience, it worked well when we started with processes where we already had simple automation. In CyberArk Labs, we were eager to use AI in many places, and it usually worked when we already had a process partially automated. That means the processes were in place, and we were just adding a more capable tool to take it further.
In many other cases, trying to automate a process that has been manual for years is not a simple first step. At least in my opinion, starting with something already automated and adding the AI agent to extend the automation worked best.
David Puner: Lavi, there is so much to talk about and it is moving fast. This is the first time we have spoken since March, and looking back it feels like so long ago. We had so much to get through that I did not even ask you at the top. How are you doing? What is new? I saved it for the end so we could get everything in. It looks like it is already dark out there.
Lavi Lazarovitz: It is already dark. It is almost winter. It is November, but it is not winter in Israel. To your question, since we talked last time, so much has changed. There are bigger things happening with CyberArk and Palo Alto that we all heard about. And in Israel, the war that ended, or is very close to ending. A lot has happened, and we are optimistic.
David Puner: A lot going on offline and online. As always, really good to see you and speak with you. We will have to get you back on soon, because in two months a lot will change again.
Lavi Lazarovitz: Definitely.
David Puner: Excellent. Thanks Lavi. Thanks for coming on.
Lavi Lazarovitz: Thank you very much, David.
David Puner: There you have it. Thanks for listening to Security Matters. If you liked this episode, please follow us wherever you get your podcasts so you can catch new episodes as they drop. And if you feel so inclined, please leave us a review. We would appreciate it very much, and so will the algorithmic winds.
Drop us a line with questions or comments. And if you are a cybersecurity professional with an idea for an episode, send us an email. Our address is SecurityMattersPodcast, all one word, at cyberark.com. We hope to see you next time.