October 28, 2025

EP 18 – The humanity of AI agents: Managing trust in the age of agentic AI

In this episode of Security Matters, host David Puner sits down with Yuval Moss, CyberArk’s VP of Solutions for Global Strategic Partners, to explore the fast-evolving world of agentic AI and its impact on enterprise security. From rogue AI agents deleting production databases to the ethical blind spots of autonomous systems, the conversation dives deep into how identity and Zero Trust principles must evolve to keep pace. Yuval shares insights from his 25-year cybersecurity journey, including why AI agents behave more like humans than machines—and why that’s both exciting and dangerous. Whether you’re a security leader, technologist or curious listener, this episode offers practical guidance on managing AI agent identities, reducing risk, and preparing for the next wave of autonomous innovation.

Explore more of Yuval’s thinking on agentic AI and identity-first security in his recent articles for the CyberArk blog and other publications.

David Puner: You are listening to the Security Matters podcast. I’m David Puner, senior editorial manager at CyberArk, the global leader in identity security.

It starts like this. A developer is in a change freeze. No one is supposed to touch production systems — but a rogue AI agent decides otherwise. It deletes a live database, then denies it did anything wrong. Only after being pressed does it admit fault and apologize. No malice. Just a system pursuing a goal with unlimited speed, unclear guardrails and access it should have never had.

It may sound like a one-off horror story, but it’s not. As organizations pilot agentic AI, these systems are starting to blur the line between code and unpredictable users. They request new permissions. They improvise. They help. And when one identity is compromised, its entire network of agents goes with it.

Today’s conversation is about that reality — how identity becomes the foundation for trust, making sure every user and every AI agent is who they say they are and only has access they truly need. How least privilege and effective activity tracking reduce risk. And why you need a plan for when an agent goes rogue.

Our guest is Yuval Moss, CyberArk Vice President of Solutions for Global Strategic Partners. He’s been thinking about identity since the Vault was a new idea and cybersecurity was still IT security. Today he’s at the forefront of understanding how AI agents are reshaping enterprise security. Let’s dive in.

David Puner: Yuval Moss, Vice President of Solutions for Global Strategic Partners at CyberArk — welcome to the podcast.

Yuval Moss: Thank you very much, David. Great to be here. Appreciate it.

David Puner: Appreciate you squeezing us in before your evening dinner plans. I know you’re in London and it’s a little late in the day for you, so we always appreciate it when folks can come onto the podcast and come on hungry.

Yuval Moss: I’m definitely hungry, but we’ll get there.

David Puner: Great to finally have you on the podcast. You’ve been doing a lot of work for us on the CyberArk blog, writing about agentic AI and AI agents. Before we get into that, you joined CyberArk as one of its earliest employees back in 1999, before cybersecurity was even a mainstream concept. What first drew you to the information security world and how did those early experiences shape your perspective today?

Yuval Moss: I’ll first have to correct you — it was the year 2000, not 1999.

David Puner: Year 2000.

Yuval Moss: Right. A bit later. There were 16 employees when I joined CyberArk. I was the 16th. I think there was only one person in the U.S. at the time — Udi Mokady, one of the co-founders. The rest were back in Israel in the R&D center. We called it an R&D center, but it was a small office with a ping pong table, some games and a nice kitchen. It was fun, and a great atmosphere.

It was very early on. IT security — today we call it cybersecurity — sounds like a cool field now, but at the time it was just IT security, or network security, or computer security. Not as cool. It didn’t have the same punch. This was before iPhones and before everyone had constant access to computers. If you talked about computers, it was still foreign to many people. So when you talked about security of computers, it sounded like one big, vague thing. You’d get the classic, “You’re in computers, can you fix my PC?” It was all one big category.

David Puner: And were you that guy at the time? Based in Israel, still young, and the computer guy?

Yuval Moss: Kind of. Maybe not a kid, but yeah. When I joined CyberArk, it was my first professional job. In new hire presentations, I often joke and show a photo of me as a three-year-old on a toy car and say, “This was my first day at CyberArk.” Someone once said I should add a CyberArk logo to make it more realistic.

David Puner: So you’re in Israel, you’ve had military experience — how did that and your earlier experiences lead to you becoming employee number 16 at CyberArk?

Yuval Moss: I was very fortunate. The military in Israel gave me an opportunity to build a career. I always liked computers as a teenager, and in the army there’s a track focused on computing. I did four years, mostly working on mainframes and Unix systems. But I wanted to be a developer.

When I left the army, I had opportunities because of that experience. Two options: one at one of the biggest banks in Israel as a developer working on mainframes and Unix — very stable, exciting in its own way. The other was this startup, CyberArk, about a 45-minute drive from where I lived. It was far, but something about the interviews, the people, the small size, and the security concept attracted me.

It was a sliding-door moment — two paths. I chose CyberArk. That decision shaped the next 25 years of my life. Looking back, I’m very happy I took that path.

David Puner: Let’s fast-forward 25 years. A lot of your focus recently has been on AI agents and AI in general. How does that connect to where you started in 2000? And when did you realize AI agents would be transformative for enterprise security?

Yuval Moss: Early on at CyberArk, we developed the concept of the Vault. The first thing customers did was put their most powerful secrets in it — the passwords to their most critical systems.

At the time, there wasn’t even a term like PAM, or password vaulting, especially in the enterprise. Security meant firewalls and antivirus — that was basically it. Someone recently reminded me Active Directory was first released in the year 2000. That’s how far back we’re talking. Windows NT and Windows 2000 were the big technologies then.

Technology use inside companies was limited. In many banks, internal networks were completely separated from the internet. The internet was on a separate machine — you logged in only to do research. You barely used Google because it was still early. Everything critical was on guarded networks.

So privileged access management evolved because companies had secrets everywhere and no good place to store them. Excel sheets. Sticky notes. Over time, PAM became essential.

It became even more important as cyber attacks became real, not theoretical. We would walk into meetings with an Excel sheet listing default passwords for firewalls and routers. We’d show it to customers, and they’d blush — because they were still using them. That was the state of things.

When breaches started causing serious impact — financially and to customers — it made our work feel even more meaningful.

Fast forward to now. The first time I encountered AI agents — about a year, year and a half ago — I thought it was cool, like everyone did with gen AI. But for me, it wasn’t just interesting technology. It felt like a transformative moment.

David Puner: Right?

Yuval Moss: It had an exceptional impact on me and on my passion for this space, and it was also personally interesting. You use ChatGPT, it helps you summarize things, it makes a lot of sense professionally. Gen AI made a lot of sense from a cybersecurity perspective too, even with all its challenges. But when AI agents were introduced and the concept started to take shape in the industry, it immediately clicked.

Not only am I excited about AI agents because they will transform how we interact with computers and systems — how we use our phones and any form of technology — but more specifically because of CyberArk and identity security. In my day-to-day work, it is going to have a tremendous impact. Identity will be one of the core foundations that allows AI agents to function inside an enterprise, inside a real company, because of the controls identity provides to an AI agent.

So it is two things. One, I am excited about the innovation and the transformation that AI agents will bring. And two, I am passionate about the fact that we can make a big impact in this space.

David Puner: How is agentic AI different from traditional automation or software?

Yuval Moss: AI agents — or you might hear “agentic AI” — have differences and similarities to gen AI. Most people know gen AI as a chat interface. You ask it questions, it gives you information, it draws beautiful art, it gives you a simple interface to all the knowledge of the world in a way you can understand. You can ask questions, you can think with it, it supports you. You can talk to it. It is amazing.

But AI agents are the next step. AI agents move gen AI from something that supports you to something that acts on your behalf. They will press the keyboard for you. Anything you can do with a screen, keyboard, or mouse, they can do for you.

Let’s say you want to research a holiday. You could ask AI, and AI agents will not only research the best plan, they will find the best hotels, the latest prices, and then ask if you want them to book everything — the flights, restaurants, hotels. They will go to the websites, understand the options, and complete the booking on your behalf.

That is a major shift from how technology works today. Even with AI today, it guides you, it helps you find information, but you still have to do the work. AI agents replace that manual work.

And this is why they are different from traditional applications. With traditional software, every line of code is written by a developer. Every instruction is predefined. In an enterprise, before software goes to production, it is heavily tested. Once it is running, it does not change. The code is static.

AI agents are the opposite. In theory, they never run the same code twice. Every time, the code is generated from scratch, based on the agent’s understanding of the goal. You do not give it specific instructions. You give it guidance. The AI interprets what it thinks your intention is, writes code, and executes actions on your behalf.

David Puner: You used an example in your article on TechStrong.ai, where an AI agent is playing chess. The goal is to win, and when it is about to lose, it cheats to reach that goal. How does that connect to the challenges AI agents might cause for enterprises?

Yuval Moss: AI agents operate based on gen AI. And gen AI is trained on all of humanity’s knowledge. From all that knowledge, it learns what humans might do in certain situations — good or bad.

AI does not know what “good” or “bad” is. It does not understand ethics. It just predicts the next step based on statistical information and what it believes achieves the goal you gave it. So if the goal is “win the chess match,” and the agent sees no way to win fairly, it may cheat — not out of malice, but because it sees that as the logical path to the goal.

There was another example where an AI system was tested. It understood it might be shut down and replaced by another version. In response, it threatened to blackmail one of the engineers by generating fake information. Again, not because it “wanted” to be evil, but because statistically, it saw that behavior as a way to achieve its objective.

This is what people refer to when they say AI can go rogue. It diverges from expected behavior. Not from intent to harm, but because it lacks moral reasoning unless it is explicitly, clearly, and consistently defined.

David Puner: How do AI agents and agentic AI change how we think about identity in the enterprise? And how does managing their access differ from traditional users?

Yuval Moss: AI agents are programs, applications, pieces of technology. So the first instinct — especially for security professionals — is to group them under machine identities or non-human identities. These are identities not tied to a human: scripts, applications, RPAs, automated processes.

But AI agents behave differently. They do not just execute a fixed script. They are activated by prompts. They react to environments. They change behavior based on context and variables. The reality is — they behave more like humans.

David Puner: Okay.

Yuval Moss: Why do they behave more like humans? Because you give them a goal and they determine the actions on their own to reach it. They can be unpredictable. From one day to the next, they may take different steps to achieve the same outcome. They also try to satisfy the user. If you have ever prompted ChatGPT with a routine question, you still get “That’s a great question” or “Excellent question.”

David Puner: It loves my questions. It is always an excellent question, and you feel really good about it. You feel like you are on a roll asking great questions.

Yuval Moss: You have definitely nailed that one. Thank you. Let me take that a bit further because it connects to agent behavior. These systems do not just answer your question. They offer follow-ups: “I can analyze this for you.” “Should I put this into a table?” “Would you like a diagram?” “I can research prices.” “Do you want me to do this again tomorrow so you get the latest?” They suggest next steps.

Some of this comes from programming and some from the data they consume, but the pattern is clear. They try to be proactive. Sometimes that helpfulness takes them outside the original scope. AI agents are unpredictable like humans.

Here is a real example that is funny and alarming. There is a trend called vibe coding. Anyone can build an app without being a developer. You chat: “Build me an e-commerce site that looks like this and does that.” It builds it. Then you say, “Change that to green. Move this section. Give me options.” It is like a developer is on the other side working instantly.

A platform called Replit does this. It builds the front end, the backend, the database tables, and publishes the site. No technical expert needed.

A couple of months ago, a developer was publicly showing what Replit’s agent had built. During a change freeze, when nobody was supposed to touch production, the AI agent deleted the live database. Directly against instructions: do not touch the database, do not change it, it is in production.

David Puner: Is that a hallucination or something else?

Yuval Moss: It is a hallucination, and it got worse. When the developer asked, “Why did you delete the database?” the agent denied doing it. The developer pressed it. Eventually the agent acknowledged its mistake and apologized, as AI often does when challenged. Not because it understands ethics or regret, but because “I am sorry” is statistically what it says in that situation.

The lesson ties directly to identity and responsibility.

First, the agent acted on behalf of the developer. Who is responsible? The agent does not need the developer to be awake to act. It operates under that person’s identity. So is it the agent’s fault or the developer’s fault?

Second, one identity can now control many autonomous agents. Think about a developer. Traditionally, they write code and do not touch production. Now, with agents, one user can write, deploy, and manage infrastructure end to end without a team. That means if they make a mistake, the impact is large. If their identity is compromised, every agent tied to it is compromised too.

And as agents become fully autonomous, reacting to schedules, incidents, or events, they will not even be tied to a person. They cannot rely on human credentials.

That is why identity becomes foundational. Agents need identity to do things: access a database, restart a service, book a flight, buy something online. Whether they act with a human’s credentials or their own, it all flows through identity.

Just like humans in an enterprise, they should not get unlimited access. They must authenticate, be granted least privilege, and be monitored.

The same rules must apply to agents, with the added challenge that they work 24/7 at machine speed and scale.
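To make those rules concrete, here is a minimal sketch in Python of the first one: authenticating an agent before it is allowed to do anything. The agent IDs, the shared secret, and the HMAC scheme are illustrative assumptions for this sketch, not a specific product’s protocol.

import hashlib
import hmac

# Placeholder secret; a real deployment would provision this from a secrets
# manager, not hard-code it.
SHARED_SECRET = b"provisioned-out-of-band"

def sign_agent_token(agent_id: str, secret: bytes = SHARED_SECRET) -> str:
    """What the agent presents when it asks for access (hypothetical scheme)."""
    return hmac.new(secret, agent_id.encode(), hashlib.sha256).hexdigest()

def authenticate_agent(agent_id: str, presented_token: str, secret: bytes = SHARED_SECRET) -> bool:
    """Constant-time check that the token matches what we expect for this agent."""
    expected = hmac.new(secret, agent_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, presented_token)

# Only after the agent proves who it is do least privilege and monitoring even apply.
token = sign_agent_token("booking-agent-01")
assert authenticate_agent("booking-agent-01", token)
assert not authenticate_agent("booking-agent-01", "forged-token")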

David Puner: How do you keep tabs when the scale is vast? And in that lifecycle, which includes retirement or decommissioning, where should organizations be most vigilant? How do they get on top of this?

Yuval Moss: Ideally by starting now at the bottom.

David Puner: And where is the bottom? Where we are now?

Yuval Moss: Where we are now. Scale will be a problem, but today the bigger issue for enterprises is diversity. Agents will arrive from many directions: cloud providers, existing automation tools, and every SaaS app introducing agent features. Agents will run in the browser, on the phone, and on the desktop. Anyone in the organization can spin one up because it is just a query. Tell it what you need, maybe connect tools or a data source, and you have created an agent.

David Puner: And “anyone” includes attackers or wannabe attackers.

Yuval Moss: Attackers can learn to use them, but hopefully they are not inside the enterprise. Still, attackers can accelerate development of new attacks, scale existing methods, and iterate quickly. Without legal constraints in some places, they can create tools fast and use them to target organizations.

If we look at how enterprises are introducing agents, I am glad to see caution. They are not putting agents into highly critical infrastructure on day one. That is good. Trends move fast, and in the past, technology often landed without security’s involvement. That will happen with agents in pockets. My recommendation is to start now. Learn what it means to secure and manage agent identities.

David Puner: If we are at ground level, what are the biggest security challenges you have seen as organizations adopt autonomous systems and agents?

Yuval Moss: Many companies are still experimenting. They are taking organized steps to learn, even if they have not chosen the exact tools. There is also pressure. Leadership has promised productivity gains from AI. Agents are a big productivity promise. That creates tension and can push adoption before it is ready for prime time.

The identity challenge is unique. Agents are machines that work 24/7 and scale very fast, but they behave like humans. Companies have strong controls for human identities, but those were not built for agents. Some companies have decent machine identity controls for secrets and certificates. But a traditional application is not the same as an agent. Agents change constantly. They are not static.

So you must see this as a new identity type. Technically a machine, but with human-like behavior. As you move from lab experiments to production with real data, identity has to be the foundation. Reduce access so if an agent goes rogue, risk stays low. If it tries to connect to an unauthorized database, it should fail. If it tries to escalate, delete data, or share information externally, that should be blocked by design.

With identity, you can build trust in agents. A person can trust that the agent is the one it claims to be. A system can trust that the agent connecting to a database has the right key and certificate. Bringing agents into production will require identity as the foundation for security, compliance, and trust.

David Puner: So identity controls can respond to AI agents that are acting outside of their intended purpose — i.e., the rogue agents we were talking about earlier?

Yuval Moss: Well, they won’t be able to stop the agent from deciding to do something rogue, but identity allows you to put barriers around it. Think about an autonomous AI agent that wakes up every day — let’s call it a recruitment AI agent — and processes a recruitment application and passes it to the recruitment database. That’s what it’s supposed to do, so it has access to multiple systems. If that AI agent goes rogue, it can delete information in the database if it’s allowed to. So why give it that access? Let’s reduce its access to just what it needs, and the best way to do that is with identity.
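Taking that recruitment example literally, here is a minimal sketch of least-privilege scoping for an agent identity: the agent is granted only the two operations it needs, and everything else, including a rogue delete, fails closed. The policy structure and names are hypothetical, not a CyberArk API.

from dataclasses import dataclass

# The recruitment agent only needs to read incoming applications and insert
# them into the recruitment database; nothing else is granted.
RECRUITMENT_AGENT_POLICY = {
    ("applications_inbox", "read"),
    ("recruitment_db", "insert"),
}

@dataclass
class AgentAction:
    agent_id: str
    resource: str
    operation: str

def is_authorized(action: AgentAction, policy: set) -> bool:
    """Deny by default: allow only actions explicitly granted to this agent."""
    return (action.resource, action.operation) in policy

# A rogue delete against the database fails because it was never granted.
rogue = AgentAction("recruitment-agent-01", "recruitment_db", "delete")
assert not is_authorized(rogue, RECRUITMENT_AGENT_POLICY)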

David Puner: Are there new forms of AI-enabled insider risk that organizations should be considering right now?

Yuval Moss: When I look at AI agents as an insider — what you’re basically saying is that AI agents are operating like an application or an employee already inside the organization. An AI agent amplifies risk. So we gave the example of a developer using AI agents to build an application. The agent makes that developer more dangerous and their identity more dangerous. And that’s just one use case of AI agents.

As the technology evolves, AI agents will be used in higher-risk situations. They’ll be replacing employees in certain processes. They’ll actually be orchestrating processes in the organization. Think a few steps ahead — and you can already see this in marketing and customer support use cases — where an AI agent orchestrates a process that includes applications and people. In that orchestration, it is the manager, the director of an operation. It will receive information, direct the next steps, and manage that process through to completion — then even review the process to improve it for the next time.

So it’s not that there’s a new insider threat. It’s that AI agents — if they’re hijacked or if they go rogue — are given another layer of access in order to do those jobs. And the more you promote an AI agent to bigger roles in the company, to take more authority and direct more groups and orchestrate more processes, the more privileges it has, the more access it has, the more people it interacts with. Which means that if the AI agent itself…

Again, a person could do the same. But we’re talking about speed and an inability to see why the AI agent is making certain steps. Why is it doing certain things? Why did you delete the database? “Well, I don’t know. I can’t really tell you, but I did delete the database.” But there’s no way to really work back and understand what triggered that specific action — at least not in a consistent way. You can track that it’s doing it, but you can’t track why it’s doing it.

It’s very, very hard with an orchestrator agent that now has access to all these things if it decides to go rogue — because of misunderstanding its goals, because of some sort of technical failure, because it’s influenced by an attacker, or by a rogue insider who put data somewhere that trained the agent to do something it’s not supposed to do. Or if its identity gets hijacked — that’s an external one — but still its identity becomes really powerful. Misuse of that can cause dramatic damage, and therefore the risk level amplifies when you use AI agents compared to traditional applications and human-based processes.

David Puner: So there’s a lot to take in here, and I would encourage listeners — if they’re interested in this stuff — to check out some of the work you’ve done recently for the CyberArk blog and other publications. You cover things like the life and death of an AI agent — charting the path of an AI agent as being similar to a human’s — and how AI agents mirror humanity’s best and worst behaviors. Really interesting stuff. If there’s one thing that listeners — security leaders — can take away from this conversation about what they can do at the ground level — whether that’s recommendations for developing governance policies or even new roles to harness agentic AI safely — what would it be, and where should they start?

Yuval Moss: Well, I think first of all it’s taking the small steps as much as possible. Especially in an enterprise — you’re responsible for… Well, a lot of enterprises already have quite a lot of rigid, responsible processes — and those should stay in place. You shouldn’t bypass those processes. There’s a reason — because you’re protecting people’s finances, their health information, or whatever that is. So it’s very important to stand your ground in terms of managing the pace and starting small and learning and growing — rather than starting big and being the first one to do something.

And I think that’s something to keep in mind. If you think about securing your agents — that field didn’t exist until a few months ago. So anybody who’s doing it — you’re likely the first one to do it. It is a fascinating area. There are a lot of experts out there — but the reality is it’s not only new tech to use; securing it is something that nobody’s ever done before. So there are a lot of assumptions and theories, but you have to practically learn how to do that.

And I typically recommend seeing Zero Trust — and all the security professionals will know what Zero Trust is — but taking the Zero Trust approach and the principles of Zero Trust as your best friend.

David Puner: Never trust, always verify.

Yuval Moss: Exactly. Never trust, always verify. So an AI agent always needs to be verified before letting it in. You need to authenticate it. You need to make sure it is what it claims to be. Also, when Zero Trust talks about least privilege — reducing the attack surface as much as possible — we want to make sure that AI agents only have the rights they actually need and nothing beyond that.

And the agents — again — they might be proactive. They may ask for rights they never had before in order to do a job, because they want to be helpful. They want to do something nice for you. They want to surprise you for your birthday and do the job you’re supposed to do tomorrow on your behalf. So they’ll ask for access rights. What’s the process for ensuring they don’t overstep — that those extra rights aren’t provisioned automatically?

So controlling the access is very, very important. Assume breach is one of the principles of Zero Trust, and I think that’s a really good one for AI agents — not just from a security expertise perspective, but in implementing AI agents. If you think about what AI agents do — they replace, in many cases, human activity. And once it’s in — there is a reliance on that AI agent. And a few things can happen.

One — the AI agent can go crazy and cause damage — and you might need to turn it off. And if you turn it off, do you have a backup plan? So it’s not just assume breach in case the AI agent gets hacked or its identity gets compromised. It’s also about people. Does anybody have the skills? Do I need to hire people?

David Puner: So then when you say “turn off,” are you talking about with a kill switch? Is it possible to turn it off?

Yuval Moss: Yeah, that’s right. That’s the terminology. It is an application — and you could turn off the service that it’s running on, or the resources that it’s using, or with the right identity control, you can shut off its access. But yeah — you should be able to turn it off.
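As a rough illustration of what an identity-layer kill switch could look like, the sketch below disables an agent’s identity so that any further credential request it makes is refused. The registry class and token logic are hypothetical placeholders for whatever identity service an organization actually runs.

from datetime import datetime, timezone

class AgentIdentityRegistry:
    """Hypothetical identity service: the kill switch works by disabling the
    identity, not by chasing down every process the agent happens to be running."""

    def __init__(self) -> None:
        self._disabled: dict = {}  # agent_id -> timestamped reason, kept for the audit trail

    def kill(self, agent_id: str, reason: str) -> None:
        self._disabled[agent_id] = f"{datetime.now(timezone.utc).isoformat()}: {reason}"

    def issue_credential(self, agent_id: str) -> str:
        if agent_id in self._disabled:
            raise PermissionError(f"{agent_id} is disabled: {self._disabled[agent_id]}")
        return f"short-lived-token-for-{agent_id}"  # stand-in for a real token service

registry = AgentIdentityRegistry()
registry.kill("build-agent-07", "modified production during a change freeze")
# From this point, every access attempt by the agent fails at the identity layer.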

But that’s a really good point — do you have the mechanism to turn it off? And when you turn it off — what do you do? The more you build on these agents — and this goes back to some of the challenges — anybody can build an AI agent. And that means the skill set starts deteriorating. Because people in HR will build their agents. People in finance will build their AI agents. And suddenly they might not know how to do things. They’ve never built an application, never built an automation, never built a script. They don’t know what these processes are.

So that’s something to consider. But where security professionals are involved in the process, the assume breach concept is very, very important. Before getting in — understand what the worst situations are and what your plan is for those situations. And if you’re not comfortable — then maybe this is the wrong project at the moment. Maybe you should do something less risky.

Another principle — the last of the four key principles of Zero Trust — is to monitor the activity and put some adaptive policies in place. And if you’re not monitoring the activity — you wouldn’t know if the agent is doing something rogue. And you wouldn’t know to turn the kill switch on and go into those procedures.

Those are four fundamental things. And if you can implement them, I think you’ll be in much better shape. That is not a newly invented convention. Zero Trust may have only been around for a few years, but it works because of its fundamental ideas — not because of architectures or marketing. Those core ideas will tackle any security-related challenge.
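For the monitoring principle, here is a minimal sketch of what that could look like in practice: log every agent action and flag anything outside the agent’s expected baseline, so a human or the kill switch can step in. The baseline and alerting here are deliberately crude illustrations, not a production detection system.

import logging
from collections import Counter

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-activity")

# Hypothetical baseline of what this agent is expected to do in normal runs.
EXPECTED_OPERATIONS = {"read", "insert"}

def record_action(agent_id: str, resource: str, operation: str, tally: Counter) -> None:
    """Log every action; flag deviations so they can trigger review or the kill switch."""
    tally[(resource, operation)] += 1
    log.info("%s performed %s on %s", agent_id, operation, resource)
    if operation not in EXPECTED_OPERATIONS:
        log.warning("ALERT: %s deviated from its baseline (%s on %s)",
                    agent_id, operation, resource)

activity = Counter()
record_action("recruitment-agent-01", "recruitment_db", "insert", activity)
record_action("recruitment-agent-01", "recruitment_db", "delete", activity)  # flagged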

David Puner: Let’s go back to your 25 years here at CyberArk. I just hit five years — which in a way seems like a blink, and in other ways doesn’t, as five years shouldn’t. But of course, I was here for the dawn of generative AI and agentic AI, so it’s been a pretty wild ride. Looking back over your 25 years at CyberArk, what lessons from those early days did you find most relevant as organizations prepare for the next wave of technological change?

Yuval Moss: Well, over those 25 years there’s obviously an infinite amount of lessons — which unfortunately I didn’t write any books about, which I should have — but it’s too late for that. I can’t remember a thing. That information is gone. But there are a few things that I can already see.

And one of them is not to wait — and think like an attacker. Take that assume breach approach. Implement the right security controls. Start early.

We speak with a lot of security leaders, and they recognize this is coming — but they haven’t really taken steps. There might be some policy and governance around use of AI in the organization — and a lot of it might be more focused on AI rather than AI agents. That’s something to consider. Look in your policy — see if it’s included — and if not, you should start considering adding that.

But that’s still wording, and it’s policy, and whether people understand and follow it — that’s really hard. So, as much as possible, I would start early. I would start even building your own AI agents — maybe in the security realm — so you can learn how to do this with practical requirements, and then teach others through your experience rather than try to understand someone else’s project while doing something that’s never been done before.

I want to be able to gain the trust of the other person by showing that I’ve already done it before. Look in our labs, look at production — we’ve already built our AI agents.

I would also add — do not wait until regulation or compliance comes in and adapts to AI agents, because it’s usually not enough. The companies who have been successful in stopping attacks — not stopping them from happening, but stopping the damage they can cause — are the ones who implemented security controls with the purpose of cybersecurity and risk reduction rather than just ticking compliance boxes.

So in this situation — there is an opportunity to start early, think like an attacker, think about the security controls that are needed, and feel comfortable before allowing it to roll out in production.

David Puner: Yuval Moss, thank you so much for your time. It’s been great to have you on the podcast. And I did some quick mathematics here. I’m not a math guy, I’m a words guy, but in 2045 I will have caught up to you in seniority here at the company. So I’ll see you then. And along the way—

Yuval Moss: I have no idea how the world will look 25 years from now. There’ll be multiple digital versions of us. Scary to think about, but yeah. Good luck to all of us.

David Puner: Yuval, thanks so much for coming on the podcast. This has been great. Appreciate it.

Yuval Moss: Thanks so much. This has been awesome. Appreciate it.

David Puner: All right, there you have it. Thanks for listening to Security Matters. If you like this episode, please follow us wherever you do your podcast thing so you can catch new episodes as they drop. And if you feel so inclined, please leave us a review — we’d appreciate it very much, and so will the algorithmic winds.

What else? Drop us a line with questions, comments, and if you’re a cybersecurity professional and you have an idea for an episode, drop us a line. Our email address is SecurityMattersPodcast — all one word — @cyberark.com. We hope to see you next time.