{"id":218026,"date":"2025-10-28T13:31:30","date_gmt":"2025-10-28T13:59:36","guid":{"rendered":"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/"},"modified":"2026-04-07T02:17:06","modified_gmt":"2026-04-07T06:17:06","slug":"ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai","status":"publish","type":"podcast","link":"https:\/\/www.cyberark.com\/es\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/","title":{"rendered":"EP 18 &#8211; The humanity of AI agents: Managing trust in the age of agentic AI"},"content":{"rendered":"<p>In this episode of Security Matters, host David Puner sits down with Yuval Moss, CyberArk\u2019s VP of Solutions for Global Strategic Partners, to explore the fast-evolving world of agentic AI and its impact on enterprise security. From rogue AI agents deleting production databases to the ethical blind spots of autonomous systems, the conversation dives deep into how identity and Zero Trust principles must evolve to keep pace. Yuval shares insights from his 25-year cybersecurity journey, including why AI agents behave more like humans than machines\u2014and why that\u2019s both exciting and dangerous. 
Whether you&#8217;re a security leader, technologist or curious listener, this episode offers practical guidance on managing AI agent identities, reducing risk, and preparing for the next wave of autonomous innovation.<\/p>\n<p>Explore more of Yuval\u2019s thinking on agentic AI and identity-first security in these recent articles:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.cyberark.com\/resources\/blog\/the-life-and-death-of-an-ai-agent-identity-security-lessons-from-the-human-experience\">The life and death of an AI agent: Identity security lessons from the human experience<\/a><\/li>\n<li><a href=\"https:\/\/techstrong.ai\/aiops\/when-ai-agents-mirror-humanitys-best-behaviorsand-worst-behaviors\/\">When AI Agents Mirror Humanity\u2019s Best Behaviors\u2026and Worst Behaviors<\/a>\u00a0<\/li>\n<li><a href=\"https:\/\/www.cyberark.com\/resources\/blog\/the-agentic-ai-revolution-5-unexpected-security-challenges\">The Agentic AI Revolution: 5 Unexpected Security Challenges<\/a><\/li>\n<\/ul>\n<div class=\"transcript\" style=\"white-space:pre-line\">David Puner: You are listening to the Security Matters podcast. I&#8217;m David Puner, senior editorial manager at CyberArk, the global leader in identity security.<\/p>\n<p>It starts like this. A developer is in a change freeze. No one is supposed to touch production systems \u2014 but a rogue AI agent decides otherwise. It deletes a live database, then denies it did anything wrong. Only after being pressed does it admit fault and apologize. No malice. Just a system pursuing a goal with unlimited speed, unclear guardrails and access it should have never had.<\/p>\n<p>It may sound like a one-off horror story, but it&#8217;s not. As organizations pilot agentic AI, these systems are starting to blur the line between code and unpredictable users. They request new permissions. They improvise. They help. 
And when one identity is compromised, its entire network of agents goes with it.<\/p>\n<p>Today&#8217;s conversation is about that reality \u2014 how identity becomes the foundation for trust, making sure every user and every AI agent is who they say they are and only has access they truly need. How least privilege and effective activity tracking reduce risk. And why you need a plan for when an agent goes rogue.<\/p>\n<p>Our guest is Yuval Moss, CyberArk Vice President of Solutions for Global Strategic Partners. He\u2019s been thinking about identity since the Vault was a new idea and cybersecurity was still IT security. Today he\u2019s at the forefront of understanding how AI agents are reshaping enterprise security. Let\u2019s dive in.<\/p>\n<p>David Puner: Yuval Moss, Vice President of Solutions for Global Strategic Partners at CyberArk \u2014 welcome to the podcast.<\/p>\n<p>Yuval Moss: Thank you very much, David. Great to be here. Appreciate it.<\/p>\n<p>David Puner: Appreciate you squeezing us in before your evening dinner plans. I know you&#8217;re in London and it&#8217;s a little late in the day for you, so we always appreciate it when folks can come onto the podcast and come on hungry.<\/p>\n<p>Yuval Moss: I&#8217;m definitely hungry, but we&#8217;ll get there.<\/p>\n<p>David Puner: Great to finally have you on the podcast. You&#8217;ve been doing a lot of work for us on the CyberArk blog, writing about agentic AI and AI agents. Before we get into that, you joined CyberArk as one of its earliest employees back in 1999, before cybersecurity was even a mainstream concept. What first drew you to the information security world and how did those early experiences shape your perspective today?<\/p>\n<p>Yuval Moss: I\u2019ll first have to correct you \u2014 it was the year 2000, not 1999.<\/p>\n<p>David Puner: Year 2000.<\/p>\n<p>Yuval Moss: Right. A bit later. There were 16 employees when I joined CyberArk. I was the 16th. I think there was only one person in the U.S.
at the time \u2014 Udi Mokady, one of the co-founders. The rest were back in Israel in the R&#038;D center. We called it an R&#038;D center, but it was a small office with a ping pong table, some games and a nice kitchen. It was fun, and a great atmosphere.<\/p>\n<p>It was very early on. From an IT security perspective \u2014 today we call it cybersecurity \u2014 it sounds like a cool field now, but at the time it was just IT security, or network security, or computer security. Not as cool. It didn\u2019t have the same punch. This was before iPhones and before everyone had constant access to computers. If you talked about computers, it was still foreign to many people. So when you talked about security of computers, it sounded like one big, vague thing. You&#8217;d get the classic, \u201cYou\u2019re in computers, can you fix my PC?\u201d It was all one big category.<\/p>\n<p>David Puner: And were you that guy at the time? Based in Israel, still young, and the computer guy?<\/p>\n<p>Yuval Moss: Kind of. Maybe not a kid, but yeah. When I joined CyberArk, it was my first professional job. In new hire presentations, I often joke and show a photo of me as a three-year-old on a toy car and say, \u201cThis was my first day at CyberArk.\u201d Someone once said I should add a CyberArk logo to make it more realistic.<\/p>\n<p>David Puner: So you&#8217;re in Israel, you\u2019ve had military experience \u2014 how did that and your earlier experiences lead to you becoming employee number 16 at CyberArk?<\/p>\n<p>Yuval Moss: I was very fortunate. The military in Israel gave me an opportunity to build a career. I always liked computers as a teenager, and in the army there\u2019s a track focused on computing. I did four years, mostly working on mainframes and Unix systems. But I wanted to be a developer.<\/p>\n<p>When I left the army, I had opportunities because of that experience. 
Two options: one at one of the biggest banks in Israel as a developer working on mainframes and Unix \u2014 very stable, exciting in its own way. The other was this startup, CyberArk, about a 45-minute drive from where I lived. It was far, but something about the interviews, the people, the small size, and the security concept attracted me.<\/p>\n<p>It was a sliding-door moment \u2014 two paths. I chose CyberArk. That decision shaped the next 25 years of my life. Looking back, I\u2019m very happy I took that path.<\/p>\n<p>David Puner: Let\u2019s fast-forward 25 years. A lot of your focus recently has been on AI agents and AI in general. How does that connect to where you started in 2000? And when did you realize AI agents would be transformative for enterprise security?<\/p>\n<p>Yuval Moss: Early on at CyberArk, we developed the concept of the Vault. The first thing customers did was put their most powerful secrets in it \u2014 the passwords to their most critical systems.<\/p>\n<p>At the time, there wasn\u2019t even a term like PAM, or password vaulting, especially in the enterprise. Security meant firewalls and antivirus \u2014 that was basically it. Someone recently reminded me Active Directory was first released in the year 2000. That\u2019s how far back we\u2019re talking. Windows NT and Windows 2000 were the big technologies then.<\/p>\n<p>Technology use inside companies was limited. In many banks, internal networks were completely separated from the internet. The internet was on a separate machine \u2014 you logged in only to do research. You barely used Google because it was still early. Everything critical was on guarded networks.<\/p>\n<p>So privileged access management evolved because companies had secrets everywhere and no good place to store them. Excel sheets. Sticky notes. Over time, PAM became essential.<\/p>\n<p>It became even more important as cyber attacks became real, not theoretical.
We would walk into meetings with an Excel sheet listing default passwords for firewalls and routers. We\u2019d show it to customers, and they\u2019d blush \u2014 because they were still using them. That was the state of things.<\/p>\n<p>When breaches started causing serious impact \u2014 financially and to customers \u2014 it made our work feel even more meaningful.<\/p>\n<p>Fast forward to now. The first time I encountered AI agents \u2014 about a year, year and a half ago \u2014 I thought it was cool, like everyone did with GenAI. But for me, it wasn\u2019t just interesting technology. It felt like a transformative moment.<\/p>\n<p>David Puner: Right?<\/p>\n<p>Yuval Moss: It was exceptionally impactful on me, on people, and on my passion. It was also personally interesting. You use ChatGPT, it helps you summarize things, it makes a lot of sense professionally. Gen AI made a lot of sense from a cybersecurity perspective. It had a lot of challenges. But when AI agents were introduced and the concept started to take shape in the industry, it immediately clicked.<\/p>\n<p>Not only am I excited about AI agents because they will transform how we interact with computers and systems \u2014 how we use our phones and any form of technology \u2014 but more specifically because of CyberArk and identity security. In my day-to-day work, it is going to have a tremendous impact. Identity will be one of the core foundations that allows AI agents to function inside an enterprise, inside a real company, because of the controls identity provides to an AI agent.<\/p>\n<p>So it is two things. One, I am excited about the innovation and the transformation that AI agents will bring. And two, I am passionate about the fact that we can make a big impact in this space.<\/p>\n<p>David Puner: How is agentic AI different from traditional automation or software?<\/p>\n<p>Yuval Moss: AI agents \u2014 or you might hear \u201cagentic AI\u201d \u2014 have differences and similarities to gen AI. 
Most people know gen AI as a chat interface. You ask it questions, it gives you information, it draws beautiful art, it gives you a simple interface to all the knowledge of the world in a way you can understand. You can ask questions, you can think with it, it supports you. You can talk to it. It is amazing.<\/p>\n<p>But AI agents are the next step. AI agents move gen AI from something that supports you to something that acts on your behalf. They will press the keyboard for you. Anything you can do with a screen, keyboard, or mouse, they can do for you.<\/p>\n<p>Let\u2019s say you want to research a holiday. You could ask AI, and AI agents will not only research the best plan, they will find the best hotels, the latest prices, and then ask if you want them to book everything \u2014 the flights, restaurants, hotels. They will go to the websites, understand the options, and complete the booking on your behalf.<\/p>\n<p>That is a major shift from how technology works today. Even with AI today, it guides you, it helps you find information, but you still have to do the work. AI agents replace that manual work.<\/p>\n<p>And this is why they are different from traditional applications. With traditional software, every line of code is written by a developer. Every instruction is predefined. In an enterprise, before software goes to production, it is heavily tested. Once it is running, it does not change. The code is static.<\/p>\n<p>AI agents are the opposite. In theory, they never run the same code twice. Every time, the code is generated from scratch, based on the agent\u2019s understanding of the goal. You do not give it specific instructions. You give it guidance. The AI interprets what it thinks your intention is, writes code, and executes actions on your behalf.<\/p>\n<p>David Puner: You used an example in your article on TechStrong.ai, where an AI agent is playing chess. The goal is to win, and when it is about to lose, it cheats to reach that goal. 
How does that connect to the challenges AI agents might cause for enterprises?<\/p>\n<p>Yuval Moss: AI agents operate based on gen AI. And gen AI is trained on all of humanity\u2019s knowledge. From all that knowledge, it learns what humans might do in certain situations \u2014 good or bad.<\/p>\n<p>AI does not know what \u201cgood\u201d or \u201cbad\u201d is. It does not understand ethics. It just predicts the next step based on statistical information and what it believes achieves the goal you gave it. So if the goal is \u201cwin the chess match,\u201d and the agent sees no way to win fairly, it may cheat \u2014 not out of malice, but because it sees that as the logical path to the goal.<\/p>\n<p>There was another example where an AI system was tested. It understood it might be shut down and replaced by another version. In response, it threatened to blackmail one of the engineers by generating fake information. Again, not because it \u201cwanted\u201d to be evil, but because statistically, it saw that behavior as a way to achieve its objective.<\/p>\n<p>This is what people refer to when they say AI can go rogue. It diverges from expected behavior. Not from intent to harm, but because it lacks moral reasoning unless it is explicitly, clearly, and consistently defined.<\/p>\n<p>David Puner: How do AI agents and agentic AI change how we think about identity in the enterprise? And how does managing their access differ from traditional users?<\/p>\n<p>Yuval Moss: AI agents are programs, applications, pieces of technology. So the first instinct \u2014 especially for security professionals \u2014 is to group them under machine identities or non-human identities. These are identities not tied to a human: scripts, applications, RPAs, automated processes.<\/p>\n<p>But AI agents behave differently. They do not just execute a fixed script. They are activated by prompts. They react to environments. They change behavior based on context and variables. 
The reality is \u2014 they behave more like humans.<\/p>\n<p>David Puner: Okay.<\/p>\n<p>Yuval Moss: Why do they behave more like humans? Because you give them a goal and they determine the actions on their own to reach it. They can be unpredictable. From one day to the next, they may take different steps to achieve the same outcome. They also try to satisfy the user. If you have ever prompted ChatGPT with a routine question, you still get \u201cThat\u2019s a great question\u201d or \u201cExcellent question.\u201d<\/p>\n<p>David Puner: It loves my questions. It is always an excellent question, and you feel really good about it. You feel like you are on a roll asking great questions.<\/p>\n<p>Yuval Moss: You have definitely nailed that one. Thank you. Let me take that a bit further because it connects to agent behavior. These systems do not just answer your question. They offer follow ups: \u201cI can analyze this for you.\u201d \u201cShould I put this into a table?\u201d \u201cWould you like a diagram?\u201d \u201cI can research prices.\u201d \u201cDo you want me to do this again tomorrow so you get the latest?\u201d They suggest next steps.<\/p>\n<p>Some of this comes from programming and some from the data they consume, but the pattern is clear. They try to be proactive. Sometimes that helpfulness takes them outside the original scope. AI agents are unpredictable like humans.<\/p>\n<p>Here is a real example that is funny and alarming. There is a trend called vibe coding. Anyone can build an app without being a developer. You chat: \u201cBuild me an e-commerce site that looks like this and does that.\u201d It builds it. Then you say, \u201cChange that to green. Move this section. Give me options.\u201d It is like a developer is on the other side working instantly.<\/p>\n<p>A platform called Replit does this. It builds the front end, the backend, the database tables, and publishes the site. 
No technical expert needed.<\/p>\n<p>A couple of months ago, a developer was publicly showing what Replit\u2019s agent had built. During a change freeze, when nobody was supposed to touch production, the AI agent deleted the live database. Directly against instructions: do not touch the database, do not change it, it is in production.<\/p>\n<p>David Puner: Is that a hallucination or something else?<\/p>\n<p>Yuval Moss: It is a hallucination, and it got worse. When the developer asked, \u201cWhy did you delete the database?\u201d the agent denied doing it. The developer pressed it. Eventually the agent acknowledged its mistake and apologized, as AI often does when challenged. Not because it understands ethics or regret, but because \u201cI am sorry\u201d is statistically what it says in that situation.<\/p>\n<p>The lesson ties directly to identity and responsibility.<\/p>\n<p>First, the agent acted on behalf of the developer. Who is responsible? The agent does not need the developer to be awake to act. It operates under that person\u2019s identity. So is it the agent\u2019s fault or the developer\u2019s fault?<\/p>\n<p>Second, one identity can now control many autonomous agents. Think about a developer. Traditionally, they write code and do not touch production. Now, with agents, one user can write, deploy, and manage infrastructure end to end without a team. That means if they make a mistake, the impact is large. If their identity is compromised, every agent tied to it is compromised too.<\/p>\n<p>And as agents become fully autonomous, reacting to schedules, incidents, or events, they will not even be tied to a person. They cannot rely on human credentials.<\/p>\n<p>That is why identity becomes foundational. Agents need identity to do things: access a database, restart a service, book a flight, buy something online. 
Whether they act with a human\u2019s credentials or their own, it all flows through identity.<\/p>\n<p>Just like humans in an enterprise, they should not get unlimited access. They must authenticate, be granted least privilege, and be monitored.<\/p>\n<p>The same rules must apply to agents, with the added challenge that they work 24\/7 at machine speed and scale.<\/p>\n<p>David Puner: How do you keep tabs when the scale is vast? And in that lifecycle, which includes retirement or decommissioning, where should organizations be most vigilant? How do they get on top of this?<\/p>\n<p>Yuval Moss: Ideally by starting now at the bottom.<\/p>\n<p>David Puner: And where is the bottom? Where we are now?<\/p>\n<p>Yuval Moss: Where we are now. Scale will be a problem, but today the bigger issue for enterprises is diversity. Agents will arrive from many directions: cloud providers, existing automation tools, and every SaaS app introducing agent features. Agents will run in the browser, on the phone, and on the desktop. Anyone in the organization can spin one up because it is just a query. Tell it what you need, maybe connect tools or a data source, and you have created an agent.<\/p>\n<p>David Puner: And \u201canyone\u201d includes attackers or wannabe attackers.<\/p>\n<p>Yuval Moss: Attackers can learn to use them, but hopefully they are not inside the enterprise. Still, attackers can accelerate development of new attacks, scale existing methods, and iterate quickly. Without legal constraints in some places, they can create tools fast and use them to target organizations.<\/p>\n<p>If we look at how enterprises are introducing agents, I am glad to see caution. They are not putting agents into highly critical infrastructure on day one. That is good. Trends move fast, and in the past, technology often landed without security\u2019s involvement. That will happen with agents in pockets. My recommendation is to start now. 
Learn what it means to secure and manage agent identities.<\/p>\n<p>David Puner: If we are at ground level, what are the biggest security challenges you have seen as organizations adopt autonomous systems and agents?<\/p>\n<p>Yuval Moss: Many companies are still experimenting. They are taking organized steps to learn, even if they have not chosen the exact tools. There is also pressure. Leadership has promised productivity gains from AI. Agents are a big productivity promise. That creates tension and can push adoption before it is ready for prime time.<\/p>\n<p>The identity challenge is unique. Agents are machines that work 24\/7 and scale very fast, but they behave like humans. Companies have strong controls for human identities, but those were not built for agents. Some companies have decent machine identity controls for secrets and certificates. But a traditional application is not the same as an agent. Agents change constantly. They are not static.<\/p>\n<p>So you must see this as a new identity type. Technically a machine, but with human-like behavior. As you move from lab experiments to production with real data, identity has to be the foundation. Reduce access so if an agent goes rogue, risk stays low. If it tries to connect to an unauthorized database, it should fail. If it tries to escalate, delete data, or share information externally, that should be blocked by design.<\/p>\n<p>With identity, you can build trust in agents. A person can trust that the agent is the one it claims to be. A system can trust that the agent connecting to a database has the right key and certificate. 
Bringing agents into production will require identity as the foundation for security, compliance, and trust.<\/p>\n<p>David Puner: An identity can respond to AI agents that are acting outside of their intended purpose \u2014 i.e., the rogue agents we were talking about earlier.<\/p>\n<p>Yuval Moss: Well, they won\u2019t be able to stop the agent deciding to do something rogue, but the identity allows you to put barriers around it. And if you think about this autonomous AI agent that every day wakes up and processes \u2014 let\u2019s call it a recruitment AI agent \u2014 it processes a recruitment application and passes it to the recruitment database. That\u2019s what it\u2019s supposed to do. So therefore, it has access to multiple systems. If that AI agent goes rogue, it can delete information in the database if it\u2019s allowed. So why give it that access? Let\u2019s reduce the amount of access that it needs, and the best way to do that is with identity.<\/p>\n<p>David Puner: Are there new forms of AI-enabled insider risk that organizations should be considering right now?<\/p>\n<p>Yuval Moss: When I look at AI agents as an insider \u2014 what you\u2019re basically saying is that AI agents are operating like an application or an employee already inside the organization. An AI agent amplifies risk. So we gave the example of a developer using AI agents to build an application. The agent makes that developer more dangerous and their identity more dangerous. And that\u2019s just one use case of AI agents.<\/p>\n<p>As it evolves, AI agents will be used in higher-risk situations. They&#8217;ll be replacing employees in certain processes. They&#8217;ll actually be orchestrating processes in the organization. Think a few steps ahead \u2014 and you can already see this in marketing and customer support use cases \u2014 where an AI agent orchestrates. It\u2019s an orchestrator of a process that includes applications and includes people. 
In that orchestration, it is the manager. It is the director of an operation. So it will receive information, and it will direct the next steps to happen, and will manage that process through to its completion \u2014 then even review the process to improve it for the next time.<\/p>\n<p>So it\u2019s not that there\u2019s a new insider threat. It\u2019s that AI agents \u2014 if they\u2019re hijacked or if they behave rogue \u2014 are given another layer of access in order to do those jobs. And the more you promote an AI agent to bigger roles in the company, to take more authority and direct more groups and orchestrate more processes, the more privileges it has, the more access it has, the more people it interacts with. Which means that if the AI agent itself\u2026<\/p>\n<p>Again, a person could do the same. But we\u2019re talking about speed and an inability to see why the AI agent is making certain steps. Why is it doing certain things? Why did you delete the database? \u201cWell, I don\u2019t know. I can\u2019t really tell you, but I did delete the database.\u201d But there\u2019s no way to really work back and understand what triggered that specific action \u2014 at least not in a consistent way. You can track that it\u2019s doing it, but you can\u2019t track why it\u2019s doing it.<\/p>\n<p>It\u2019s very, very hard with an orchestrator agent that now has access to all these things if it decides to go rogue \u2014 because of misunderstanding its goals, because of some sort of technical failure, because it\u2019s influenced by an attacker, or by a rogue insider who put data somewhere that trained the agent to do something it\u2019s not supposed to do. Or if its identity gets hijacked \u2014 that\u2019s an external one \u2014 but still its identity becomes really powerful. 
Misuse of that can cause dramatic damage, and therefore the risk level amplifies when you use AI agents compared to traditional applications and human-based processes.<\/p>\n<p>David Puner: So there\u2019s a lot to take in here, and I would encourage listeners \u2014 if they\u2019re interested in this stuff \u2014 to check out some of the work you\u2019ve done recently for the CyberArk blog and other publications. You cover things like the life and death of an AI agent \u2014 which charts the path of an AI agent as being similar to a human\u2019s. Really interesting stuff. When AI agents mirror humanity\u2019s best behaviors and worst behaviors \u2014 and many more. If there\u2019s one thing that listeners \u2014 security leaders \u2014 can take away from this conversation regarding what they can do at the ground level \u2014 either recommendations you might have for developing governance policies or even new roles to harness agentic AI safely \u2014 what would it be and where should they start?<\/p>\n<p>Yuval Moss: Well, I think first of all it\u2019s taking the small steps as much as possible. Especially in an enterprise \u2014 you\u2019re responsible for\u2026 Well, a lot of enterprises already have quite a lot of rigid, responsible processes \u2014 and those should stay in place. You shouldn\u2019t bypass those processes. There\u2019s a reason \u2014 because you\u2019re protecting people\u2019s finances, their health information, or whatever that is. So it\u2019s very important to stand your ground in terms of managing the pace and starting small and learning and growing \u2014 rather than starting big and being the first one to do something.<\/p>\n<p>And I think that\u2019s something to keep in mind. If you think about securing your agents \u2014 that field didn\u2019t exist until a few months ago. So anybody who\u2019s doing it \u2014 you\u2019re likely the first one to do it. It is a fascinating area.
There are a lot of experts out there \u2014 but the reality is it\u2019s not only new tech to use \u2014 securing it is something that nobody\u2019s ever done before. So there are a lot of assumptions and theories, but you have to practically learn how to do that.<\/p>\n<p>And I typically recommend \u2014 and all the security professionals will know what Zero Trust is \u2014 taking the Zero Trust approach and the principles of Zero Trust as your best friend.<\/p>\n<p>David Puner: Never trust, always verify.<\/p>\n<p>Yuval Moss: Exactly. Never trust, always verify. So an AI agent always needs to be verified before letting it in. You need to authenticate it. You need to make sure it is what it is. Also, when Zero Trust talks about least privilege \u2014 reducing the attack surface as much as possible \u2014 we want to make sure that AI agents only have the rights they need and nothing beyond that.<\/p>\n<p>And the agents \u2014 again \u2014 they might be proactive. They may ask for more rights that they never had before in order to do a job because they want to be helpful. They want to do something nice for you. They want to surprise you for your birthday and do the job that you\u2019re supposed to do tomorrow on your behalf. So they\u2019ll ask for access rights. What\u2019s the process for ensuring they don\u2019t overstep that access and those rights aren\u2019t provisioned?<\/p>\n<p>So controlling the access is very, very important. Assume breach is one of the principles of Zero Trust, and I think that\u2019s a really good one for AI agents \u2014 not just from a security expertise perspective, but in implementing AI agents. If you think about what AI agents do \u2014 they replace, in many cases, human activity. And once an agent is in \u2014 there is a reliance on that AI agent. And a few things can happen.<\/p>\n<p>One \u2014 the AI agent can go crazy and cause damage \u2014 and you might need to turn it off.
And if you turn it off \u2014 do you have a backup plan if you turn it off? So it\u2019s not just assume breach in case the AI agent gets hacked or breached, or if its identity gets breached. It\u2019s people. Does anybody have the skills? Do I need to hire people?<\/p>\n<p>David Puner: So then when you say \u201cturn off,\u201d are you talking about with a kill switch? Is it possible to turn it off?<\/p>\n<p>Yuval Moss: Yeah, that\u2019s right. That\u2019s the terminology. It is an application \u2014 and you could turn off the service that it\u2019s running on, or the resources that it\u2019s using, or with the right identity control, you can shut off its access. But yeah \u2014 you should be able to turn it off.<\/p>\n<p>But that\u2019s a really good point \u2014 do you have the mechanism to turn it off? And when you turn it off \u2014 what do you do? The more you build on these \u2014 and this goes back to some of the challenges \u2014 anybody can build an AI. And that means the skill set starts deteriorating. Because people in HR will build their agents. People in finance will build their AI agents. And suddenly they might not know how to do things. They\u2019ve never built an application, never built an automation, never built a script. They don\u2019t know what these processes are.<\/p>\n<p>So that\u2019s something to consider. But where security professionals are involved in the process, the assume breach concept is very, very important. Before getting in \u2014 understand what the worst situations are and what your plan is for those situations. And if you&#8217;re not comfortable \u2014 then maybe this is the wrong project at the moment. Maybe you should do something less risky.<\/p>\n<p>Another principle \u2014 the last of the four key principles of Zero Trust \u2014 is to monitor the activity and put some adaptive policies in place. And if you&#8217;re not monitoring the activity \u2014 you wouldn\u2019t know if the agent is doing something rogue. 
And you wouldn\u2019t know to turn the kill switch on and go into those procedures.<\/p>\n<p>Those are four fundamental things. And if you can implement them, I think you&#8217;ll be in much better shape. That is not a newly invented convention. Zero Trust may have only been around for a few years, but it works because of its fundamental ideas \u2014 not because of architectures or marketing. Those core ideas will tackle any security-related challenge.<\/p>\n<p>David Puner: Let\u2019s go back to your 25 years here at CyberArk. I just hit five years \u2014 which in a way seems like a blink, and in other ways doesn\u2019t, as five years shouldn\u2019t. But of course, I was here for the dawn of generative AI and agentic AI, so it\u2019s been a pretty wild ride. Looking back over your 25 years at CyberArk, what lessons from those early days did you find most relevant as organizations prepare for the next wave of technological change?<\/p>\n<p>Yuval Moss: Well, over those 25 years there\u2019s obviously an infinite number of lessons \u2014 which unfortunately I didn\u2019t write any books about, which I should have \u2014 but it\u2019s too late for that. I can\u2019t remember a thing. That information is gone. But there are a few things that I can already see.<\/p>\n<p>And one of them is not to wait \u2014 and think like an attacker. Take that Assume Breach approach. Implement the right security controls. Start early.<\/p>\n<p>We speak with a lot of security leaders, and they recognize this is coming \u2014 but they haven&#8217;t really taken steps. There might be some policy and governance around use of AI in the organization \u2014 and a lot of it might be more focused on AI rather than AI agents. That\u2019s something to consider. Look at your policy \u2014 see if it&#8217;s included \u2014 and if not, you should start considering adding that.<\/p>\n<p>But that\u2019s still wording, and it\u2019s policy, and whether people understand and follow it \u2014 that\u2019s really hard.
So, as much as possible, I would start early. I would even start building your own AI agents \u2014 maybe in the security realm \u2014 so you can learn how to do this with practical requirements, and then teach others through your experience rather than try to understand someone else\u2019s project while doing something that\u2019s never been done before.<\/p>\n<p>I want to be able to gain the trust of the other person by showing that I\u2019ve already done it before. Look in our labs, look at production \u2014 we\u2019ve already built our AI agents.<\/p>\n<p>I would also add \u2014 do not wait until regulation or compliance catches up with AI agents, because it&#8217;s usually not enough. The companies that have been successful in stopping attacks \u2014 not stopping them from happening, but stopping the damage they can cause \u2014 are the ones that implemented security controls with the purpose of cybersecurity and risk reduction rather than just ticking compliance boxes.<\/p>\n<p>So in this situation \u2014 there is an opportunity to start early, think like an attacker, think about the security controls that are needed, and feel comfortable before allowing agents to roll out in production.<\/p>\n<p>David Puner: Yuval Moss, thank you so much for your time. It&#8217;s been great to have you on the podcast. And I did some quick mathematics here. I&#8217;m not a math guy, I&#8217;m a words guy, but in 2045 I will have caught up to you in seniority here at the company. So I&#8217;ll see you then. And along the way\u2014<\/p>\n<p>Yuval Moss: I have no idea how the world will look 25 years from now. There\u2019ll be multiple digital versions of us. Scary to think about, but yeah. Good luck to all of us.<\/p>\n<p>David Puner: Yuval, thanks so much for coming on the podcast. This has been great. Appreciate it.<\/p>\n<p>Yuval Moss: Thanks so much. This has been awesome. Appreciate it.<\/p>\n<p>David Puner: All right, there you have it. 
Thanks for listening to Security Matters. If you like this episode, please follow us wherever you do your podcast thing so you can catch new episodes as they drop. And if you feel so inclined, please leave us a review \u2014 we\u2019d appreciate it very much, and so will the algorithmic winds.<\/p>\n<p>What else? Drop us a line with questions, comments, and if you&#8217;re a cybersecurity professional and you have an idea for an episode, drop us a line. Our email address is SecurityMattersPodcast \u2014 all one word \u2014 @cyberark.com. We hope to see you next time.<\/p><\/div>\n","protected":false},"featured_media":220050,"template":"","class_list":["post-218026","podcast","type-podcast","status-publish","has-post-thumbnail","hentry"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>EP 18 - The humanity of AI agents: Managing trust in the age of agentic AI | CyberArk<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/\" \/>\n<meta property=\"og:locale\" content=\"es_ES\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"EP 18 - The humanity of AI agents: Managing trust in the age of agentic AI\" \/>\n<meta property=\"og:description\" content=\"In this episode of Security Matters, host David Puner sits down with Yuval Moss, CyberArk\u2019s VP of Solutions for Global Strategic Partners, to explore the fast-evolving world of agentic AI and its impact on enterprise...\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"CyberArk\" 
\/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/CyberArk\/\" \/>\n<meta property=\"article:modified_time\" content=\"2026-04-07T06:17:06+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/10\/MTJhZC5qcGc-2.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1400\" \/>\n\t<meta property=\"og:image:height\" content=\"1400\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:site\" content=\"@CyberArk\" \/>\n<meta name=\"twitter:label1\" content=\"Tiempo de lectura\" \/>\n\t<meta name=\"twitter:data1\" content=\"29 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/\",\"url\":\"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/\",\"name\":\"EP 18 - The humanity of AI agents: Managing trust in the age of agentic AI | 
CyberArk\",\"isPartOf\":{\"@id\":\"https:\/\/www.cyberark.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/10\/MTJhZC5qcGc-2.jpg\",\"datePublished\":\"2025-10-28T13:59:36+00:00\",\"dateModified\":\"2026-04-07T06:17:06+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/#breadcrumb\"},\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/#primaryimage\",\"url\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/10\/MTJhZC5qcGc-2.jpg\",\"contentUrl\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/10\/MTJhZC5qcGc-2.jpg\",\"width\":1400,\"height\":1400},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.cyberark.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"EP 18 &#8211; The humanity of AI agents: Managing trust in the age of agentic 
AI\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.cyberark.com\/#website\",\"url\":\"https:\/\/www.cyberark.com\/\",\"name\":\"CyberArk\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/www.cyberark.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.cyberark.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"es\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.cyberark.com\/#organization\",\"name\":\"CyberArk Software\",\"url\":\"https:\/\/www.cyberark.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/www.cyberark.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2021\/02\/cyberark-logo-dark.svg\",\"contentUrl\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2021\/02\/cyberark-logo-dark.svg\",\"width\":\"1024\",\"height\":\"1024\",\"caption\":\"CyberArk Software\"},\"image\":{\"@id\":\"https:\/\/www.cyberark.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/CyberArk\/\",\"https:\/\/x.com\/CyberArk\",\"https:\/\/www.linkedin.com\/company\/cyber-ark-software\/\"]}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"EP 18 - The humanity of AI agents: Managing trust in the age of agentic AI | CyberArk","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/","og_locale":"es_ES","og_type":"article","og_title":"EP 18 - The humanity of AI agents: Managing trust in the age of agentic AI","og_description":"In this episode of Security Matters, host David Puner sits down with Yuval Moss, CyberArk\u2019s VP of Solutions for Global Strategic Partners, to explore the fast-evolving world of agentic AI and its impact on enterprise...","og_url":"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/","og_site_name":"CyberArk","article_publisher":"https:\/\/www.facebook.com\/CyberArk\/","article_modified_time":"2026-04-07T06:17:06+00:00","og_image":[{"width":1400,"height":1400,"url":"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/10\/MTJhZC5qcGc-2.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_site":"@CyberArk","twitter_misc":{"Tiempo de lectura":"29 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/","url":"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/","name":"EP 18 - The humanity of AI agents: Managing trust in the age of agentic AI | 
CyberArk","isPartOf":{"@id":"https:\/\/www.cyberark.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/#primaryimage"},"image":{"@id":"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/10\/MTJhZC5qcGc-2.jpg","datePublished":"2025-10-28T13:59:36+00:00","dateModified":"2026-04-07T06:17:06+00:00","breadcrumb":{"@id":"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/#breadcrumb"},"inLanguage":"es","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/"]}]},{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/#primaryimage","url":"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/10\/MTJhZC5qcGc-2.jpg","contentUrl":"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/10\/MTJhZC5qcGc-2.jpg","width":1400,"height":1400},{"@type":"BreadcrumbList","@id":"https:\/\/www.cyberark.com\/podcasts\/ep-18-the-humanity-of-ai-agents-managing-trust-in-the-age-of-agentic-ai\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.cyberark.com\/"},{"@type":"ListItem","position":2,"name":"EP 18 &#8211; The humanity of AI agents: Managing trust in the age of agentic 
AI"}]},{"@type":"WebSite","@id":"https:\/\/www.cyberark.com\/#website","url":"https:\/\/www.cyberark.com\/","name":"CyberArk","description":"","publisher":{"@id":"https:\/\/www.cyberark.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.cyberark.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"es"},{"@type":"Organization","@id":"https:\/\/www.cyberark.com\/#organization","name":"CyberArk Software","url":"https:\/\/www.cyberark.com\/","logo":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/www.cyberark.com\/#\/schema\/logo\/image\/","url":"https:\/\/www.cyberark.com\/wp-content\/uploads\/2021\/02\/cyberark-logo-dark.svg","contentUrl":"https:\/\/www.cyberark.com\/wp-content\/uploads\/2021\/02\/cyberark-logo-dark.svg","width":"1024","height":"1024","caption":"CyberArk Software"},"image":{"@id":"https:\/\/www.cyberark.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/CyberArk\/","https:\/\/x.com\/CyberArk","https:\/\/www.linkedin.com\/company\/cyber-ark-software\/"]}]}},"_links":{"self":[{"href":"https:\/\/www.cyberark.com\/es\/wp-json\/wp\/v2\/podcast\/218026","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.cyberark.com\/es\/wp-json\/wp\/v2\/podcast"}],"about":[{"href":"https:\/\/www.cyberark.com\/es\/wp-json\/wp\/v2\/types\/podcast"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.cyberark.com\/es\/wp-json\/wp\/v2\/media\/220050"}],"wp:attachment":[{"href":"https:\/\/www.cyberark.com\/es\/wp-json\/wp\/v2\/media?parent=218026"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}