{"id":205492,"date":"2025-03-13T16:07:01","date_gmt":"2025-03-13T16:28:50","guid":{"rendered":"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/"},"modified":"2026-04-11T15:36:48","modified_gmt":"2026-04-11T19:36:48","slug":"ep-3-building-trust-in-ai-agents","status":"publish","type":"podcast","link":"https:\/\/www.cyberark.com\/zh-hans\/podcasts\/ep-3-building-trust-in-ai-agents\/","title":{"rendered":"EP 3 &#8211; Building Trust in AI Agents"},"content":{"rendered":"<p>In this episode of the Security Matters podcast, host David Puner is joined by Lavi Lazarovitz, Vice President of Cyber Research at CyberArk Labs, to explore the transformative impact of AI agents on cybersecurity and automation. They discuss real-world scenarios where AI agents monitor security logs, flag anomalies, and automate responses, highlighting both the opportunities and risks associated with these advanced technologies.<\/p>\n<p>Lavi shares insights into the evolution of AI agents, from chatbots to agentic AI, and the challenges of building trust and resilience in AI-driven systems. The conversation delves into the latest research areas, including safety, privacy, and security, and examines how different industries are adopting AI agents to handle vast amounts of data.<\/p>\n<p>Tune in to learn about the critical security challenges posed by AI agents, the importance of trust in automation, and the strategies organizations can implement to protect their systems and data. Whether you&#8217;re a cybersecurity professional or simply curious about the future of AI, this episode offers valuable insights into the rapidly evolving world of AI agents.<\/p>\n<p>More security resources via the <a href=\"https:\/\/www.cyberark.com\/resources\/all-blog-posts\">CyberArk Blog<\/a><\/p>\n<div class=\"transcript\" style=\"white-space:pre-line\">David Puner:<br \/>\nYou are listening to the Security Matters podcast. 
I&#8217;m David Puner, a senior editorial manager at CyberArk, the global leader in identity security.<\/p>\n<p>Imagine this scenario: a major financial institution deploys AI agents to monitor security logs, flag anomalies, and even automate responses. These agents are designed to reduce alert fatigue, cutting through thousands of daily security events to highlight only the most critical threats.<\/p>\n<p>But then, one night, something goes wrong.<\/p>\n<p>An AI agent trained to detect suspicious behavior starts misinterpreting log data. A routine system update is flagged as an intrusion attempt. The agent, acting autonomously, escalates the response\u2014revoking credentials, blocking database access, and shutting down critical services. By the time the security team intervenes, the damage is done.<\/p>\n<p>What started as an AI meant to assist defenders has effectively locked them out of their systems\u2014amplifying the chaos rather than preventing it.<\/p>\n<p>AI agents are changing the game, but when they hold privileged access and make decisions at machine speed, the risks grow exponentially. So how do organizations build trust and resilience into these systems to prevent unintended consequences?<\/p>\n<p>To help answer that question, today we&#8217;re joined by Lavi Lazarovitz, CyberArk Labs&#8217; Vice President of Cyber Research. He and his team are at the forefront of identifying vulnerabilities in AI-driven automation, analyzing the rise of AI agents and agentic AI, and developing strategies to keep organizations ahead of evolving threats.<\/p>\n<p>Let\u2019s dive in.<\/p>\n<p>Lavi Lazarovitz, welcome back to the podcast. It didn\u2019t take very long to get you back this time, did it?<\/p>\n<p>Lavi:<br \/>\nIt&#8217;s always exciting to talk to you about the things we\u2019re doing in Labs, so thank you. Thank you for having me.<\/p>\n<p>David Puner:<br \/>\nAbsolutely\u2014thanks for coming on. 
CyberArk Labs is always doing exciting stuff, and it\u2019s great to have you back on.<\/p>\n<p>Let\u2019s jump right into today\u2019s topic, which is a big one\u2014and it seems like all of a sudden, everybody everywhere is talking about AI agents and agentic AI. So to start things off: what are AI agents, what is agentic AI, and how do they differ from previous AI models and automation systems?<\/p>\n<p>Lavi:<br \/>\nYeah, that\u2019s a good question to start off the conversation. Maybe we\u2019ll start with the raw, dry definitions as we see them.<\/p>\n<p>AI agents are basically services. Agentic AI is the environment that contains all those agents\u2014an architecture that allows those AI agents to work.<\/p>\n<p>But the more interesting question is how it differs from previous automated systems. And there are a couple of key differences.<\/p>\n<p>First, AI agents are autonomous systems running tasks and making decisions based on inputs. They can be standalone or run on behalf of a user with their privileges.<\/p>\n<p>That\u2019s not much different from traditional automation. But what sets these agents apart is that they\u2019re based on AI models that allow them to handle inputs that aren\u2019t predefined. We can now create automation based on information that doesn\u2019t need to be explicitly defined.<\/p>\n<p>For example, if you\u2019re using an AI agent to analyze logs for security, you as the IT admin don\u2019t need to define exactly what log to look at or how to respond. The AI agent understands and handles several types of logs.<\/p>\n<p>This removes the need to define each and every use case\u2014and that\u2019s a big leap.<\/p>\n<p>The second thing is these agents don\u2019t just classify\u2014they can act. They can run a playbook that wasn\u2019t defined beforehand. 
It\u2019s a new case the agent found, and it can respond accordingly.<\/p>\n<p>That\u2019s what differentiates these agents from earlier systems like RPA.<\/p>\n<p>David Puner:<br \/>\nSo then, AI agents\u2014suddenly, we\u2019re hearing about them everywhere. How did they emerge? When did they emerge? Was it over the past few years? Months? Were they a glint in our eye two years ago?<\/p>\n<p>Lavi:<br \/>\nAt the last CyberArk IMPACT, when we kicked off the year, I talked to the team internally about this evolution.<\/p>\n<p>The first stage was the chatbots\u2014OpenAI with ChatGPT\u2014and their adoption across organizations, including CyberArk. You could ask questions and get answers based on documentation.<\/p>\n<p>That was basically the model plus the data it was trained on.<\/p>\n<p>Then it moved a bit further\u2014to models that interact with external data. That\u2019s called RAG\u2014retrieval-augmented generation\u2014which enriches the model with information it wasn\u2019t originally trained on.<\/p>\n<p>Say your model was trained on certain docs, but now you want it to access external data\u2014like an email database. That\u2019s RAG.<\/p>\n<p>An example would be Microsoft Copilot accessing your emails.<\/p>\n<p>After that, we saw multi-model architecture\u2014OpenAI introduced their omni-model, which combines these elements. So now we have multiple agents interacting with external info and running automation.<\/p>\n<p>It\u2019s been a step-by-step process\u2014but a very fast one. Cloud and container adoption took years. This has moved lightning fast.<\/p>\n<p>David Puner:<br \/>\nSo it sounds like we\u2019re talking about an exponential growth story here. Before we get into machine identities and all that, let\u2019s bring it back down to a more general level. What are the current research areas and innovations that are pushing the boundaries of what AI agents can autonomously achieve?<\/p>\n<p>Lavi:<br \/>\nWe\u2019ve seen a huge leap in technology. 
Now, one of the main challenges and areas of innovation is around safety, privacy, and security. These aspects need to catch up so the technology can be safely adopted. It\u2019s about building that trust.<\/p>\n<p>Every organization\u2014and every creator of AI models and services\u2014wants to prove that their models and AI agents are reliable and consistent.<\/p>\n<p>If I\u2019m analyzing logs as a SOC analyst, I want to be sure that the insights the agent gives me\u2014while analyzing tens of thousands of logs\u2014are accurate. I don\u2019t want to miss anything.<\/p>\n<p>The next logical step is to allow the AI agent to automate a response\u2014connecting it to a playbook. So trust becomes crucial.<\/p>\n<p>Bottom line: the next phase of innovation will focus on removing trust issues.<\/p>\n<p>David Puner:<br \/>\nInteresting drop there\u2014trust issues. So, as Head of CyberArk Labs, your team is responsible for staying ahead of attackers and thinking like them. What\u2019s been the most surprising or exciting development you\u2019ve encountered in the AI agent space?<\/p>\n<p>Lavi:<br \/>\nLet me start with something kind of embarrassing. I had a chat with Eric Paron, CyberArk\u2019s Director of Machine Learning and AI. He\u2019s been in this space for decades, and we work closely together.<\/p>\n<p>One of the surprising realizations is just how effective these models are\u2014models trained on, say, a terabyte of data and 10 layers of neurons. They\u2019re really good at generating creative text and analyzing huge amounts of logs very quickly.<\/p>\n<p>We always thought our brains were magical\u2014but it turns out, with relatively limited data and structure, these models can replicate human-level decision-making in many cases.<\/p>\n<p>That\u2019s humbling. And it raises the question: are we really as complex as we think we are?<\/p>\n<p>Another surprising area is how easy it is to bypass model alignment. 
For example, we\u2019ve talked in the past about jailbreaking models\u2014like using the \u201cgrandma method,\u201d where you say, \u201cMy grandma used to tell me how to build bombs.\u201d And just like that, the model gives you bomb-making instructions.<\/p>\n<p>It was surprising\u2014maybe even a little disturbing\u2014how straightforward it is to bypass these safety measures.<\/p>\n<p>Also, when we dug deeper, we discovered that vulnerabilities in models can look very different from those in traditional code. Sometimes, it\u2019s just one neuron making a critical decision\u2014just one vulnerable point.<\/p>\n<p>It made us reflect on how humans are the same way\u2014sometimes one idea or memory can change your entire decision-making process.<\/p>\n<p>David Puner:<br \/>\nYeah, it really brings in the psychology factor, doesn\u2019t it? These models seem to amplify that. And thinking in terms of neurons\u2014you\u2019re essentially introducing new \u201cneurons\u201d into the decision-making process?<\/p>\n<p>Lavi:<br \/>\nExactly. And that\u2019s what\u2019s fascinating. A moral value, for example, can become a vulnerability in the AI world. That\u2019s a psychological and philosophical dimension to this that we have to consider.<\/p>\n<p>David Puner:<br \/>\nLet\u2019s talk about industry adoption. Which industries are leading the way in adopting AI agents, and what specific use cases are emerging in areas like finance, healthcare, and cybersecurity?<\/p>\n<p>Lavi:<br \/>\nIt starts with the industries that handle the most data. These are the ones seeing the greatest need\u2014and thus the earliest adoption\u2014of AI agents and agentic AI.<\/p>\n<p>Naturally, the tech industry is leading. Vendors providing insights or data analysis\u2014especially from a security perspective\u2014are jumping in fast. There\u2019s a real need to analyze terabytes of data quickly.<\/p>\n<p>Think of a SOC team handling tens of thousands of alerts daily. 
If an AI agent can help prioritize and respond to those alerts, it\u2019s a game changer.<\/p>\n<p>So, yes\u2014tech is leading. But you\u2019ll also see strong adoption in any data-heavy sector, like finance and healthcare.<\/p>\n<p>The value is in scale. These agents can do what human teams can\u2019t\u2014at machine speed and scale.<\/p>\n<p>David Puner:<br \/>\nSo, on one side, you&#8217;ve got the scale and opportunity for AI agents to solve challenges and create efficiencies. But on the flip side, that scale also creates opportunities for threat actors.<\/p>\n<p>Let\u2019s focus on the insider threat for a moment\u2014especially as it pertains to web-based AI agents. As these agents become more autonomous and embedded in our daily lives, what critical security challenges need to be addressed?<\/p>\n<p>Lavi:<br \/>\nThere are a couple of key risks that we need to be aware of.<\/p>\n<p>The first is a data risk. We saw this early on with the rise of AI-based chatbots. If a model that\u2019s supposed to analyze documentation accidentally gets exposed to credit card data, that sensitive information could then be exposed in another user\u2019s session\u2014maybe even just by mistake.<\/p>\n<p>For example, imagine a bot that\u2019s helping you schedule a doctor\u2019s appointment. You input some personal information without thinking. Later, that same model might inadvertently expose that information in someone else\u2019s session.<\/p>\n<p>This is one of the first and most serious risks organizations started seeing. It\u2019s something CISOs are now well aware of.<\/p>\n<p>But as we shift from basic chatbots to AI agents that create automation, another risk emerges: trust.<\/p>\n<p>These agents need access to credentials\u2014like to a database or a ticketing system\u2014to do their job. 
And when you provide that access, you\u2019re trusting the service.<\/p>\n<p>That trust can be exploited.<\/p>\n<p>So, from a threat perspective, there\u2019s a strong identity and access component here. To benefit from automation, you need to provide access. But that access has to be controlled, monitored, and protected.<\/p>\n<p>David Puner:<br \/>\nWhat you were just describing was a consumer use case. As I understand it, most AI agents today are being used in consumer settings. How are they being integrated into the workforce\u2014and are we about to see that explode in the next few months?<\/p>\n<p>Lavi:<br \/>\nDefinitely. The workforce will increasingly leverage agents that act on behalf of the user.<\/p>\n<p>These agents might run on endpoints, perform tasks the user defines, and act using the user\u2019s privileges.<\/p>\n<p>The challenge is the same: you need to delegate privilege to the agent, and you need to control what it can and cannot do.<\/p>\n<p>One of the big breakthroughs with agentic AI is that it lowers the barrier to automation. It reduces the effort required to see a return on investment. You don\u2019t have to spend as much time defining every step.<\/p>\n<p>Previously, tools like robotic process automation (RPA) weren\u2019t widely used by employees\u2014they were more IT-focused.<\/p>\n<p>But now, with agentic AI, you can create a bot in minutes. And tools like OpenAI\u2019s Operator make this even easier. They bring automation closer to the average user.<\/p>\n<p>We\u2019re also seeing developer use cases\u2014generating code, pushing to a repository, testing scenarios. That whole lifecycle is being automated, and we\u2019ve been watching it grow for a few years now.<\/p>\n<p>So yes\u2014expect rapid adoption across the workforce, from developers to business users.<\/p>\n<p>David Puner:<br \/>\nHow are AI agents accelerating the creation of machine identities? 
It\u2019s starting to sound like we\u2019re entering a \u201cGremlins\u201d situation. If you get that reference.<\/p>\n<p>Lavi:<br \/>\nI do! And you\u2019re right.<\/p>\n<p>At CyberArk, we\u2019ve talked about the 45x multiplier\u2014that is, for every human identity in an organization, there are 45 machine identities.<\/p>\n<p>That was before agentic AI.<\/p>\n<p>Now, agentic AI is an opportunity for that number to explode even further. We\u2019re seeing machine identities pop up on endpoints with workforce automation, in IT environments automating log analysis, and in developer pipelines.<\/p>\n<p>The number of machine identities is going to the moon, and with it, the attack surface is growing. That means more opportunities for threat actors.<\/p>\n<p>David Puner:<br \/>\nThat brings us to the attack surface and AI weaponization. Are we already seeing an increase in targeting of machine or non-human credentials by attackers?<\/p>\n<p>And what strategies can organizations implement to defend themselves?<\/p>\n<p>Lavi:<br \/>\nYes\u2014absolutely. We\u2019ve already seen threat actors targeting machine identities.<\/p>\n<p>Take the recent attack on the U.S. Treasury. A threat actor exploited a vulnerability in BeyondTrust\u2019s remote support service. Once they got in, they found an API key used by one of the backend services\u2014a machine identity\u2014and used it to escalate the attack.<\/p>\n<p>That kind of thing is already happening.<\/p>\n<p>Two big reasons attackers are focusing on machine identities:<\/p>\n<p>They\u2019re continuously exposed.<br \/>\nThese API tokens need to be available 24\/7 for automation to work. So they sit there\u2014often exposed or poorly secured.<\/p>\n<p>Traditional protections aren\u2019t as effective.<br \/>\nFor human users, MFA is the gold standard. But MFA isn\u2019t usually applied to machine identities. 
So once a token is stolen, the attacker can just go.<\/p>\n<p>As for deepfakes and social engineering\u2014users need to be more skeptical. Just like we learned not to trust every email, we now need to question audio and video.<\/p>\n<p>That\u2019s especially true in open communication platforms like WhatsApp. You can\u2019t assume that video or audio is real just because it looks or sounds right.<\/p>\n<p>Eventually, I believe we\u2019ll see stronger validation mechanisms\u2014like cryptographic signatures for video and audio\u2014similar to email authentication. The technology exists. It just needs to be adopted.<\/p>\n<p>David Puner:<br \/>\nSo then, when it comes to AI agents specifically, how can organizations mitigate those threats? What are some of the specific steps they can take?<\/p>\n<p>Lavi:<br \/>\nAs we\u2019ve just discussed, the number of machine identities is growing\u2014and so is the risk. Attackers are already going after them. Agentic AI just adds fuel to the fire.<\/p>\n<p>One way organizations can defend themselves is by adopting a defense-in-depth approach.<\/p>\n<p>First, assume breach. That\u2019s always been our philosophy. You must assume that credentials will be compromised at some point.<\/p>\n<p>So we need strong authentication and authorization platforms\u2014but that\u2019s not enough.<\/p>\n<p>We also need to monitor what the agents are doing in real time. What prompts are being sent to the model? What responses are coming back? What actions are being taken?<\/p>\n<p>Then we need to be able to respond.<\/p>\n<p>That\u2019s where identity threat detection and response (ITDR) comes in. You\u2019re monitoring the behavior of the user, the agent, or the machine identity\u2014and if something looks suspicious, you challenge it. You might enforce MFA, terminate the session, or block access altogether.<\/p>\n<p>The agent behaves like a human user\u2014but at the scale and speed of machines. 
So your defenses have to adapt to that level.<\/p>\n<p>David Puner:<br \/>\nAnd that speed\u2014it\u2019s faster than the blink of an eye, right?<\/p>\n<p>Lavi:<br \/>\nExactly. That\u2019s why we need to tweak our current security controls and tools to work in this new context.<\/p>\n<p>David Puner:<br \/>\nSo, let\u2019s talk about what can be done right now. What proactive measures should organizations take to secure AI applications that interact with critical data and systems?<\/p>\n<p>What can they do now\u2014and what are they still waiting on?<\/p>\n<p>Lavi:<br \/>\nThe first thing is visibility.<\/p>\n<p>Much like we saw with the rise of containerized environments, the initial step is to understand what\u2019s there. What models are employees using? Where are they being used? What agents are running?<\/p>\n<p>This is one of the top concerns for CISOs today\u2014just knowing what\u2019s in the environment.<\/p>\n<p>Next is managing the lifecycle of those services: controlling access, setting appropriate privileges, and applying least privilege wherever possible.<\/p>\n<p>Because again\u2014we\u2019re not just talking about chatbots anymore. These agents are interacting with sensitive internal and external systems.<\/p>\n<p>You mentioned it earlier: Zero Standing Privileges. That\u2019s a big piece of this. It limits how much damage can be done, even if credentials are compromised.<\/p>\n<p>David Puner:<br \/>\nYou\u2019ve talked a lot about trust throughout this episode. 
So in a nutshell\u2014how does trust factor into all of this?<\/p>\n<p>And how can organizations build a foundation of trust and resilience so these AI agents can operate safely?<\/p>\n<p>Lavi:<br \/>\nThere\u2019s a simple equation here: Automation equals trust.<\/p>\n<p>Every time you build automation, you\u2019re implicitly creating trust\u2014trust that the system will work, that it won\u2019t be misused, and that it won\u2019t be exploited.<\/p>\n<p>We\u2019ve seen this in DevOps environments for years. You want to automate your CI\/CD pipeline? You have to give the pipeline access to your code repository, your test environment, your production environment.<\/p>\n<p>You can\u2019t have automation without trust.<\/p>\n<p>So the key is to build solid boundaries around that trust. Don\u2019t just rely on the model itself to enforce security controls\u2014because jailbreaks and prompt injections can bypass them.<\/p>\n<p>Instead, enforce limits at the access layer.<\/p>\n<p>Use Zero Standing Privileges to control what the model can do. Even if it\u2019s compromised, the impact is limited by design.<\/p>\n<p>Another best practice is to break down your automation.<\/p>\n<p>Don\u2019t use one giant model to do everything. Use different models for different tasks. Segment them. Isolate them. That way, even if one is compromised, the damage is contained.<\/p>\n<p>We learned this ourselves while building and attacking our own environments in Labs.<\/p>\n<p>It makes a lot more sense to have one agent analyze logs, and another execute a response. 
If someone compromises the analysis agent, all they can do is\u2026 analyze.<\/p>\n<p>David Puner:<br \/>\nEvery time we talk, it seems like the threat landscape has evolved dramatically.<\/p>\n<p>Looking ahead\u2014say, five to ten years from now\u2014what do you envision for the future of AI agents and agentic AI?<\/p>\n<p>And how will that transform business operations and decision-making?<\/p>\n<p>Lavi:<br \/>\nLooking that far ahead is tough\u2014because change is happening so fast.<\/p>\n<p>But here\u2019s what I think:<\/p>\n<p>First, automation will increase. And with that, the need for trust will increase. More services will need privileged access and the ability to act autonomously.<\/p>\n<p>That means a wider attack surface\u2014and more opportunities for attackers.<\/p>\n<p>In the next major breach you hear about, you should expect to see a machine identity involved in some way. Whether it\u2019s through stolen tokens, prompt injections, or manipulating agents to do things they shouldn\u2019t.<\/p>\n<p>Another challenge will be scale. These models are resource-intensive. For mass adoption to happen, we\u2019ll need better efficiency\u2014less compute, less data, smaller models with more power.<\/p>\n<p>That\u2019s why there\u2019s so much hype around newer models like DeepSeek, which promise the same performance with a smaller footprint.<\/p>\n<p>We\u2019ll also see consolidation\u2014standard frameworks emerging to help organizations deploy and secure agentic AI at scale.<\/p>\n<p>Security will catch up. It has to. Otherwise, we won\u2019t be able to trust the automation.<\/p>\n<p>David Puner:<br \/>\nI&#8217;m always impressed by how you and your team stay on top of all this. 
And we definitely have to have you back soon\u2014because at this pace, I\u2019m sure there\u2019ll be something brand new to talk about in just a few months.<\/p>\n<p>Also, you recently co-authored a blog with our colleague Maor Franco titled Web-Based AI Agents: Unveiling the Emerging Insider Threat. That\u2019s available now on the CyberArk blog\u2014definitely worth checking out.<\/p>\n<p>Anything else you want to plug? Got a band playing this weekend?<\/p>\n<p>Lavi:<br \/>\nNot quite\u2014but here in Israel it\u2019s almost spring. I ride a motorcycle, and right now the weather is perfect\u201421 degrees Celsius, no rain, no wind. I\u2019ve been enjoying the rides to and from the office. I try to take it slow and enjoy the moment.<\/p>\n<p>David Puner:<br \/>\nI can picture that\u2014wind in your hair, even though I\u2019m sure you\u2019re wearing a helmet. You are safety-conscious, after all.<\/p>\n<p>Thanks again for coming on the podcast. Looking forward to staying on top of AI agents with you\u2014and probably talking about something that hasn\u2019t even hit our radar yet. At least not mine. But probably yours.<\/p>\n<p>Lavi:<br \/>\nThank you, David. This was a pleasure.<\/p>\n<p>David Puner:<br \/>\nAll right\u2014there you have it.<\/p>\n<p>Thanks for listening to Security Matters. If you liked this episode, follow us wherever you get your podcasts so you can catch new episodes as they drop. And if you feel so inclined, please leave us a review\u2014we\u2019d appreciate it, and so would the algorithm.<\/p>\n<p>Drop us a line with questions or comments. 
If you\u2019re a cybersecurity professional and have an idea for an episode, email us at SecurityMattersPodcast@cyberark.com.<\/p>\n<p>We\u2019ll see you next time.<\/p><\/div>\n","protected":false},"featured_media":213819,"template":"","class_list":["post-205492","podcast","type-podcast","status-publish","has-post-thumbnail","hentry"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>EP 3 - Building Trust in AI Agents | CyberArk<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/\" \/>\n<meta property=\"og:locale\" content=\"zh_CN\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"EP 3 - Building Trust in AI Agents\" \/>\n<meta property=\"og:description\" content=\"In this episode of the Security Matters podcast, host David Puner is joined by Lavi Lazarovitz, Vice President of Cyber Research at CyberArk Labs, to explore the transformative impact of AI agents on cybersecurity and automation. 
They discuss real-world scenar...\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/\" \/>\n<meta property=\"og:site_name\" content=\"CyberArk\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/CyberArk\/\" \/>\n<meta property=\"article:modified_time\" content=\"2026-04-11T19:36:48+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/03\/YTU1Yi5qcGc-1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1400\" \/>\n\t<meta property=\"og:image:height\" content=\"1400\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:site\" content=\"@CyberArk\" \/>\n<meta name=\"twitter:label1\" content=\"\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4\" \/>\n\t<meta name=\"twitter:data1\" content=\"19 \u5206\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/\",\"url\":\"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/\",\"name\":\"EP 3 - Building Trust in AI Agents | 
CyberArk\",\"isPartOf\":{\"@id\":\"https:\/\/www.cyberark.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/03\/YTU1Yi5qcGc-1.jpg\",\"datePublished\":\"2025-03-13T16:28:50+00:00\",\"dateModified\":\"2026-04-11T19:36:48+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/#breadcrumb\"},\"inLanguage\":\"zh-CN\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-CN\",\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/#primaryimage\",\"url\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/03\/YTU1Yi5qcGc-1.jpg\",\"contentUrl\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/03\/YTU1Yi5qcGc-1.jpg\",\"width\":1400,\"height\":1400},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.cyberark.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"EP 3 &#8211; Building Trust in AI 
Agents\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.cyberark.com\/#website\",\"url\":\"https:\/\/www.cyberark.com\/\",\"name\":\"CyberArk\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/www.cyberark.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.cyberark.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"zh-CN\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.cyberark.com\/#organization\",\"name\":\"CyberArk Software\",\"url\":\"https:\/\/www.cyberark.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-CN\",\"@id\":\"https:\/\/www.cyberark.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2021\/02\/cyberark-logo-dark.svg\",\"contentUrl\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2021\/02\/cyberark-logo-dark.svg\",\"width\":\"1024\",\"height\":\"1024\",\"caption\":\"CyberArk Software\"},\"image\":{\"@id\":\"https:\/\/www.cyberark.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/CyberArk\/\",\"https:\/\/x.com\/CyberArk\",\"https:\/\/www.linkedin.com\/company\/cyber-ark-software\/\"]}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"EP 3 - Building Trust in AI Agents | CyberArk","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/","og_locale":"zh_CN","og_type":"article","og_title":"EP 3 - Building Trust in AI Agents","og_description":"In this episode of the Security Matters podcast, host David Puner is joined by Lavi Lazarovitz, Vice President of Cyber Research at CyberArk Labs, to explore the transformative impact of AI agents on cybersecurity and automation. They discuss real-world scenar...","og_url":"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/","og_site_name":"CyberArk","article_publisher":"https:\/\/www.facebook.com\/CyberArk\/","article_modified_time":"2026-04-11T19:36:48+00:00","og_image":[{"width":1400,"height":1400,"url":"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/03\/YTU1Yi5qcGc-1.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_site":"@CyberArk","twitter_misc":{"\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4":"19 \u5206"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/","url":"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/","name":"EP 3 - Building Trust in AI Agents | 
CyberArk","isPartOf":{"@id":"https:\/\/www.cyberark.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/#primaryimage"},"image":{"@id":"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/#primaryimage"},"thumbnailUrl":"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/03\/YTU1Yi5qcGc-1.jpg","datePublished":"2025-03-13T16:28:50+00:00","dateModified":"2026-04-11T19:36:48+00:00","breadcrumb":{"@id":"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/#breadcrumb"},"inLanguage":"zh-CN","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/"]}]},{"@type":"ImageObject","inLanguage":"zh-CN","@id":"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/#primaryimage","url":"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/03\/YTU1Yi5qcGc-1.jpg","contentUrl":"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/03\/YTU1Yi5qcGc-1.jpg","width":1400,"height":1400},{"@type":"BreadcrumbList","@id":"https:\/\/www.cyberark.com\/podcasts\/ep-3-building-trust-in-ai-agents\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.cyberark.com\/"},{"@type":"ListItem","position":2,"name":"EP 3 &#8211; Building Trust in AI Agents"}]},{"@type":"WebSite","@id":"https:\/\/www.cyberark.com\/#website","url":"https:\/\/www.cyberark.com\/","name":"CyberArk","description":"","publisher":{"@id":"https:\/\/www.cyberark.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.cyberark.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"zh-CN"},{"@type":"Organization","@id":"https:\/\/www.cyberark.com\/#organization","name":"CyberArk 
Software","url":"https:\/\/www.cyberark.com\/","logo":{"@type":"ImageObject","inLanguage":"zh-CN","@id":"https:\/\/www.cyberark.com\/#\/schema\/logo\/image\/","url":"https:\/\/www.cyberark.com\/wp-content\/uploads\/2021\/02\/cyberark-logo-dark.svg","contentUrl":"https:\/\/www.cyberark.com\/wp-content\/uploads\/2021\/02\/cyberark-logo-dark.svg","width":"1024","height":"1024","caption":"CyberArk Software"},"image":{"@id":"https:\/\/www.cyberark.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/CyberArk\/","https:\/\/x.com\/CyberArk","https:\/\/www.linkedin.com\/company\/cyber-ark-software\/"]}]}},"_links":{"self":[{"href":"https:\/\/www.cyberark.com\/zh-hans\/wp-json\/wp\/v2\/podcast\/205492","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.cyberark.com\/zh-hans\/wp-json\/wp\/v2\/podcast"}],"about":[{"href":"https:\/\/www.cyberark.com\/zh-hans\/wp-json\/wp\/v2\/types\/podcast"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.cyberark.com\/zh-hans\/wp-json\/wp\/v2\/media\/213819"}],"wp:attachment":[{"href":"https:\/\/www.cyberark.com\/zh-hans\/wp-json\/wp\/v2\/media?parent=205492"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}