{"id":222176,"date":"2026-01-14T14:42:17","date_gmt":"2026-01-14T15:02:06","guid":{"rendered":"https:\/\/www.cyberark.com\/podcasts\/ep-23-red-teaming-ai-governance-catching-model-risk-early\/"},"modified":"2026-05-04T09:56:07","modified_gmt":"2026-05-04T13:56:07","slug":"ep-23-red-teaming-ai-governance-catching-model-risk-early","status":"publish","type":"podcast","link":"https:\/\/www.cyberark.com\/zh-hans\/podcasts\/ep-23-red-teaming-ai-governance-catching-model-risk-early\/","title":{"rendered":"EP 23 &#8211; Red teaming AI governance: catching model risk early"},"content":{"rendered":"<p>AI systems are moving fast, sometimes faster than the guardrails meant to contain them. In this episode of Security Matters, host David Puner digs into the hidden risks inside modern AI models with Pamela K. Isom, exploring the governance gaps that allow agents to make decisions, recommendations, and even commitments far beyond their intended authority.<\/p>\n<p>Isom, former director of AI and technology at the U.S. 
Department of Energy (DOE) and now founder and CEO of IsAdvice &amp; Consulting, explains why AI red teaming must extend beyond cybersecurity, how to stress test AI governance before something breaks, and why human oversight, escalation paths, and clear limits are essential for responsible AI.<\/p>\n<p>The conversation examines real-world examples of AI drift, unintended or unethical model behavior, data lineage failures, procurement and vendor blind spots, and the rising need for scalable AI governance, AI security, responsible AI practices, and enterprise red teaming as organizations adopt generative AI.<\/p>\n<p>Whether you work in cybersecurity, identity security, AI development, or technology leadership, this episode offers practical insights for managing AI risk and building systems that stay aligned, accountable, and trustworthy.<\/p>\n<div class=\"transcript\" style=\"white-space:pre-line\">David Puner:<br \/>\nYou are listening to the Security Matters podcast. I\u2019m David Puner, a senior editorial manager at CyberArk, the global leader in identity security.<\/p>\n<p>An organization deploys an AI agent to help customers faster. It\u2019s trained. It works. It starts making decisions. But here\u2019s the problem. The agent starts stepping outside its lane without warning. It begins making promises: discounts no policy ever approved, benefits it shouldn\u2019t give, wildly off-base perks it can\u2019t authorize, like airline tickets.<\/p>\n<p>No breach triggered this. No adversary pushed it. The danger came from inside the model, from the gap between how it was trained and how it behaves in the real world. Because no one ever truly tested it, nobody notices. The model keeps operating. The promises keep stacking up, and the real question is, how far does the damage spread before anyone realizes what the agent\u2019s been doing?<\/p>\n<p>After a customer complains? When the books don\u2019t balance?
Or only once the damage is done?<\/p>\n<p>That tension between authority, identity, and limits is at the center of today\u2019s conversation with Pamela K. Isom, former director of AI and technology at the U.S. Department of Energy and current founder and CEO of IsAdvice &amp; Consulting.<\/p>\n<p>And those tensions don\u2019t stay theoretical for long. They show up the moment a system behaves unexpectedly. The only way to catch that early is to deliberately push the system past its comfort zone. As Pamela explains, red teaming reaches far beyond cybersecurity. She says it\u2019s about stress testing the governance around AI systems, ensuring models don\u2019t behave as if they have identities, roles, or authority they were never meant to assume, and making sure there are clear escalation paths in place before something breaks.<\/p>\n<p>Let\u2019s get into it with Pamela Isom.<\/p>\n<p>Pamela K. Isom, founder and CEO of IsAdvice &amp; Consulting, and former director of artificial intelligence and technology at the U.S. Department of Energy. Thank you for coming onto the podcast. Welcome.<\/p>\n<p>Pamela Isom:<br \/>\nThank you for having me. I\u2019m honored to be here.<\/p>\n<p>David Puner:<br \/>\nLet\u2019s get right into it. From your early work in the private sector to shaping AI strategy at the Department of Energy and pioneering AI governance initiatives that incorporate adversarial testing, and now as the founder and CEO of your own AI governance consulting firm, what are some milestones that have shaped your approach to AI, cybersecurity, and innovation?<\/p>\n<p>Pamela Isom:<br \/>\nOkay. I am going to try to unpack that some.<\/p>\n<p>One of the earliest milestones in my career was learning, as a young programmer in the field, that I\u2019m a risk taker. And I learned that I like to debug systems. Working for municipalities, I would dig in and find out where the gaps were.
I really enjoyed taking those gaps, figuring out where the issues were, and solving those problems with clients, digging into operational issues.<\/p>\n<p>I got an opportunity then to move to a larger organization, and that\u2019s where I first learned AI. It was expert systems, and it was called OPS5. Not many people were using the tools or this type of software. This is important because it was my first foray into intelligent systems.<\/p>\n<p>So then I took those skills and moved them with me as I went to work for large companies, IBM and a couple others. I can count the companies that I worked for. I went from being a risk taker, identifying issues and potential risks and turning those risks into opportunities to help clients excel, to leadership and governance when I went into IBM.<\/p>\n<p>That\u2019s when I really started to dig into the importance of trust, the importance of leadership, and the importance of governance. And of course, I was at IBM when Watson first came around. This was a milestone for me because I spent a lot of time turning around troubled projects: understanding the lifecycle, understanding governance, and figuring out where communication failures were.<\/p>\n<p>Again, when I would see gaps, I would turn those gaps into opportunities and innovations. That\u2019s where I got patents and things like that from doing that type of work. I\u2019m an engineer, and then I transitioned from being an engineer into being a solution architect because I like to look at the lifecycle end to end.<\/p>\n<p>I can recall conversations with people saying to me, \u201cYou need to stay in your swim lane.\u201d And I was like, \u201cI don\u2019t have a swim lane.\u201d I like looking at things from end to end.
But being an engineer first and then an architect, you get to look at things from a bigger picture, even though they still want you to stay in your lane.<\/p>\n<p>I was never that type. I was fortunate enough to look for gaps, look for threats, threat surfaces, and decision failures, and build up client satisfaction and client excellence. That led to executive leadership when I went into government.<\/p>\n<p>Now I\u2019m focused on advancing AI security and innovation in the government. I started with the Patent and Trademark Office, brought that into the Department of Energy, where I headed up the AI and technology office and also led AI work and AI strategy for the CIO\u2019s office before becoming director over AI and technology for the agency.<\/p>\n<p>Back to your question, the milestones were an honor to go into government, but even more, it was an honor to take the skills I\u2019d developed as a risk taker and convert those into DevSecOps approaches so the solutions and innovations agencies are developing are secure while also advancing AI.<\/p>\n<p>Particularly at the Department of Energy, I was driving AI at the energy domain level, heading up use case development. I remember leading a wildfire task force. I was handpicked to lead that effort. What I was driving was how we use AI to mitigate fuel loads that are instigating, facilitating, and exacerbating wildfires.<\/p>\n<p>It was a wonderful exercise working across agencies. I remember working with the Department of Defense. Those leadership roles are what I was able to bring. And a key takeaway for me was that it was an honor to go into government, and a milestone was being able to turn risks and challenges into opportunities, and then turn those opportunities into sustainable innovations.<\/p>\n<p>David Puner:<br \/>\nThank you for taking us on that ride. I really appreciate it. So, you were with the Department of Energy through September 2022. 
That strikes me as interesting because that\u2019s right before this enormous wave of generative AI hit everyone. When you were looking at AI back in 2022, did you have any idea that it would be what it is now, or where it is now?<\/p>\n<p>Pamela Isom:<br \/>\nSo here\u2019s the thing. Visionaries tend to see down the road. I\u2019m a visionary. I\u2019m a person who felt like AI was really going to take off. I felt that way back when I studied OPS5 earlier in my career. But then it died down.<\/p>\n<p>While I was at Energy, I did feel like AI was really going to take off. I spent a lot of time with the national labs, and they were already utilizing AI to a great extent. I did realize this was going to happen. What I didn\u2019t expect was that generative AI would emerge the way it did, with such dependency and reliance on it today.<\/p>\n<p>I thought it would be more strategically adopted. When generative AI really started to surface and take off, I felt like organizations started adopting it just to keep up with others. And I still see that today. They\u2019re just grabbing it because that\u2019s the thing to do, to keep up.<\/p>\n<p>David Puner:<br \/>\nAnd a quick question before we move on to the main event here. Watson\u2014how much was Watson a precursor to the LLMs that we know today?<\/p>\n<p>Pamela Isom:<br \/>\nIt\u2019s definitely a precursor. Watson has been around for a while, but back then, when I was there, there was a lot of emphasis on cognitive intelligence and big data.<\/p>\n<p>The emphasis was on big data and managing and manipulating data, which is what AI is all about. So I would say that Watson was one of the leaders and pioneers. Everything comes full circle.<\/p>\n<p>David Puner:<br \/>\nYeah. Really interesting looking back on that now. Let\u2019s talk about one of your focuses, which is red teaming.
Red teaming is often seen as a cybersecurity exercise, but you\u2019ve made the case that it\u2019s something business leaders and practitioners should be looking to do. What does that look like in practice, particularly when it comes to AI?<\/p>\n<p>Pamela Isom:<br \/>\nSo we know that AI is an amplifier of data. That\u2019s basically what it does. Whatever information it has, it\u2019s going to bring that to light. It\u2019s going to take the patterns, reveal the patterns, and we\u2019re going to make decisions based on that, based on the recommendations it makes.<\/p>\n<p>When it comes to red teaming, what I really emphasize is yes, we have emerging threats because of what AI introduces, like more data poisoning, faster iteration of information, misuse, misrepresentation of data, as well as good use cases.<\/p>\n<p>So what I say is for red teaming, we want to red team the governance. I\u2019ll use a customer example. I have a client that created an AI shopper agent. That agent is there to help understand what\u2019s going wrong within the organization, because they have policies that officials pay attention to for a while, and then after 60 or 90 days, they revert to old habits.<\/p>\n<p>Let\u2019s say in this example\u2014without revealing too much\u2014it\u2019s student registration and enrollment. With red teaming, they created this AI shopper agent that goes out and imitates being a student. The staff doesn\u2019t know about it, and the team running the exercise is just observing how the staff responds.<\/p>\n<p>Are they enrolling students who are eligible? Are they providing discounts when they shouldn\u2019t? What type of communication are they having with students? Are there escalations that should occur?<\/p>\n<p>That\u2019s an example of red teaming. Red teams go out and test the process. They stress test the process. They\u2019re not trying to find issues with people.
They\u2019re trying to ensure the process is working.<\/p>\n<p>This is where cybersecurity red teaming is emerging into something broader. It\u2019s really about leadership and decision quality, and we need more of this. You don\u2019t have to use a shopper agent. You could test governance in other ways.<\/p>\n<p>For example, this happens today. You have an AI agent making promises that are not within policy. How did it decide to promise discounts or commit the organization to airline tickets? It was trained, but it was not tested.<\/p>\n<p>Red teaming stress tests how the model performs. And beyond that, if it does behave this way, is there governance in place, and does that governance work? How long does it take the organization to discover that it committed to providing an airline ticket to a customer?<\/p>\n<p>Should they find out after the customer complains? Or after the ticket is submitted and the financials don\u2019t balance?<\/p>\n<p>David Puner:<br \/>\nSo in a case like that, how are you coming up with all the different scenarios that you\u2019re plugging into stress tests? Are the scenarios themselves generated via AI?<\/p>\n<p>Pamela Isom:<br \/>\nSo like the shopper agent, that\u2019s AI helping with stress testing. Organizations should work together to decide what red team scenarios should be.<\/p>\n<p>In the examples I used, it wasn\u2019t a cybersecurity person doing it. It was representatives from across the organization. These scenarios need to be created holistically and be cross-representative.<\/p>\n<p>Teams should come together\u2014stakeholders at different levels\u2014to decide how to stress test solutions and to think about the risks if they don\u2019t. What\u2019s the risk of this model if we don\u2019t test it?<\/p>\n<p>Stop leaving it only to cybersecurity teams. Some of this doesn\u2019t sound like a cybersecurity issue, but it can become one. 
Organizations need to decide what tests make the most sense and where red teaming is most applicable. That\u2019s where experts like me come in.<\/p>\n<p>David Puner:<br \/>\nIt\u2019s interesting that earlier you mentioned being a risk taker, and now you\u2019re focused on finding risk and preventing risk. How does that connect?<\/p>\n<p>Pamela Isom:<br \/>\nBecause you can be a risk taker. I\u2019m a small business owner, and I definitely took a risk. I said, \u201cI want to do my own thing. I want to step out here. I can do this.\u201d You can be a risk taker and still be a champion for risk mitigation.<\/p>\n<p>Risk mitigation means that you appreciate that clients are risk takers. You support that. You believe people should take risks. But you also have to be a steward of mitigating the adverse impacts of those risks. You have to understand the adverse implications, the consequences, and how to mitigate the impact so it doesn\u2019t have adverse outcomes for your organization.<\/p>\n<p>If we don\u2019t have risk takers, we go nowhere in society.<\/p>\n<p>David Puner:<br \/>\nReally interesting. So then, back to red teaming. Where do you see the biggest missed opportunities for red teaming in enterprise environments, especially outside traditional security teams?<\/p>\n<p>Pamela Isom:<br \/>\nI think aside from the fact that organizations tend to leave it to cybersecurity experts, they need to look at red teaming as something that can be done by anyone within an organization.<\/p>\n<p>It should be embraced. And we don\u2019t embrace it today. I think that\u2019s one of the missteps. We should see red teaming as a way to protect the organization. It\u2019s a form of assurance, and I don\u2019t think we really see it that way.<\/p>\n<p>I also think we confuse red teaming with auditing. Auditing is about compliance. You have documented procedures, and you\u2019re checking whether you\u2019re meeting those procedures. Red teaming is about understanding.
It\u2019s about mitigating risk. It\u2019s about trying to find flaws before someone else finds them.<\/p>\n<p>They\u2019re complementary, but they\u2019re two distinct things. There\u2019s a lot of confusion between auditing and red teaming.<\/p>\n<p>David Puner:<br \/>\nLet\u2019s go back to AI for a moment and talk about AI ethics. Ensuring AI systems are fair, transparent, and accountable. You\u2019ve served as a responsible AI official for the Department of Energy, and you\u2019re a certified AI auditor. What does ethical AI look like in the real world beyond frameworks and principles?<\/p>\n<p>Pamela Isom:<br \/>\nThink about every organization having ethical practices and policies already in place. Start there, and then think about how you ensure AI is not violating those moral policies.<\/p>\n<p>For example, let\u2019s say a model is trained to recognize emotional patterns, such as when someone is excited, anxious, or nervous. Maybe they make faster decisions when they\u2019re nervous.<\/p>\n<p>Now imagine an AI agent that helps support decision-making around a product. A client might express concern that if they don\u2019t decide quickly, they\u2019ll get left behind. The model then starts communicating urgency, saying things like, \u201cWe need to decide within the next 24 hours,\u201d or even sooner.<\/p>\n<p>There\u2019s no rule that says it should do that. That\u2019s unethical. It\u2019s a different way of looking at ethics. You wouldn\u2019t tell leaders in your organization to behave that way, but the model has picked up patterns. It has learned that decisions get made faster when it creates urgency.<\/p>\n<p>So the model starts telling someone they need to decide today. That\u2019s unethical behavior. It\u2019s not governed. It shouldn\u2019t do that. I would red team that. I would test how my models perform in situations like this.<\/p>\n<p>David Puner:<br \/>\nThose are great examples. 
Is there an example where ethical concerns or bias surfaced in an AI project you were involved in, and how did you and your team respond?<\/p>\n<p>Pamela Isom:<br \/>\nYes. As a small business owner, I\u2019m constantly monitoring systems. And when I was in government, many of the systems we worked with were AI systems.<\/p>\n<p>When an ethical concern surfaced, I wasn\u2019t quick to take it to legal. Not yet. I wanted to examine it first to determine whether it was actually an ethical issue or potentially historical data bias.<\/p>\n<p>We know historical data isn\u2019t always the most applicable data. Traditionally, we use historical data to help models make decisions. So we would look closely at the data sets, discuss them, and then decide what actions were needed to resolve the issue.<\/p>\n<p>I\u2019ll give another related example. Today, we\u2019re working on federated data models. We federate data because not every energy client wants their data exposed to others, even though they trust us with it.<\/p>\n<p>When we create dashboards, especially with aggregated data, we have to anonymize the data and understand the raw data source. Who provided it? We track data lineage as it moves through different tiers and ensure it\u2019s handled properly.<\/p>\n<p>When it reaches the end consumer, it\u2019s not just a security issue. It\u2019s also an ethical issue. We gave our word that we would protect and respect their data. They trusted us with it. If we don\u2019t clean and manage that data properly before it\u2019s aggregated and shared, we\u2019ve violated that trust.<\/p>\n<p>So yes, that\u2019s how we think about ethics in practice.<\/p>\n<p>David Puner:<br \/>\nThat makes sense. Thank you. When organizations think about AI governance, they often focus on the technology, but you\u2019ve said the real vulnerabilities are often in the structures around the systems.
What kinds of governance blind spots do you see most often?<\/p>\n<p>Pamela Isom:<br \/>\nI see governance blind spots when people assume that if the AI says to do it, then do it. If the model says the information is correct, then the model said it, so let\u2019s just do it. That\u2019s the biggest one I see.<\/p>\n<p>I also see procurement teams using systems without ensuring that vendor claims match reality. That\u2019s a governance issue. There shouldn\u2019t just be a checkpoint. There should be stress testing to ensure that checkpoint actually works.<\/p>\n<p>I\u2019ve been fortunate to do government contracting. The government trusts me with data. We have a contract, and we have insights. Am I going to just release that to models because I\u2019m trying to figure something out? No.<\/p>\n<p>I see enterprises not getting their data in order for AI consumption. They\u2019re handing documents to models and then trusting them blindly. That\u2019s not what these systems were intended for.<\/p>\n<p>David Puner:<br \/>\nAt the Department of Energy, you helped launch the AI Advancement Council and developed an AI risk management playbook. What were some of the most important lessons from that experience, especially for leaders trying to build responsible AI programs today?<\/p>\n<p>Pamela Isom:<br \/>\nThe biggest lesson was not to operate in silos. This issue has been around for years.<\/p>\n<p>The challenge was figuring out how we can operate holistically while still respecting autonomy. That\u2019s why I\u2019m very focused on federated models. They respect autonomy but also allow you to operate holistically and pull information together.<\/p>\n<p>I was very proud to pioneer the AI Advancement Council. It brought together stakeholders from across the department. Department heads were one layer, and program offices or operational teams were another layer, and they needed to come together.<\/p>\n<p>They needed to come together vertically and horizontally. Why? 
Because teams were making AI investment decisions independently, asking Congress for funding for similar efforts, when there were opportunities to reuse and collaborate. That was the main intent, and I believe it was successful.<\/p>\n<p>The playbook gave me an opportunity to take a risk and help organizations understand AI risks and how to mitigate them. Around the same time, the Department of Commerce was developing its risk management framework, and we coordinated with them.<\/p>\n<p>DOE was able to pioneer an AI risk management playbook that included concrete examples. If you encounter this type of risk, here\u2019s what it looks like. Here\u2019s what you can do. Here are options to mitigate it. Again, being a risk taker while driving risk mitigation.<\/p>\n<p>David Puner:<br \/>\nAnd we keep coming back to that. It really seems like an anchoring mantra for you.<\/p>\n<p>Let\u2019s move over to innovation, national security, and public-private collaboration. You\u2019ve described innovation and national security as deeply connected. How is AI reshaping that relationship?<\/p>\n<p>Pamela Isom:<br \/>\nIt\u2019s not simple. Cybersecurity needs AI capabilities to help pinpoint and mitigate risks associated with modern threats, but AI also introduces new risks.<\/p>\n<p>We need AI to analyze information and understand patterns that humans can\u2019t see with the naked eye. That\u2019s how AI supports national security. It helps strengthen our security posture by enabling us to process big data and distinguish between what\u2019s real and what\u2019s fake.<\/p>\n<p>AI isn\u2019t just associated with fake data. It can also help identify invalid data points. It helps us understand what we\u2019re dealing with and provides recommendations to mitigate risks.<\/p>\n<p>David Puner:<br \/>\nWhat\u2019s the current state of public-private collaboration when it comes to securing AI systems and critical infrastructure? 
Where are we getting it right, and where do we need to do better?<\/p>\n<p>Pamela Isom:<br \/>\nWe\u2019re getting it right by leaning into data science, machine learning, and large language models. We\u2019re integrating cybersecurity and looking for threats that are amplified by AI.<\/p>\n<p>We\u2019re also using AI and emerging technology to help make communities and cities smarter. But we\u2019re still in an immature stage of AI adoption.<\/p>\n<p>Programs like Genesis are promising. They\u2019re just getting started, but they have a common mission. You see agencies and industry aligning around security, national security, grid resilience, and related goals.<\/p>\n<p>We\u2019re moving in the right direction, but we\u2019re not as mature as we could be. We will get there.<\/p>\n<p>We also need to move past the mindset of taking solutions at their word. We can\u2019t do that. We need to get better at explainability, traceability, and data lineage, and managing that lineage. There will always be opportunities to improve in those areas.<\/p>\n<p>David Puner:<br \/>\nAnd it sounds like at least some of those opportunities are human opportunities.<\/p>\n<p>Pamela Isom:<br \/>\nYes. If an agent is making recommendations, who\u2019s checking them? Who ensures that a recommendation made last week is still applicable this week?<\/p>\n<p>Where is the human? When does the model escalate to a human?<\/p>\n<p>Models are often trained to try to solve everything themselves. We\u2019ve all had experiences calling customer support and wanting to talk to a human. The system keeps trying to solve the problem on its own.<\/p>\n<p>I recently had an experience with a moving company while relocating. The agent was AI. I was trying to describe my workout equipment, and the system didn\u2019t understand. It generated a huge invoice because it thought all the parts were separate, even though they were one set.<\/p>\n<p>It wouldn\u2019t let me talk to a person. 
I had to call multiple times before reaching a human. They apologized and corrected the mistake, but I told them it shouldn\u2019t have taken that long, and I shouldn\u2019t have received that invoice.<\/p>\n<p>That was because of how the model was trained. It wasn\u2019t tested outside the happy path. That customer experience taught me again that you have to test beyond expected scenarios.<\/p>\n<p>David Puner:<br \/>\nIf you were advising a government agency or a Fortune 500 company today on how to work together more effectively on AI security, where would you start?<\/p>\n<p>Pamela Isom:<br \/>\nIf it\u2019s AI security specifically, I would start by identifying threats that are amplified by AI, such as data poisoning and model injection.<\/p>\n<p>Then I would focus on governance. I would start with those three areas: emerging threats, existing threats that are amplified by AI, and governance. Governance is the hidden one that we often overlook.<\/p>\n<p>David Puner:<br \/>\nWe\u2019re always looking into our crystal ball to a certain degree, but if you\u2019re looking into yours toward 2026 and beyond, thinking about emerging threats, and sitting down with a CISO or tech leader planning for that horizon, what\u2019s the one piece of advice you\u2019d want them to walk away with?<\/p>\n<p>Pamela Isom:<br \/>\nI would say don\u2019t over-delegate authority. The AI needs to understand where its authority is and where the delineations are, and you need to make sure that there are clear limits.<\/p>\n<p>Cybersecurity leaders can do that. They can work with the governance folks. I tell them to help the governance teams understand these are the things we should be watching out for.<\/p>\n<p>And then I would say embrace AI. Use it to help with defenses. Use it from an offensive standpoint, which is where we get back to red teaming.
Use it more.<\/p>\n<p>David Puner:<br \/>\nAnd that is coming from Pamela Isom, who is not only a risk taker, but a risk mitigator.<\/p>\n<p>Pam, before we go, you also host the podcast AI or Not. Where can our listeners find that? How often is it coming out, and what are your plans for 2026 with it?<\/p>\n<p>Pamela Isom:<br \/>\nThe podcast brings together business leaders from around the globe to talk about how they\u2019re using AI, and whether they\u2019re going to use it or not.<\/p>\n<p>It explores the viability of AI for addressing mission challenges, along with leading practices. It airs every other Tuesday, and it\u2019s on all podcast platforms.<\/p>\n<p>David Puner:<br \/>\nPamela, thank you so much for coming on to Security Matters. I really appreciate you taking the time, and I hope you have a happy New Year and a great 2026. I think they may be the same thing.<\/p>\n<p>Pamela Isom:<br \/>\nThank you for having me. This has been fun.<\/p>\n<p>David Puner:<br \/>\nAll right, there you have it. Thanks for listening to Security Matters. If you liked this episode, please follow us wherever you do your podcast thing so you can catch new episodes as they drop. And if you feel so inclined, please leave us a review. We\u2019d appreciate it very much, and so will the algorithmic winds.<\/p>\n<p>What else? Drop us a line with questions and comments, and if you\u2019re a cybersecurity professional with an idea for an episode, reach out.
Our email address is securitymatterspodcast@cyberark.com<\/p>\n<p>We hope to see you next time.<\/p><\/div>\n","protected":false},"featured_media":222177,"template":"","class_list":["post-222176","podcast","type-podcast","status-publish","has-post-thumbnail","hentry"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>EP 23 - Red teaming AI governance: catching model risk early | CyberArk<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.cyberark.com\/zh-hans\/podcasts\/ep-23-red-teaming-ai-governance-catching-model-risk-early\/\" \/>\n<meta property=\"og:locale\" content=\"zh_CN\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"EP 23 - Red teaming AI governance: catching model risk early\" \/>\n<meta property=\"og:description\" content=\"AI systems are moving fast, sometimes faster than the guardrails meant to contain them. In this episode of Security Matters, host David Puner digs into the hidden risks inside modern AI models with Pamela K. 
Isom, exploring the governance gaps that allow agent...\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.cyberark.com\/zh-hans\/podcasts\/ep-23-red-teaming-ai-governance-catching-model-risk-early\/\" \/>\n<meta property=\"og:site_name\" content=\"CyberArk\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/CyberArk\/\" \/>\n<meta property=\"article:modified_time\" content=\"2026-05-04T13:56:07+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2026\/01\/ZjQwZi5qcGc.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1400\" \/>\n\t<meta property=\"og:image:height\" content=\"1400\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:site\" content=\"@CyberArk\" \/>\n<meta name=\"twitter:label1\" content=\"\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4\" \/>\n\t<meta name=\"twitter:data1\" content=\"23 \u5206\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.cyberark.com\/zh-hans\/podcasts\/ep-23-red-teaming-ai-governance-catching-model-risk-early\/\",\"url\":\"https:\/\/www.cyberark.com\/zh-hans\/podcasts\/ep-23-red-teaming-ai-governance-catching-model-risk-early\/\",\"name\":\"EP 23 - Red teaming AI governance: catching model risk early | 
CyberArk\",\"isPartOf\":{\"@id\":\"https:\/\/www.cyberark.com\/zh-hans\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.cyberark.com\/zh-hans\/podcasts\/ep-23-red-teaming-ai-governance-catching-model-risk-early\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.cyberark.com\/zh-hans\/podcasts\/ep-23-red-teaming-ai-governance-catching-model-risk-early\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2026\/01\/ZjQwZi5qcGc.jpg\",\"datePublished\":\"2026-01-14T15:02:06+00:00\",\"dateModified\":\"2026-05-04T13:56:07+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.cyberark.com\/zh-hans\/podcasts\/ep-23-red-teaming-ai-governance-catching-model-risk-early\/#breadcrumb\"},\"inLanguage\":\"zh-CN\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.cyberark.com\/zh-hans\/podcasts\/ep-23-red-teaming-ai-governance-catching-model-risk-early\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-CN\",\"@id\":\"https:\/\/www.cyberark.com\/zh-hans\/podcasts\/ep-23-red-teaming-ai-governance-catching-model-risk-early\/#primaryimage\",\"url\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2026\/01\/ZjQwZi5qcGc.jpg\",\"contentUrl\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2026\/01\/ZjQwZi5qcGc.jpg\",\"width\":1400,\"height\":1400},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.cyberark.com\/zh-hans\/podcasts\/ep-23-red-teaming-ai-governance-catching-model-risk-early\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.cyberark.com\/zh-hans\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"EP 23 &#8211; Red teaming AI governance: catching model risk 
early\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.cyberark.com\/zh-hans\/#website\",\"url\":\"https:\/\/www.cyberark.com\/zh-hans\/\",\"name\":\"CyberArk\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/www.cyberark.com\/zh-hans\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.cyberark.com\/zh-hans\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"zh-CN\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.cyberark.com\/zh-hans\/#organization\",\"name\":\"CyberArk Software\",\"url\":\"https:\/\/www.cyberark.com\/zh-hans\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-CN\",\"@id\":\"https:\/\/www.cyberark.com\/zh-hans\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2021\/02\/cyberark-logo-dark.svg\",\"contentUrl\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2021\/02\/cyberark-logo-dark.svg\",\"width\":\"1024\",\"height\":\"1024\",\"caption\":\"CyberArk Software\"},\"image\":{\"@id\":\"https:\/\/www.cyberark.com\/zh-hans\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/CyberArk\/\",\"https:\/\/x.com\/CyberArk\",\"https:\/\/www.linkedin.com\/company\/cyber-ark-software\/\"]}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","_links":{"self":[{"href":"https:\/\/www.cyberark.com\/zh-hans\/wp-json\/wp\/v2\/podcast\/222176","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.cyberark.com\/zh-hans\/wp-json\/wp\/v2\/podcast"}],"about":[{"href":"https:\/\/www.cyberark.com\/zh-hans\/wp-json\/wp\/v2\/types\/podcast"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.cyberark.com\/zh-hans\/wp-json\/wp\/v2\/media\/222177"}],"wp:attachment":[{"href":"https:\/\/www.cyberark.com\/zh-hans\/wp-json\/wp\/v2\/media?parent=222176"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}