{"id":217174,"date":"2025-09-25T17:18:24","date_gmt":"2025-09-25T17:39:53","guid":{"rendered":"https:\/\/www.cyberark.com\/podcasts\/ep-16-sensing-the-signals-the-hidden-risks-in-digital-supply-chains\/"},"modified":"2026-04-07T06:55:22","modified_gmt":"2026-04-07T10:55:22","slug":"ep-16-sensing-the-signals-the-hidden-risks-in-digital-supply-chains","status":"publish","type":"podcast","link":"https:\/\/www.cyberark.com\/fr\/podcasts\/ep-16-sensing-the-signals-the-hidden-risks-in-digital-supply-chains\/","title":{"rendered":"EP 16 &#8211; Sensing the signals: The hidden risks in digital supply chains"},"content":{"rendered":"<p>Modern digital supply chains are increasingly complex and vulnerable. In this episode of Security Matters, host David Puner is joined by <a href=\"https:\/\/mitmgmtfaculty.mit.edu\/retsef\/\">Retsef Levi<\/a>, professor of operations management at the MIT Sloan School of Management, to explore how organizations can \u201csense the signals\u201d of hidden risks lurking within their software supply chains, from open source dependencies to third-party integrations and AI-driven automation.<\/p>\n<p>Professor Levi, a leading expert in cyber resilience and complex systems, explains why traditional prevention isn\u2019t enough and how attackers exploit unseen pathways to infiltrate even the most secure enterprises. 
The conversation covers the critical need for transparency, continuous monitoring, and rapid detection and recovery in an era where software is built from countless unknown components.<\/p>\n<p>Key topics include:<\/p>\n<ul>\n<li>How to sense early warning signs of supply chain attacks<\/li>\n<li>The role of AI and automation in both risk and defense<\/li>\n<li>Best practices for mapping and securing your digital ecosystem<\/li>\n<li>Why resilience\u2014not just prevention\u2014must be at the core of your security strategy<\/li>\n<\/ul>\n<p>Whether you\u2019re a CISO, IT leader or security practitioner, this episode will help you rethink your approach to digital supply chain risk and prepare your organization for what\u2019s next.<\/p>\n<p><a href=\"https:\/\/podcasts.apple.com\/us\/podcast\/security-matters\/id1620186412\">Subscribe to Security Matters<\/a> for expert insights on identity security, cyber resilience and the evolving threat landscape.<\/p>\n<div class=\"transcript\" style=\"white-space:pre-line\"><p>David Puner: You are listening to the Security Matters podcast. I&rsquo;m David Puner, senior editorial manager at CyberArk, the global leader in identity security.<\/p>\n<p>Imagine buying a car with no make or model. You wouldn&rsquo;t know what&rsquo;s under the hood, who built it, or whether it&rsquo;s safe to drive. That&rsquo;s how most enterprises run their digital supply chains today\u2014critical software made of unseen modules built by unknown contributors with vulnerabilities you can&rsquo;t trace.<\/p>\n<p>Resilience begins when the unexpected happens, and the only way to catch that shift is by sensing it before it&rsquo;s too late. Prevention matters, but attackers always have the first move. Defenders must detect fast and recover faster. 
As AI and AI agents enter the mix, identity more than ever cannot be a one-time login.<\/p>\n<p>It&rsquo;s about continuous verification\u2014behavior versus intent\u2014across people, services, APIs, bots, and a growing landscape of AI-driven tools. Today&rsquo;s guest, Retsef Levi, professor at the MIT Sloan School of Management, helped pioneer early cyber capabilities in Israeli military intelligence. Now he researches resilience in complex systems\u2014from hospitals to global supply chains to AI-enabled enterprises.<\/p>\n<p>In our conversation, Retsef unpacks why prevention alone isn&rsquo;t enough, how to weave resilience into the fabric of organizations, and why identity must expand to include machines and AI itself. This is Security Matters. Let&rsquo;s dive in.<\/p>\n<p>David Puner: Retsef Levi, professor of Operations Management at the MIT Sloan School of Management. Welcome to Security Matters. Thanks for coming onto the podcast.<\/p>\n<p>Retsef Levi: Thank you for having me. I look forward to the discussion.<\/p>\n<p>David Puner: Absolutely. I presume this will be the best part of your day, considering that we&rsquo;re recording fairly early for this particular podcast. So let&rsquo;s just jump in. There&rsquo;s a lot of great stuff to cover here. You&rsquo;ve had an unusual path\u2014from pioneering Israel&rsquo;s first cyberattack capabilities as a military intelligence officer to leading research at MIT. How have those experiences shaped the way you think about AI and what it takes to build truly resilient systems?<\/p>\n<p>Retsef Levi: Thank you. To be honest, I cannot take too much credit for planning my career. It came about somewhat by coincidence and a run of events that turned out to be very fortunate for me. I grew up in Israel and, like any 18-year-old, joined the military. 
At the beginning, I was planning to actually go to Academic Reserve, do a bachelor\u2019s degree in mathematics and physics, and then decided that at the age of 18 I wasn\u2019t ready to commit.<\/p>\n<p>David Puner: Mm-hmm.<\/p>\n<p>Retsef Levi: So I was sent back to the military and was being triaged. Ironically, one of the people who triaged me told me at some point, \u201cHey, you are not good enough in mathematics, so we are going to send you to some course in Arabic,\u201d because I actually studied Arabic in school. And I was like, what do you mean I\u2019m not good at math? I think I\u2019m pretty good at math. That sounded at the time like a horrible statement, but it turned out that it was a course that ultimately led me to join a unit that was focused on special operations within the intelligence forces in the Israeli army.<\/p>\n<p>At some point, I was tasked with some kind of startup within the organization to think about what new technologies were emerging and what opportunities they gave rise to. I was fortunate to be part of the pioneer team that developed some cyberattack capabilities. Then, at some point, the military sends you to go to university. So I was sent to study at Tel Aviv University and initially decided, okay, I\u2019m going to do mathematics because I had to show this guy.<\/p>\n<p>David Puner: Back to the math.<\/p>\n<p>Retsef Levi: That I\u2019m actually pretty good in mathematics, and I\u2019m going to do psychology together with that. I wanted to study the brain and had some thoughts about that. It was funny\u2014there were some problems with the schedule of the two concentrations, mathematics and psychology, and some bureaucracy didn\u2019t allow me any flexibility. So at some point I said, for the heck of it, I\u2019m just going to do mathematics. I had to choose a track, and I opened the book and looked. I said, \u201cOh, theoretical math\u2014I\u2019m not good enough. I\u2019m not smart enough. 
Computer science\u2014I don\u2019t like coding.\u201d Then I saw the term operations research, and I was like, wow. I started reading. I didn\u2019t know what it was, and it looked interesting because it was about decisions and modeling systems, and from the description, it sounded very interesting. I just, by instantaneous intuition, said, okay, I\u2019m going to take that. That sounds interesting, knowing nothing about it.<\/p>\n<p>A year later, the military called me to go back in the middle of my studies. So after one year of my bachelor\u2019s degree, between 26 and 27, I stopped for two years, went back to do a role in the military, and then came back between 29 and 31 to finish my bachelor\u2019s degree.<\/p>\n<p>At that point, I had a few options in front of me. I was offered a promotion to lieutenant colonel, but I decided that the roles offered to me were not interesting enough. I already had two kids at that time, and I decided\u2014with the encouragement of my wife\u2014to take a bet and go do a PhD.<\/p>\n<p>At the age of 31, I came to Cornell, did a PhD in operations research, and somehow ended up at MIT. I started in 2002. The PhD finished in 2005, and by 2006 I was at MIT as a faculty member, and I\u2019ve been there since then. It\u2019s probably something I would never have imagined, to be honest.<\/p>\n<p>Even throughout my career, the time in the military\u2014or maybe outside academia more generally\u2014really shaped the way I think about research in several ways. The first is that I try to motivate my research with things that are really practical\u2014things where you can see how academic research touches actual decision-making or actual complex systems that operate under a lot of uncertainty.<\/p>\n<p>Because if you think about most business systems and other systems in our life, they need to manage a lot of uncertainty. 
That\u2019s always something that attracted me as an academic because I was also dealing with that as a practitioner when I was in intelligence. So that attracted me to think about very complex systems in different areas.<\/p>\n<p>I\u2019ve been working in the area of healthcare\u2014a lot of work with healthcare systems, manufacturing biologic drugs, food and agriculture supply chains, water systems more recently, and retail logistics. Essentially, I can get interested in any complex system that operates under a lot of uncertainty and with great consideration to risk. I think that tendency probably originates from my background in the military and the way I think about uncertainty. It\u2019s really through the notion of risk and intelligence, and I think there\u2019s a lot of overlap between my academic work and my experience as an intelligence officer.<\/p>\n<p>Now, AI to some extent seems like a new trend, but one way to look at it is as a new set of technologies that allow us to design work, make decisions, and design business and organizational systems that are just more intelligent. But many of the dynamics, including issues related to resilience, are not going to be fundamentally different. They\u2019re going to be potentially new aspects, maybe important ones, but it\u2019s not an entirely new set of problems. There are close connections between things I was thinking about before AI emerged as the next buzzword and how I think about it now in the context of AI.<\/p>\n<p>David Puner: Really interesting\u2014this thread about your curiosity in uncertainty and complexity and how it\u2019s taken you down this path. AI and AI-enabled systems\u2014do you think it enables uncertainty, or is it a way to cut through uncertainty, if it\u2019s even on that thread?<\/p>\n<p>Retsef Levi: I think the answer is both, and let me give a few examples that maybe illustrate why it\u2019s both. 
In my mind, one of the things that is very important to managing risk is sensing.<\/p>\n<p>If you think about a complex system and the human operators or decision-makers who have to make decisions within that system, and you analyze big mistakes\u2014major failures of systems\u2014more often than not, they stem from the fact that the operators did not have good situational awareness of the system\u2019s state. They operated the system with one set of assumptions, while the system was actually in a completely different state.<\/p>\n<p>That dynamic can happen at the individual level\u2014when you think about medical errors in hospitals. Often, \u201cI just assumed you needed this dose,\u201d but in fact, the dose was changed, or someone else gave a contradictory drug. It can also happen at the system level, when people within systems operate based on one set of assumptions that don\u2019t match reality. More often than not, that comes from insufficient sensing of the system itself and its environment.<\/p>\n<p>In that respect, if you think about what AI brings to the table\u2014the major enabler of this revolution we\u2019re seeing\u2014it\u2019s the greater-than-ever ability to sense systems and create better situational awareness of those systems. That gives us a major set of capabilities to sense systems better than ever and understand the signals we get.<\/p>\n<p>Let me give a few examples. Let\u2019s think about retail. When I grew up, we would go to the local grocery store, and the person would write down what we bought, and we\u2019d pay once a month. Then came credit cards and point-of-sale data, so you could track and sense what customers were buying. Today, we have sensing systems that can track everything a customer looks at, or even talks about near Alexa. You talk about something, and the next day you get a promotion for that something. 
These are examples of very advanced sensing\u2014sometimes scary\u2014because they basically know everything about us, and it allows the system to know what I\u2019m interested in, what might fit me best, and operate in a much more effective and efficient way.<\/p>\n<p>Now let\u2019s give a completely different example. I\u2019m wearing a Whoop. You\u2019re wearing an Apple Watch, right? So think about that. That\u2019s a sensor that monitors our health status or vital signs in a completely continuous manner\u2014something that in the past could only be measured if you came to a doctor\u2019s office and someone put a device on you.<\/p>\n<p>In that respect, I think AI is bringing a lot to the table. The flip side is that AI-enabled systems are more complex. They bring more complexity because there are more sensors, more advanced algorithms that can be increasingly opaque. There\u2019s a lot of data and inputs now being brought to bear that you don\u2019t fully understand.<\/p>\n<p>That complexity and that opaqueness bring a lot of risks. Like most things in life, there are pros and cons. There are things getting better, allowing us to manage risk and do things we never could before\u2014like, if I\u2019m a bank today, I can track for fraud in a much better, more efficient way with the capability of analyzing unstructured data. But on the other hand, we\u2019re creating, maybe without realizing it, more and more complex systems that are really opaque, and that\u2019s a major source of risk and something that can harm resilience.<\/p>\n<p>David Puner: Resilience. So we\u2019ve heard you talk about resilience and AI-enabled systems. What does that really mean and why is it so critical right now?<\/p>\n<p>Retsef Levi: Okay, so let\u2019s just step back. I think that the importance of resilience became very apparent to all of us, particularly during the COVID years, when many critical systems in our life did not work anymore and were seriously disrupted. 
Many of them were systems that we would\u2019ve taken for granted\u2014that they are there and will always operate.<\/p>\n<p>Right. And since then, resilience became a major issue for essentially every company. But not only companies\u2014most governments, especially the federal government in the US, are very concerned with resilience. People talk about the resilience of the medical supply chain, the resilience of semiconductor chips\u2014all of these supply chains that enable critical aspects of our society and some of the most critical industries for national security and the wellbeing of all of us, like food and so forth.<\/p>\n<p>Interestingly, I don\u2019t think that we currently have a very formal or agreed-upon way to think about resilience. That\u2019s something I\u2019ve been thinking about also as an academic, because I believe that if you don\u2019t have a clear, agreed-upon concept of what you mean when you talk about resilience, then it\u2019s going to be very hard to manage it.<\/p>\n<p>David Puner: Mm.<\/p>\n<p>Retsef Levi: Where I ended up in my thinking about this is using the concept of what I call irregular operations to define resilience. So let me explain what I mean by that. If you think about any organizational system or business system, it has an operating scheme. This is the way the system operates on a regular basis, and that operating scheme, more often than not, has a lot of means to manage uncertainty.<\/p>\n<p>So, for example, if you think about the operating scheme of an airline company, clearly it would be problematic if they didn\u2019t have a lot of things to deal with, let\u2019s say, weather. However, any operating scheme has what we call enabling operational conditions, or operational boundaries. These are the things you have to have in place to allow the system to operate in steady state successfully. 
Most of the time in regular operations, these enabling conditions are there and the boundaries are not broken.<\/p>\n<p>But there are times when these enabling operational conditions are not valid anymore\u2014they are disabled for one reason or another\u2014and the operational boundaries of the system are broken. In that case, the system has to adopt an entirely different operational strategy and tactics, and resilience, in my mind, is the ability of systems to do this successfully. In fact, many risk management failures stem from the failure of the system to sufficiently manage irregular operations.<\/p>\n<p>Now, the disruption could be quite broad\u2014it could be a pandemic, a cyberattack, a supply chain disruption, a political or geopolitical event, a regulatory event. But more often than not, it turns out that the operational enabling conditions of systems are not necessarily the shiny, expensive things.<\/p>\n<p>David Puner: Okay?<\/p>\n<p>Retsef Levi: More often than not, they are the things you take for granted. My favorite example: until 2020, if you asked hospitals whether surgical gloves are an enabling condition for them, they\u2019d tell you, \u201cWhat are you talking about? This is the cheapest supply in the hospital.\u201d We have all this expensive, shiny stuff\u2014the buildings, the doctors\u2014but if you don\u2019t have surgical gloves, you cannot perform surgeries.<\/p>\n<p>The lesson is that it\u2019s not obvious for an organization to understand what the enabling operational conditions are that the organization relies on. In fact, I would argue that most organizations operate without having a good or comprehensive\u2014and definitely not collective\u2014understanding of the operational boundaries of their system and the enabling operational conditions they have. That\u2019s a source of big risk. Why? 
Because if you don\u2019t actively and purposefully know that something is enabling your operations, then you might not even sense that your system is shifting from regular operations to irregular operations. And coming back to situational awareness, if you don\u2019t have that situational awareness, if you don\u2019t have the plan to react, then you are more likely than not going to fail.<\/p>\n<p>David Puner: So 2020 and what followed\u2014was that a wake-up call for companies? Or do you think they\u2019ve come a long way and they\u2019re just still not there?<\/p>\n<p>Retsef Levi: Absolutely, it was a wake-up call. I think companies are now thinking about this more and more. They want to do it. Cyber is one of the things that you probably know they think about. But the fact that you want to think about something doesn\u2019t mean you immediately get it right or think about it the right way.<\/p>\n<p>David Puner: Right?<\/p>\n<p>Retsef Levi: Usually companies and institutions want to buy something that will solve the problem. The fact of the matter is that you actually need to fundamentally change the way you manage your internal processes and really understand your system, because if you don\u2019t understand your system, it\u2019s going to be very hard to manage it and keep it resilient. I think some companies are doing it much better. Some led even before COVID in terms of resilience.<\/p>\n<p>If you think about companies like Cisco or Apple, they understand their supply chain five or six levels upstream. They will validate the supplier of the supplier of the supplier, and they will have transparency and monitoring of all these nodes in their supply chain. They understand that sometimes a very upstream disruption can trickle downstream and affect their operations and their resilience. They want to know about this in a timely manner and act.<\/p>\n<p>That\u2019s the physical supply chain. Let\u2019s take it back to cyber. 
If you think about many of the cyberattacks happening in recent years\u2014at least many of the successful ones\u2014they actually leverage what I call the digital supply chain. The attacker or the attack vector leverages the fact that our operations increasingly rely on digital products, on software, on digital products that have their own supply chain.<\/p>\n<p>David Puner: Mm.<\/p>\n<p>Retsef Levi: When it comes to our digital supply chains, I think most companies, if not all, are completely blind. At best, you know who you\u2019re buying an app from, but you don\u2019t know how this app was developed. You don\u2019t know what submodules were inserted or are being used by this app. Most companies don\u2019t have a very good clue about their digital supply chain.<\/p>\n<p>David Puner: So with the digital supply chain\u2014with so many components coming from different sources\u2014how can security leaders even begin to map and secure that ecosystem?<\/p>\n<p>Retsef Levi: I think it\u2019s a very non-trivial challenge. It starts by asking questions and building best practices to understand your digital supply chain and having sensors to try and figure it out. For example, one sensor could be: I need to understand all of the vulnerabilities that emerge and which of them could potentially have an impact on me. Some of this might be done by companies themselves, some by service providers going forward.<\/p>\n<p>Some of it should potentially be addressed by regulatory requirements, because currently we have far fewer regulatory requirements in software for describing what the product actually is. Take a completely different example: when you take a pharmaceutical drug, the vendor is obliged to tell you exactly what is in the drug; otherwise it won\u2019t be approved. If you buy a car, you get a clear description of all the parts and where they\u2019re coming from. 
When you buy a software package, more often than not nobody tells you where these things are coming from.<\/p>\n<p>In fact, many software modules used today are developed as open source. Those of us who understand what this means know there are hundreds of thousands of individual developers contributing to that code. Most of them are completely anonymous, and they\u2019re managed in ways that I don\u2019t think anybody fully understands. If you think about the contrast between the examples I gave\u2014drug, car, and then a software piece that might be responsible for some of the most critical operations and functions in your organization\u2014that\u2019s a little bit scary. It\u2019s not surprising that many cyberattacks are leveraging that against us.<\/p>\n<p>David Puner: Which is a little ironic considering open source software is often seen as a way to increase transparency.<\/p>\n<p>Retsef Levi: Theoretically speaking, yes. But if you don\u2019t understand the creation process and who got involved in that creation, then I don\u2019t think you really get transparency, or at least not transparency sufficient to protect yourself from malicious actors.<\/p>\n<p>David Puner: Are there practical steps that companies can take to use open source software safely, or to use the products that open source software creates?<\/p>\n<p>Retsef Levi: I think it\u2019s really an industry effort to change the standards of how these projects should be managed and the transparency they should give to the people who use them. 
Currently, I don\u2019t think there are expectations\u2014at the industry level or the regulatory level\u2014and we will have to change that.<\/p>\n<p>I\u2019ve done active research on this, and we are trying to understand the different management practices of different open source projects: the composition of contributors, the people who participate in the development; the dependencies between these products\u2014because these products sometimes use each other; how quickly vulnerabilities are detected; and what the exposure time is. What we are seeing suggests high heterogeneity between projects, and it cannot be that all of them are doing it optimally. We need to learn the best practices and then expect more and more well-thought-out standards.<\/p>\n<p>David Puner: You\u2019ve described AI models or data\u2014and even their developers\u2014as opaque. How can organizations tackle that opacity? Is it realistic to make these systems more transparent or auditable? That seems like the answer, but is it realistic?<\/p>\n<p>Retsef Levi: Before I answer that, I want to illustrate the challenge. If you think about traditional AI\u2014the very basic one\u2014people took some dataset and ran linear regression on it. You usually owned the dataset as the organization; you knew what it was. The model was maybe one parameter for each variable, and it was very interpretable. You could understand the data and the model.<\/p>\n<p>Today, if you think about large language models, for example, they\u2019re trained on the entire internet. You don\u2019t have a clue what data is being used there, so the data is completely opaque to you, and it changes over time. They also have north of 10^12 parameters, and nobody really understands how these things work. 
Then you take these modules and either use them to do things in an automated way, or you let human operators interact with them in a way that is completely uncontrolled and unstandardized.<\/p>\n<p>Just to illustrate the level of opacity: this is a highly opaque system that almost by design has unknown boundaries. I\u2019m not sure I have all the answers about how to do this. But I know some of the questions we need to ask ourselves.<\/p>\n<p>First: how do we put guardrails in place that make sure, even if something goes wrong\u2014and we are not aware of it because it\u2019s opaque\u2014it will have a limited impact on our operations? Second: we need much more standardization and transparency about how people are using these tools. To some extent, most companies implementing copilots and agents don\u2019t carefully monitor how people are using them. They leave it to you to figure out, or maybe they give some guidance.<\/p>\n<p>We are creating environments where we will have human intelligence and agents, and AI. We need to start treating these AI agents as another \u201chuman\u201d we need to worry about in terms of individual behaviors, the same way companies are worried about an employee doing something really bad or misusing privileges. I\u2019m sure that\u2019s one of the scenarios CyberArk tries to address. We\u2019ll have to develop and expand these concepts to think about machine agents operating within our business environment. They will have permissions, roles, behaviors, and some will be uncontrolled or unexpected. We\u2019ll need sensing systems to identify this as quickly as possible and prevent or contain the damage.<\/p>\n<p>One of the things that is fundamentally important in cybersecurity, but often missed, is that most of the focus is on preventing a problem\u2014preventing an attack, blocking an attack. 
As an attacker, I know it doesn\u2019t matter how much effort you put into prevention\u2014you don\u2019t want to make it extremely easy to attack you; you want to make it fairly hard\u2014but\u2026<\/p>\n<p>David Puner: Right back to the resilience.<\/p>\n<p>Retsef Levi: Yes. You have to assume that you\u2019re going to be attacked successfully. If someone really wants to attack you, they will be able to attack you. Resilience lies more in the ability to detect fast\u2014to sense fast\u2014that you were attacked, and recover fast. Not necessarily being able to prevent.<\/p>\n<p>Statistics today\u2014and I think they\u2019ll get worse as systems become more complex\u2014show that companies take, on average, six months to discover they were attacked. That\u2019s a staggering statistic.<\/p>\n<p>David Puner: Right. And to that point, in cybersecurity, many focus on prevention. Why is that not enough, and what mindset shift is needed so companies also focus on detection and recovery?<\/p>\n<p>Retsef Levi: The principle of these complex systems is that the attacker has an advantage because the attacker is the initiator. When you defend\u2014when you are in prevention mode\u2014you are static; you are responsive almost by design. Yes, you should do things to make it hard for the attacker, but experience suggests that given a sufficiently qualified attacker\u2014and we haven\u2019t even spoken about this, but these AI technologies are giving attackers tools they didn\u2019t have in the past, both in sophistication and in scale\u2014they can try many times, simultaneously, and increase their capacity.<\/p>\n<p>So you have to assume that you are going to be attacked successfully\u2014you\u2019re going to be penetrated, something will be disabled. 
The question is how quickly you can detect that and how quickly you can recover, because finding out in two hours, three days, one week, or six months will make all the difference in the world for the impact on your organization and for its resilience. I cannot emphasize this enough: being able to sense that you were attacked\u2014that you are in irregular operations\u2014and recover quickly with minimal impact is absolutely critical, and I don\u2019t think we\u2019re emphasizing it enough at the moment.<\/p>\n<p>David Puner: So with attackers already using AI to scale and adapt, how should organizations respond? Is it really going to take AI to fight AI, or do the fundamentals still hold up?<\/p>\n<p>Retsef Levi: I think it\u2019s going to be a combination of things. And again, I don\u2019t know where we\u2019re going to end up. This is an ongoing evolution, right? You don\u2019t have a crystal ball. The defenders do something, then the attackers do something in response, and back and forth. That\u2019s how life works.<\/p>\n<p>But one thing I\u2019d like to emphasize is that no matter how sophisticated the technology you leverage to defend yourself, the best defense is understanding your system, and the best defense is having your operators understand the system they operate. The more they understand it, the better chance you have that they\u2019re not going to make mistakes and will be able to detect these types of attacks. That goes back to a set of processes and behaviors you want to be disciplined about. Because even today, most of the attacks are essentially the result of human errors.<\/p>\n<p>David Puner: Mm.<\/p>\n<p>Retsef Levi: Not all of them, but most of them are. 
There is a significant element of human error, and typically this human error comes from lack of sensing, lack of situational awareness, lack of discipline and clear processes, or cultural issues where people just don\u2019t follow what they\u2019re supposed to follow.<\/p>\n<p>And, by the way, having sensing about that\u2014when people don\u2019t follow what they\u2019re supposed to do before something happens\u2014that\u2019s part of the sensing capabilities organizations have to put in place. Unfortunately, we often think about it as, \u201cOkay, give me this silver-bullet product that will give me the solution.\u201d That\u2019s more often than not insufficient. It could be a pivotal element in what you\u2019re trying to do, but if you don\u2019t build a process and situational awareness of your system around it, these products are not going to be very effective.<\/p>\n<p>To some extent, identity verification is part of creating this kind of transparency, if you think about it.<\/p>\n<p>David Puner: As AI changes the threat landscape, do you see identity becoming an even bigger target? And what kinds of attacks concern you most?<\/p>\n<p>Retsef Levi: Absolutely. We\u2019ve tended to think so far about human identity\u2014user identity\u2014and when we said user identity, traditionally we meant humans. Now we\u2019re going to have user identity associated with agents\u2014AI agents\u2014or machines. We will need to expand that process.<\/p>\n<p>The other thing is, traditionally we think about identity verification as a transactional thing. \u201cI need to log into the computer and you\u2019re going to verify that I am who I am.\u201d Going forward, identity verification will have to expand its notion to something continuous. I\u2019m going to look at what you\u2019re doing vis-\u00e0-vis what you are expected to do, and I\u2019m going to try to highlight the deviations or prevent them. 
To me, that\u2019s a broader notion of identity verification that is not just about who you are, but also about what you\u2019re supposed to do versus what you\u2019re actually doing. We\u2019ll need to expand in that direction to address the challenges that are going to grow. Deepfakes and other things now at the disposal of attackers are creating a lot of new scenarios that I don\u2019t think we even know all of at the moment.<\/p>\n<p>David Puner: With so much talk about automation, how do you figure out what should stay human and what\u2019s safe to hand over to machines?<\/p>\n<p>Retsef Levi: We are going through a new revolution in how we design work. Historically, the first revolution was the industrial revolution\u2014the assembly line. The premise was: we\u2019re going to take very complex tasks and break them into very small, repeated tasks that can be done at scale by relatively untrained people.<\/p>\n<p>David Puner: Mm-hmm. Assembly line style.<\/p>\n<p>Retsef Levi: Yeah, the assembly line. That drove one revolution in how we do work. The second revolution was the re-engineering or service-sector revolution, where we realized that when you provide service, the inputs and outputs are unpredictable. You need to train people responsible for outcomes\u2014sometimes called caseworkers. You\u2019re a guest in my hotel; my outcome is that you\u2019re going to be happy. To accomplish that, I need to coordinate and do tasks, sometimes supported by experts.<\/p>\n<p>Now, the current revolution we\u2019re going through is again changing the way we design work. First, it automates a lot of tasks\u2014not only the simple ones on the assembly line, which are the easiest to automate because they\u2019re very repetitive and predictable. That\u2019s the low-hanging fruit. But now, with more advanced tools, we are going to automate a lot of caseworkers. 
We are going to either replace them or put agents alongside them to help with that work.<\/p>\n<p>A third change is sensing. We are going to have sensing that is far more advanced than we ever had, using many modalities of sensors we put in place to understand our complex systems. The challenge is, as we do this, we are losing transparency into how the system actually works. If we\u2019re not careful and don\u2019t know what we\u2019re doing, these copilots and agents, and the way humans use them, are going to create a lot of unexpected patterns in our system.<\/p>\n<p>So the key will be getting more order and discipline into how we use these tools, then putting the right sensing systems over them so we have a clear understanding of what we expect to happen. That\u2019s one key element. I don\u2019t think we\u2019re currently designing or formulating an understanding of what we expect to happen, because that\u2019s really about process design.<\/p>\n<p>The second thing is putting sophisticated sensing capabilities in place to understand when there is deviation from what is expected. Without those two elements, you\u2019re going to be very challenged to manage both risk and resilience.<\/p>\n<p>David Puner: There\u2019s a lot of concern about humans losing skills as AI takes over more tasks. How real is that risk? And from your perspective teaching young future leaders in the classroom at MIT, what can we do to prevent it\u2014and how are they looking at that risk?<\/p>\n<p>Retsef Levi: I\u2019m glad you brought it up, because I\u2019m extremely concerned. At a high level, I call it human capability erosion. One thing that is misleading\u2014there are studies that say, \u201cIf I take one of my employees and put them with a copilot, I increase their productivity.\u201d That\u2019s great. Amazing. 
But what this overlooks is: if you took an experienced employee who knows how to do the work from experience and then you add augmentation with an agent, that\u2019s one thing. It\u2019s fundamentally different than if that person was initially trained with the presence of that copilot.<\/p>\n<p>My joke is we all use Waze and navigation apps. Very useful. They enhance our ability to get from one place to another. But if Waze goes down tomorrow, there will be some kids who won\u2019t be picked up from school because their parents won\u2019t know how to get there. It sounds extreme, but there\u2019s a core of truth.<\/p>\n<p>When I train people, I need to be very careful about the difference between augmenting skills\u2014enhancing performance\u2014and building capability. We confuse performance and learning. Think about what you understand the most in life. I\u2019d bet it\u2019s the things you struggled to understand\u2014where you had to make an effort, tried once, it didn\u2019t work, you failed, and ultimately overcame the difficulty. Same with sport: high performance comes from trying again and again, failing, and the effort involved.<\/p>\n<p>Many of these tools remove the effort element. \u201cI have a question, I\u2019ll type it into the chat, it gives me the answer, I feel good,\u201d but I\u2019m not learning. I\u2019m shutting down my brain to some extent. We have to think strategically about how we use these tools and not confuse training and building human capability with just enhancing performance. They are not the same concepts.<\/p>\n<p>To make the point more relevant here: the culture of leadership is affected\u2014not positively\u2014by digitization. People feel they don\u2019t need to go to the field and see their system with their own eyes. In my own practice as a leader, there was never a time I went to watch a complex phenomenon with my own eyes and didn\u2019t come out with a major insight. 
We say, \u201cLet\u2019s use data,\u201d but we forget most data are created by existing systems and processes. If you don\u2019t understand those, you don\u2019t understand what\u2019s going on.<\/p>\n<p>\u201cI\u2019ll sit and look at numbers, analyze them, apply sophisticated models.\u201d That\u2019s important and powerful\u2014I do it all the time. But it has to be contextualized. The only way to get context is by going and watching, getting a sense, the intuition, at the tips of your fingers. We have fewer leaders and workers with that sense. The risk is that by injecting these systems, we will lose that touch.<\/p>\n<p>David Puner: Do you think part of the challenge is that people, and maybe even organizations, struggle to see the full picture\u2014to peel back all the layers and understand both the objective facts and the more nuanced elements at play?<\/p>\n<p>Retsef Levi: I would not use the words subjectivity and objectivity because they almost suggest an assessment. I want to say something different: we need to understand what humans and machines are good at.<\/p>\n<p>David Puner: Hmm. Okay.<\/p>\n<p>Retsef Levi: Machines are good at many things. They\u2019re fast, multitasking, they don\u2019t get tired or emotional, they can process unstructured data. Humans are far better than machines at understanding nuances and context. If you put them in a situation where they lose that ability\u2014something you can only do when you have a sense of the system you operate\u2014you will lose a critical capability of your system. That is critical to resilience. A lot of the time, resilience is understanding that something in the context changed. This is not the same context we usually have; we need to do something different. Machines can help, but there will always be points where only humans can figure it out. My concern is we are losing that capability. 
That would be quite scary, and organizations in that situation will have a problem.<\/p>\n<p>David Puner: You\u2019ve said addressing AI risks will take companies, governments and academia working together. What does that look like in practice\u2014standards, alliances, something else?<\/p>\n<p>Retsef Levi: I don\u2019t think we have a good answer. When you think about these developments and resilience more broadly, one interesting thing is resilience is in everyone\u2019s interest. You can\u2019t find stakeholders who think resilience is not important. The problem is it\u2019s not obvious who pays for resilience. Most economic models we have for markets and ecosystems don\u2019t have that concept well defined.<\/p>\n<p>So one aspect is: how do we think about resilience at the industry, market and national levels? What should government require as regulatory expectation? What should be regulated by industry? What should be standards? Who pays? How do we price it? I don\u2019t think those concepts are well thought out yet. That\u2019s a ripe area for collaborative thinking across industry, government and academia.<\/p>\n<p>Another thing: many of the questions we raised require sharing data across stakeholders. With resilience\u2014and specifically cyber\u2014to improve the system you need to share data. For obvious reasons, that\u2019s a challenge; nobody wants to share their own attack data. We need brokering mechanisms to share data safely. Academia can play a role as a trusted partner. And there are many questions for which I don\u2019t know the answer, and I feel nobody else does either. We need to figure it out together. 
Collective thinking from different perspectives will be critical.<\/p>\n<p>David Puner: Do you think market forces alone can or will ever incentivize resilience, or will it require regulation\u2014or something like cyber insurance requirements\u2014to push organizations to invest in it?<\/p>\n<p>Retsef Levi: My guess is that some elements of regulatory intervention and more explicit expectations about what you should and should not do will have to play a role. To some extent, this is one of those risks where, if you look at an individual company, the risk is small. If you look at an industry, the risk is almost certain to materialize, and the disruptions and impact could be very high. We need more centralized intervention\u2014with limits, of course. We don\u2019t want to overregulate and inhibit innovation.<\/p>\n<p>There are also issues related to data transparency. A few large players have taken over what I would consider the next natural resource\u2014data\u2014and they control it in many ways. We need to worry about that too. There\u2019s a complex set of issues we have to think about deeply, and we need all the stakeholders at the table to figure out the answers together.<\/p>\n<p>David Puner: Really fascinating stuff. So much we know and don\u2019t know\u2014probably more that we don\u2019t know. Thank you, Retsef, for coming on the podcast. Really great to speak with you. Hope we can do it again sometime down the road.<\/p>\n<p>Retsef Levi: My pleasure. Anytime. Thank you very much for the opportunity to share some thoughts.<\/p>\n<p>David Puner: All right, there you have it. Thanks for listening to Security Matters. If you liked this episode, please follow us wherever you do your podcast thing so you can catch new episodes as they drop. And if you feel so inclined, please leave us a review. We\u2019d appreciate it very much, and so will the algorithmic winds.<\/p>\n<p>What else? Drop us a line with questions or comments. 
And if you\u2019re a cybersecurity professional and you have an idea for an episode, drop us a line. Our email address is Security Matters podcast, all one word @ cyberark.com. We hope to see you next time.<\/p><\/div>