September 25, 2025
EP 16 – Sensing the signals: The hidden risks in digital supply chains

Modern digital supply chains are increasingly complex and vulnerable. In this episode of Security Matters, host David Puner is joined by Retsef Levi, professor of operations management at the MIT Sloan School of Management, to explore how organizations can “sense the signals” of hidden risks lurking within their software supply chains, from open source dependencies to third-party integrations and AI-driven automation.
Professor Levi, a leading expert in cyber resilience and complex systems, explains why traditional prevention isn’t enough and how attackers exploit unseen pathways to infiltrate even the most secure enterprises. The conversation covers the critical need for transparency, continuous monitoring, and rapid detection and recovery in an era where software is built from countless unknown components.
Key topics include:
- How to sense early warning signs of supply chain attacks
- The role of AI and automation in both risk and defense
- Best practices for mapping and securing your digital ecosystem
- Why resilience—not just prevention—must be at the core of your security strategy
Whether you’re a CISO, IT leader or security practitioner, this episode will help you rethink your approach to digital supply chain risk and prepare your organization for what’s next.
Subscribe to Security Matters for expert insights on identity security, cyber resilience and the evolving threat landscape.
Imagine buying a car with no make or model. You wouldn’t know what’s under the hood, who built it, or whether it’s safe to drive. That’s how most enterprises run their digital supply chains today—critical software made of unseen modules built by unknown contributors with vulnerabilities you can’t trace.
Resilience begins when the unexpected happens, and the only way to catch that shift is by sensing it before it’s too late. Prevention matters, but attackers always have the first move. Defenders must detect fast and recover faster. As AI and AI agents enter the mix, identity more than ever cannot be a one-time login.
It’s about continuous verification—behavior versus intent—across people, services, APIs, bots, and a growing landscape of AI-driven tools. Today’s guest, Retsef Levi, professor at the MIT Sloan School of Management, helped pioneer early cyber capabilities in Israeli military intelligence. Now he researches resilience in complex systems—from hospitals to global supply chains to AI-enabled enterprises.
In our conversation, Retsef unpacks why prevention alone isn’t enough, how to weave resilience into the fabric of organizations, and why identity must expand to include machines and AI itself. This is Security Matters. Let’s dive in.
David Puner: Retsef Levi, professor of Operations Management at the MIT Sloan School of Management. Welcome to Security Matters. Thanks for coming onto the podcast.
Retsef Levi: Thank you for having me. I look forward to the discussion.
David Puner: Absolutely. I presume this will be the best part of your day, considering that we’re recording fairly early for this particular podcast. So let’s just jump in. There’s a lot of great stuff to cover here. You’ve had an unusual path—from pioneering Israel’s first cyberattack capabilities as a military intelligence officer to leading research at MIT. How have those experiences shaped the way you think about AI and what it takes to build truly resilient systems?
Retsef Levi: Thank you. To be honest, I cannot take too much credit for planning my career. It came about somewhat by coincidence and a run of events that turned out to be very fortunate for me. I grew up in Israel and, like any 18-year-old, joined the military. At the beginning, I was planning to actually go to Academic Reserve, do a bachelor’s degree in mathematics and physics, and then decided that at the age of 18 I wasn’t ready to commit.
David Puner: Mm-hmm.
Retsef Levi: So I was sent back to the military and was being triaged. Ironically, one of the people that triaged me told me at some point, “Hey, you are not good enough in mathematics, so we are going to send you to some course in Arabic,” because I actually studied Arabic in school. And I was like, what do you mean I’m not good at math? I think I’m pretty good at math. That sounded at the time like a horrible statement, but it turned out that it was a course that ultimately led me to join a unit that was focused on special operations within the intelligence forces in the Israeli army.
At some point, I was tasked with some kind of startup within the organization to think about what new technologies were emerging and what opportunities they gave rise to. I was fortunate to be part of the pioneer team that developed some cyberattack capabilities. Then, at some point, the military sends you to go to university. So I was sent to study at Tel Aviv University and initially decided, okay, I’m going to do mathematics because I had to show this guy.
David Puner: Back to the math.
Retsef Levi: That I’m actually pretty good in mathematics, and I’m going to do psychology together with that. I wanted to study the brain and had some thoughts about that. It was funny—there were some problems with the schedule of the two concentrations, mathematics and psychology, and some bureaucracy didn’t allow me any flexibility. So at some point I said, for the heck of it, I’m just going to do mathematics. I had to choose a trend, and I opened the book and looked. I said, “Oh, theoretical math—I’m not good enough. I’m not smart enough. Computer science—I don’t like coding.” Then I saw the term operations research, and I was like, wow. I started reading. I didn’t know what it was, and it looked interesting because it was about decisions and modeling systems, and from the description, it sounded very interesting. I just, by instantaneous intuition, said, okay, I’m going to take that. That sounds interesting, knowing nothing about it.
A year later, the military called me to go back in the middle of my studies. So after one year of my bachelor’s degree, between 26 and 27, I stopped for two years, went back to do a role in the military, and then came back between 29 and 31 to finish my bachelor’s degree.
At that point, I had a few options in front of me. I was offered a promotion to lieutenant colonel, but I decided that the roles offered to me were not interesting enough. I already had two kids at that time, and I decided—with the encouragement of my wife—to take a bet and go do a PhD.
At the age of 31, I came to Cornell, did a PhD in operations research, and somehow ended up at MIT. I started in 2002. The PhD finished in 2005, and by 2006 I was at MIT as a faculty member, and I’ve been there since then. It’s probably something I would never have imagined, to be honest.
Even throughout my career, the time in the military—or maybe outside academia more generally—really shaped the way I think about research in several ways. The first is that I try to motivate my research with things that are really practical—things where you can see how academic research touches actual decision-making or actual complex systems that operate under a lot of uncertainty.
Because if you think about most business systems and other systems in our life, they need to manage a lot of uncertainty. That’s always something that attracted me as an academic because I was also dealing with that as a practitioner when I was in intelligence. So that attracted me to think about very complex systems in different areas.
I’ve been working in the area of healthcare—a lot of work with healthcare systems, manufacturing biologic drugs, food and agriculture supply chains, water systems more recently, and retail logistics. Essentially, I can get interested in any complex system that operates under a lot of uncertainty and with great consideration to risk. I think that tendency probably originates from my background in the military and the way I think about uncertainty. It’s really through the notion of risk and intelligence, and I think there’s a lot of overlap between my academic work and my experience as an intelligence officer.
Now, AI to some extent seems like a new trend, but one way to look at it is as a new set of technologies that allow us to design work, make decisions, and design business and organizational systems that are just more intelligent. But many of the dynamics, including issues related to resilience, are not going to be fundamentally different. They’re going to be potentially new aspects, maybe important ones, but it’s not an entirely new set of problems. There are close connections between things I was thinking about before AI emerged as the next buzzword and how I think about it now in the context of AI.
David Puner: Really interesting—this thread about your curiosity in uncertainty and complexity and how it’s taken you down this path. AI and AI-enabled systems—do you think it enables uncertainty, or is it a way to cut through uncertainty, if it’s even on that thread?
Retsef Levi: I think the answer is both, and let me give a few examples that maybe illustrate why it’s both. In my mind, one of the things that is very important to managing risk is sensing.
If you think about a complex system and the human operators or decision-makers who have to make decisions within that system, and you analyze big mistakes—major failures of systems—more often than not, they stem from the fact that the operators did not have good situational awareness of the system’s state. They operated the system with one set of assumptions, while the system was actually in a completely different state.
That dynamic can happen at the individual level—when you think about medical errors in hospitals. Often, “I just assumed you needed this dose,” but in fact, the dose was changed, or someone else gave a contradictory drug. It can also happen at the system level, when people within systems operate based on one set of assumptions that don’t match reality. More often than not, that comes from insufficient sensing of the system itself and its environment.
In that respect, if you think about what AI brings to the table—the major enabler of this revolution we’re seeing—it’s the greater-than-ever ability to sense systems and create better situational awareness of those systems. That gives us a major set of capabilities to sense systems better than ever and understand the signals we get.
Let me give a few examples. Let’s think about retail. When I grew up, we would go to the local grocery store, and the person would write down what we bought, and we’d pay once a month. Then came credit cards and point-of-sale data, so you could track and sense what customers were buying. Today, we have sensing systems that can track everything a customer looks at, or even talks about near Alexa. You talk about something, and the next day you get a promotion for that something. These are examples of very advanced sensing—sometimes scary—because they basically know everything about us, and it allows the system to know what I’m interested in, what might fit me best, and operate in a much more effective and efficient way.
Now let’s give a completely different example. I’m wearing a Whoop. You’re wearing an Apple Watch, right? So think about that. That’s a sensor that monitors our health status or vital signs in a completely continuous manner—something that in the past could only be measured if you came to a doctor’s office and someone put a device on you.
In that respect, I think AI is bringing a lot to the table. The flip side is that AI-enabled systems are more complex. They bring more complexity because there are more sensors, more advanced algorithms that can be increasingly opaque. There’s a lot of data and inputs now being brought to bear that you don’t fully understand.
That complexity and that opaqueness bring a lot of risks. Like most things in life, there are pros and cons. There are things getting better, allowing us to manage risk and do things we never could before—like, if I’m a bank today, I can track for fraud in a much better, more efficient way with the capability of analyzing unstructured data. But on the other hand, we’re creating, maybe without realizing it, more and more complex systems that are really opaque, and that’s a major source of risk and something that can harm resilience.
David Puner: Resilience. So we’ve heard you talk about resilience and AI-enabled systems. What does that really mean and why is it so critical right now?
Retsef Levi: Okay, so let’s just step back. I think that the importance of resilience became very apparent to all of us, particularly during the COVID years, when many critical systems in our life did not work anymore and were seriously disrupted. Many of them were systems that we would’ve taken for granted—that they are there and will always operate.
Right. And since then, resilience became a major issue for essentially every company. But not only companies—most governments, especially the federal government in the US, are very concerned with resilience. People talk about the resilience of the medical supply chain, the resilience of semiconductor chips—all of these supply chains that enable critical aspects of our society and some of the most critical industries for national security and the wellbeing of all of us, like food and so forth.
Interestingly, I don’t think that we currently have a very formal or agreed-upon way to think about resilience. That’s something I’ve been thinking about also as an academic, because I believe that if you don’t have a clear, precise concept of what you mean when you talk about resilience, then it’s going to be very hard to manage it.
David Puner: Mm.
Retsef Levi: Where I ended up in my thinking about this is using the concept of what I call irregular operations to define resilience. So let me explain what I mean by that. If you think about any organizational system or business system, it has an operating scheme. This is the way the system operates on a regular basis, and that operating scheme, more often than not, has a lot of means to manage uncertainty.
So, for example, if you think about the operating scheme of an airline company, clearly it would be problematic if they didn’t have a lot of things to deal with, let’s say, weather. However, any operating scheme has what we call enabling operational conditions, or operational boundaries. These are the things you have to have in place to allow the system to operate in steady state successfully. Most of the time in regular operations, these enabling conditions are there and the boundaries are not broken.
But there are times when these enabling operational conditions are not valid anymore—they are disabled for one reason or another—and the operational boundaries of the system are broken. In that case, the system has to adopt an entirely different operational strategy and tactics, and resilience, in my mind, is the ability of systems to do this successfully. In fact, many risk management failures stem from the failure of the system to sufficiently manage irregular operations.
Now, the disruption could be quite broad—it could be a pandemic, a cyberattack, a supply chain disruption, a political or geopolitical event, a regulatory event. But more often than not, it turns out that the operational enabling conditions of systems are not necessarily the shiny, expensive things.
David Puner: Okay?
Retsef Levi: More often than not, they are the things you take for granted. My favorite example: until 2020, if you asked hospitals whether surgical gloves are an enabling condition for them, they’d tell you, “What are you talking about? This is the cheapest supply in the hospital.” We have all these expensive, shiny stuff—the buildings, the doctors—but until you don’t have surgical gloves, you cannot perform surgeries.
The lesson is that it’s not obvious for an organization to understand what the enabling operational conditions are that the organization relies on. In fact, I would argue that most organizations operate without having a good or comprehensive—and definitely not collective—understanding of the operational boundaries of their system and the enabling operational conditions they have. That’s a source of big risk. Why? Because if you don’t actively and purposefully know that something is enabling your operations, then you might not even sense that your system is shifting from regular operations to irregular operations. And coming back to situational awareness, if you don’t have that situational awareness, if you don’t have the plan to react, then you are more likely than not going to fail.
David Puner: So 2020 and what came to follow—that wasn’t a wake-up call for companies, or do you think they’ve come a long way and they’re just still not there?
Retsef Levi: Absolutely, it was a wake-up call. I think companies are now thinking about this more and more. They want to do it. Cyber is one of the things that you probably know they think about. But the fact that you want to think about something doesn’t mean you immediately get it right or think about it the right way.
David Puner: Right?
Retsef Levi: Usually companies and institutions want to buy something that will solve the problem. The fact of the matter is that you actually need to fundamentally change the way you manage your internal processes and really understand your system, because if you don’t understand your system, it’s going to be very hard to manage it and keep it resilient. I think some companies are doing it much better. Some led even before COVID in terms of resilience.
If you think about companies like Cisco or Apple, they understand their supply chain five or six levels upstream. They will validate the supplier of the supplier of the supplier, and they will have transparency and monitoring of all these nodes in their supply chain. They understand that sometimes a very upstream disruption can trickle downstream and affect their operations and their resilience. They want to know about this in a timely manner and act.
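To make the idea of multi-tier visibility concrete, here is a minimal sketch of how a supplier map can be walked tier by tier once the direct relationships are known. The companies and relationships in it are invented for illustration; a real map would come from supplier disclosures and third-party data.

```python
from collections import deque

# Hypothetical direct-supplier relationships: company -> list of its suppliers.
SUPPLIERS = {
    "acme-corp":       ["board-maker", "firmware-vendor"],
    "board-maker":     ["chip-fab", "resin-supplier"],
    "firmware-vendor": ["oss-crypto-lib"],
    "chip-fab":        ["silicon-wafer-co"],
}

def map_supply_chain(root: str, max_tier: int = 6) -> dict[int, set[str]]:
    """Breadth-first walk of the supplier graph, grouping nodes by tier."""
    tiers: dict[int, set[str]] = {}
    seen = {root}
    queue = deque([(root, 0)])
    while queue:
        node, tier = queue.popleft()
        if tier >= max_tier:
            continue
        for supplier in SUPPLIERS.get(node, []):
            if supplier not in seen:
                seen.add(supplier)
                tiers.setdefault(tier + 1, set()).add(supplier)
                queue.append((supplier, tier + 1))
    return tiers

if __name__ == "__main__":
    for tier, names in sorted(map_supply_chain("acme-corp").items()):
        print(f"Tier {tier}: {sorted(names)}")
```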
That’s the physical supply chain. Let’s take it back to cyber. If you think about many of the cyberattacks happening in recent years—at least many of the successful ones—they actually leverage what I call the digital supply chain. The attacker or the attack vector leverages the fact that our operations increasingly rely on digital products, on software, on digital products that have their own supply chain.
David Puner: Mm.
Retsef Levi: When it comes to our digital supply chains, I think most companies, if not all, are completely blinded. At best, you know who you’re buying an app from, but you don’t know how this app was developed. You don’t know what submodules were inserted or are being used by this app. Most companies don’t have a very good clue about their digital supply chain.
David Puner: So with digital supply chain—with so many components coming from different sources—how can security leaders even begin to map and secure that ecosystem?
Retsef Levi: I think it’s a very non-trivial challenge. It starts by asking questions and building best practices to understand your digital supply chain and having sensors to try and figure it out. For example, one sensor could be: I need to understand all of the vulnerabilities that emerge and which of them could potentially have an impact on me. Some of this might be done by companies themselves, some by service providers going forward.
Some of it should potentially be addressed by regulatory requirements, because currently we have far fewer regulatory requirements for basically describing what the product is. Take a completely different example: when you take a pharmaceutical drug, the vendor is obliged to tell you exactly what is in the drug; otherwise it won’t be approved. If you buy a car, you get a clear description of all the parts and where they’re coming from. When you buy a software package, more often than not nobody tells you where these things are coming from.
In fact, many software modules used today are developed as open source. Those of us who understand what this means know there are hundreds of thousands of individual developers contributing to that code. Most of them are completely anonymous, and they’re managed in ways that I don’t think anybody fully understands. If you think about the contrast between the examples I gave—drug, car, and then a software piece that might be responsible for some of the most critical operations and functions in your organization—that’s a little bit scary. It’s not surprising that many cyberattacks are leveraging that against us.
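As a rough illustration of what an “ingredient list” for software could look like in practice, the sketch below inventories the Python packages installed in an environment and flags any whose version appears in an advisory feed. The advisory entries are invented, and this only sees direct, locally installed components; a real SBOM and a real vulnerability database (such as an OSV or NVD export) would go much deeper.

```python
from importlib.metadata import distributions

# Hypothetical advisory feed: package name -> set of affected versions.
KNOWN_BAD = {
    "example-lib": {"1.2.0", "1.2.1"},
}

def inventory() -> dict[str, str]:
    """List installed packages and versions: a crude first cut at an 'ingredient list'."""
    return {d.metadata["Name"]: d.version for d in distributions()}

def flag_vulnerable(components: dict[str, str]) -> list[str]:
    """Return components whose installed version appears in the advisory feed."""
    return [
        f"{name}=={version}"
        for name, version in components.items()
        if version in KNOWN_BAD.get(name, set())
    ]

if __name__ == "__main__":
    comps = inventory()
    print(f"{len(comps)} direct components found")
    for hit in flag_vulnerable(comps):
        print("review:", hit)
```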
David Puner: Which is a little ironic considering open source software is often seen as a way to increase transparency.
Retsef Levi: Theoretically speaking, yes. But if you don’t understand the creation process and who got involved in that creation, then I don’t think you really get transparency, or at least not transparency sufficient to protect yourself from malicious actors.
David Puner: Are there practical steps that companies can take to use open source software safely, or to use the products that open source software creates?
Retsef Levi: I think it’s really an industry effort to change the standards of how these projects should be managed and the transparency they should give to the people who use them. Currently, I don’t think there are expectations—at the industry level or the regulatory level—and we will have to change that.
I’ve done active research on this, and we are trying to understand the different management practices of different open source projects: the composition of contributors, the people who participate in the development; the dependencies between these products—because these products sometimes use each other; how quickly vulnerabilities are detected; and what the exposure time is. What we are seeing suggests high heterogeneity between projects, and it cannot be that all of them are doing it optimally. We need to learn the best practices and then expect more and more well-thought-out standards.
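One of the metrics mentioned here, exposure time, can be made concrete as the gap between the public disclosure of a vulnerability and the shipping of a fixed release. The records below are invented for illustration; in a real study the dates would come from advisory databases and release histories.

```python
from datetime import date
from statistics import mean

# Invented records: (project, disclosure date, date a fixed release shipped).
ADVISORIES = [
    ("lib-a", date(2024, 1, 10), date(2024, 1, 14)),
    ("lib-b", date(2024, 3, 2),  date(2024, 6, 30)),
    ("lib-c", date(2024, 5, 20), date(2024, 5, 21)),
]

def exposure_days(disclosed: date, fixed: date) -> int:
    """Days users of the project remained exposed after public disclosure."""
    return (fixed - disclosed).days

if __name__ == "__main__":
    windows = [exposure_days(d, f) for _, d, f in ADVISORIES]
    for (name, _, _), w in zip(ADVISORIES, windows):
        print(f"{name}: {w} days exposed")
    print(f"mean exposure: {mean(windows):.1f} days")
```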
David Puner: You’ve described AI models or data—and even their developers—as opaque. How can organizations tackle that opacity? Is it realistic to make these systems more transparent or auditable? That seems like the answer, but is it realistic?
Retsef Levi: Before I answer that, I want to illustrate the challenge. If you think about traditional AI—the very basic one—people took some dataset and ran linear regression on it. You usually owned the dataset as the organization; you knew what it was. The model was maybe one parameter for each variable, and it was very interpretable. You could understand the data and the model.
Today, if you think about large language models, for example, they’re trained on the entire internet. You don’t have a clue what data is being used there, so you have complete opacity to the data, and it changes over time. They also have north of a trillion parameters, and nobody really understands how these things work. Then you take these models and either use them to do things in an automated way, or you let human operators interact with them in a way that is completely not controlled and not standardized.
Just to illustrate the level of opacity: this is a highly opaque system that almost by design has unknown boundaries. I’m not sure I have all the answers about how to do this. But I know some of the questions we need to ask ourselves.
First: how do we put guardrails that make sure even if something goes wrong—and we are not aware of it because it’s opaque—it will have a limited impact on our operations? Second: we need much more standardization and transparency about how people are using these tools. To some extent, most companies implementing copilots and agents don’t carefully monitor how people are using them. They leave it to you to figure out, or maybe they give some guidance.
We are creating environments where we will have human intelligence and agents, and AI. We need to start treating these AI agents as another “human” we need to worry about in terms of individual behaviors, the same way companies are worried about an employee doing something really bad or misusing privileges. I’m sure that’s one of the scenarios CyberArk tries to address. We’ll have to develop and expand these concepts to think about machine agents operating within our business environment. They will have permissions, roles, behaviors, and some will be uncontrolled or unexpected. We’ll need sensing systems to identify this as quickly as possible and prevent or contain the damage.
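A minimal sketch of what treating an AI agent as an identity might look like: each agent gets an explicit role with an allow-list of actions, every request is logged, and anything outside the role is denied and flagged. The roles, action names, and agent names are invented; a production system would sit on real identity and privilege-management infrastructure.

```python
from dataclasses import dataclass, field

# Hypothetical role definitions: which actions each agent role may take.
ROLE_PERMISSIONS = {
    "invoice-copilot": {"read_invoice", "draft_email"},
    "build-agent":     {"read_repo", "run_tests"},
}

@dataclass
class AgentIdentity:
    name: str
    role: str
    audit_log: list[str] = field(default_factory=list)

    def request(self, action: str) -> bool:
        """Allow the action only if the agent's role permits it; log everything."""
        allowed = action in ROLE_PERMISSIONS.get(self.role, set())
        self.audit_log.append(f"{self.name} {'ALLOW' if allowed else 'DENY'} {action}")
        return allowed

if __name__ == "__main__":
    agent = AgentIdentity(name="copilot-7", role="invoice-copilot")
    agent.request("read_invoice")    # expected behavior
    agent.request("transfer_funds")  # out of role: denied and flagged
    print("\n".join(agent.audit_log))
```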
One of the things that is fundamentally important in cybersecurity, but often missed, is that most of the focus is on preventing a problem—preventing an attack, blocking an attack. As an attacker, I know it doesn’t matter how much effort you put into prevention—you don’t want to make it extremely easy to attack you; you want to make it fairly hard—but…
David Puner: Right back to the resilience.
Retsef Levi: Yes. You have to assume that you’re going to be attacked successfully. If someone really wants to attack you, they will be able to attack you. Resilience lies more with the ability to detect fast—to sense fast—that you were attacked, and recover fast. Not necessarily being able to prevent.
Statistics today—and I think they’ll get worse as systems become more complex—show companies sometimes take, on average, six months to know they were attacked. That’s a staggering statistic.
David Puner: Right. And to that point, in cybersecurity, many focus on prevention. Why is that not enough, and what mindset shift is needed so companies also focus on detection and recovery?
Retsef Levi: The principle of these complex systems is that the attacker has an advantage because the attacker is the initiator. When you defend—when you are in prevention mode—you are static; you are responsive almost by design. Yes, you should do things to make it hard for the attacker, but experience suggests that given a sufficiently qualified attacker—and we haven’t even spoken about this, but these AI technologies are giving attackers tools they didn’t have in the past, both in sophistication and in scale—they can try many times, simultaneously, and increase their capacity.
So you have to assume that you are going to be attacked successfully—you’re going to be penetrated, something will be disabled. The question is how quickly you can detect that and how quickly you can recover, because finding out in two hours, three days, one week, or six months will make all the difference in the world for your organization’s impact and resilience. I cannot emphasize this more: being able to sense that you were attacked—that you are in irregular operations—and recover quickly with minimal impact is absolutely critical, and I don’t think we’re emphasizing it enough at the moment.
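The difference between hours and months shows up directly in the standard detection and recovery metrics. A small worked example, with invented incident timestamps:

```python
from datetime import datetime
from statistics import mean

# Invented incident records: (compromised, detected, recovered).
INCIDENTS = [
    (datetime(2025, 1, 1), datetime(2025, 1, 1, 2), datetime(2025, 1, 2)),   # caught in 2 hours
    (datetime(2025, 2, 1), datetime(2025, 7, 28),   datetime(2025, 8, 15)),  # caught after ~6 months
]

def hours(start: datetime, end: datetime) -> float:
    return (end - start).total_seconds() / 3600

if __name__ == "__main__":
    mttd = mean(hours(c, d) for c, d, _ in INCIDENTS)
    mttr = mean(hours(d, r) for _, d, r in INCIDENTS)
    print(f"mean time to detect:  {mttd:8.1f} hours")
    print(f"mean time to recover: {mttr:8.1f} hours")
```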
David Puner: So with attackers already using AI to scale and adapt, how should organizations respond? Is it really going to take AI to fight AI, or do the fundamentals still hold up?
Retsef Levi: I think it’s going to be a combination of things. And again, I don’t know where we’re going to end up. This is an ongoing evolution, right? You don’t have a crystal ball. The defenders do something, then the attackers do something in response, and back and forth. That’s how life works.
But one thing I’d like to emphasize: no matter how sophisticated the technology you leverage to defend, the best defense is understanding your system and having your operators understand the system they operate. The more they understand it, the better chance you have that they’re not going to make mistakes and will be able to detect these types of attacks. That goes back to a set of processes and behaviors you want to be disciplined about. Because even today, most of the attacks are essentially the result of human errors.
David Puner: Mm.
Retsef Levi: Not all of them, but most of them are. There is a significant element of human error, and typically this human error comes from lack of sensing, lack of situational awareness, lack of discipline and clear processes, or cultural issues where people just don’t follow what they’re supposed to follow.
And, by the way, having sensing about that—when people don’t follow what they’re supposed to do before something happens—that’s part of the sensing capabilities organizations have to put in place. Unfortunately, we often think about it as, “Okay, give me this silver-bullet product that will give me the solution.” That’s more often than not insufficient. It could be a pivotal element in what you’re trying to do, but if you don’t build around it a process and situational awareness of your system, these products are not going to be very effective.
To some extent, identity verification is part of creating this kind of transparency, if you think about it.
David Puner: As AI changes the threat landscape, do you see identity becoming an even bigger target? And what kinds of attacks concern you most?
Retsef Levi: Absolutely. We’ve tended to think so far about human identity—user identity—and when we said user identity, traditionally we meant humans. Now we’re going to have user identity associated with agents—AI agents—or machines. We will need to expand that process.
The other thing is, traditionally we think about identity verification as a transactional thing. “I need to log into the computer and you’re going to verify that I am who I am.” Going forward, identity verification will have to expand its notion to something continuous. I’m going to look at what you’re doing vis-à-vis what you are expected to do, and I’m going to try to highlight the deviations or prevent them. To me, that’s a broader notion of identity verification that is not just about who you are, but also about what you’re supposed to do versus what you’re actually doing. We’ll need to expand in that direction to address the challenges that are going to grow. Deepfakes and other things now at the disposal of attackers are creating a lot of new scenarios that I don’t think we even know all of at the moment.
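A minimal sketch of continuous verification in this spirit: instead of a one-time login check, each identity (human or machine) carries an expected behavioral profile, and observed activity is compared against it. The profiles, identities, and thresholds below are invented; real deployments would use richer baselines and signals.

```python
from collections import Counter

# Hypothetical historical profile: how often each identity normally performs each action per day.
BASELINE = {
    "svc-reporting": Counter({"read_db": 120, "export_csv": 4}),
}

def deviations(identity: str, observed: Counter, factor: float = 3.0) -> list[str]:
    """Flag actions that are new for this identity or far above its usual daily volume."""
    base = BASELINE.get(identity, Counter())
    flags = []
    for action, count in observed.items():
        usual = base.get(action, 0)
        if usual == 0:
            flags.append(f"{identity}: never-before-seen action '{action}' x{count}")
        elif count > factor * usual:
            flags.append(f"{identity}: '{action}' at {count}/day vs usual {usual}/day")
    return flags

if __name__ == "__main__":
    today = Counter({"read_db": 118, "export_csv": 45, "delete_table": 1})
    for flag in deviations("svc-reporting", today):
        print(flag)
```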
David Puner: With so much talk about automation, how do you figure out what should stay human and what’s safe to hand over to machines?
Retsef Levi: We are going through a new revolution in how we design work. Historically, the first revolution was the industrial revolution—the assembly line. The premise was: we’re going to take very complex tasks and break them into very small, repeated tasks that can be done at scale by relatively untrained people.
David Puner: Mm-hmm. Assembly line style.
Retsef Levi: Yeah, the assembly line. That drove one revolution in how we do work. The second revolution was the re-engineering or service-sector revolution, where we realized that when you provide service, the inputs and outputs are unpredictable. You need to train people responsible for outcomes—sometimes called caseworkers—people responsible for an outcome. You’re a guest in my hotel; my outcome is that you’re going to be happy. To accomplish that, I need to coordinate and do tasks, sometimes supported by experts.
Now, the current revolution we’re going through is again changing the way we design work. First, it automates a lot of tasks—not only the simple ones on the assembly line, which are the easiest to automate because they’re very repetitive and predictable. That’s the low-hanging fruit. But now, with more advanced tools, we are going to automate a lot of caseworkers. We are going to either replace them or put agents alongside them to help with that integration.
A third change is sensing. We are going to have sensing that is far more advanced than we ever had, using many modalities of sensors we put in place to understand our complex systems. The challenge is, as we do this, we are losing transparency into how the system actually works. When you put agents in place—especially if we’re not careful and don’t know what we’re doing—these copilots and agents, and the way humans use them, are going to create a lot of unexpected patterns in our system.
So the key will be getting more order and discipline into how we use these tools, then putting the right sensing systems over them so we have a clear understanding of what we expect to happen. That’s one key element. I don’t think we’re currently designing or formulating an understanding of what we expect to happen, because that’s really about process design.
The second thing is putting sophisticated sensing capabilities in place to understand when there is deviation from what is expected. Without those two elements, you’re going to be very challenged to manage both risk and resilience.
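One simple way to encode “what we expect to happen” is as an ordered workflow, with observed event traces checked against it for missing, unexpected, or out-of-order steps. The workflow and trace below are invented for illustration:

```python
EXPECTED_STEPS = ["receive_request", "retrieve_policy", "draft_response", "human_review", "send"]

def conformance_issues(trace: list[str]) -> list[str]:
    """Compare an observed event trace against the expected workflow order."""
    issues = []
    cursor = 0
    for event in trace:
        if event not in EXPECTED_STEPS:
            issues.append(f"unexpected step: {event}")
        elif EXPECTED_STEPS.index(event) < cursor:
            issues.append(f"out-of-order step: {event}")
        else:
            cursor = EXPECTED_STEPS.index(event)
    issues.extend(f"missing step: {s}" for s in EXPECTED_STEPS if s not in trace)
    return issues

if __name__ == "__main__":
    observed = ["receive_request", "draft_response", "send"]  # the review step was skipped
    for issue in conformance_issues(observed):
        print(issue)
```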
David Puner: There’s a lot of concern about humans losing skills as AI takes over more tasks. How real is that risk? And from your perspective, speaking to young future leaders in the classroom at MIT, what can we do to prevent it—and how are they looking at that risk?
Retsef Levi: I’m glad you brought it up, because I’m extremely concerned. At a high level, I call it human capability erosion. One thing that is misleading—there are studies that say, “If I take one of my employees and put them with a copilot, I increase their productivity.” That’s great. Amazing. But what this overlooks is: if you took an experienced employee who knows how to do the work from experience and then you add augmentation with an agent, that’s one thing. It’s fundamentally different than if that person was initially trained with the presence of that copilot.
My joke is we all use Waze and navigation apps. Very useful. They enhance our ability to get from one place to another. My joke is that if Waze goes down tomorrow, there will be some kids who won’t be picked up from school because their parents won’t know how to get there. It sounds extreme, but there’s a core of truth.
When I train people, I need to be very careful about the difference between augmenting skills—enhancing performance—and building capability. We confuse performance and learning. Think about what you understand the most in life. I’d bet it’s the things you struggled to understand—where you had to make an effort, tried once, it didn’t work, you failed, and ultimately overcame the difficulty. Same with sport: high performance comes from trying again and again, failing, and the effort involved.
Many of these tools remove the effort element. “I have a question, I type it into the chat, it gives me the answer, I feel good,” but I’m not learning. I’m shutting down my brain to some extent. We have to think strategically about how we use these tools and not confuse training and building human capability with just enhancing performance. They are not the same concepts.
To make the point more relevant here: the culture of leadership is affected—not positively—by digitization. People feel they don’t need to go to the field and see their system with their own eyes. As a leader in practice, there was never a time I went to watch a complex phenomenon with my own eyes and didn’t come out with a major insight. We say, “Let’s use data,” but we forget most data are created by existing systems and processes. If you don’t understand those, you don’t understand what’s going on.
“I’ll sit and look at numbers, analyze them, apply sophisticated models.” That’s important and powerful—I do it all the time. But it has to be contextualized. The only way to get context is by going and watching, getting a sense, the intuition, at the tips of your fingers. We have fewer leaders and workers with that sense. The risk is that by injecting these systems, we will lose that touch.
David Puner: Do you think part of the challenge is that people, and maybe even organizations, struggle to see the full picture—to peel back all the layers and understand both the objective facts and the more nuanced elements at play?
Retsef Levi: I would not use the words subjectivity and objectivity because they almost suggest an assessment. I want to say something different: we need to understand what humans and machines are good at.
David Puner: Hmm. Okay.
Retsef Levi: Machines are good at many things. They’re fast, multitasking, they don’t get tired or emotional, they can process unstructured data. Humans are far better than machines at understanding nuances and context. If you put them in a situation where they lose that ability—something you can only do when you have a sense of the system you operate—you will lose a critical capability of your system. That is critical to resilience. A lot of the time, resilience is understanding that something in the context changed. This is not the same context we usually have; we need to do something different. Machines can help, but there will always be points where only humans can figure it out. My concern is we are losing that capability. That would be quite scary, and organizations in that situation will have a problem.
David Puner: You’ve said addressing AI risks will take companies, governments and academia working together. What does that look like in practice—standards, alliances, something else?
Retsef Levi: I don’t think we have a good answer. When you think about these developments and resilience more broadly, one interesting thing is resilience is in everyone’s interest. You can’t find stakeholders who think resilience is not important. The problem is it’s not obvious who pays for resilience. Most economic models we have for markets and ecosystems don’t have that concept well defined.
So one aspect is: how do we think about resilience at the industry, market and national levels? What should government require as regulatory expectation? What should be regulated by industry? What should be standards? Who pays? How do we price it? I don’t think those concepts are well thought out yet. That’s a ripe area for collaborative thinking across industry, government and academia.
Another thing: many of the questions we raised require sharing data across stakeholders. With resilience—and specifically cyber—to improve the system you need to share data. For obvious reasons, that’s a challenge; nobody wants to share their own attack data. We need brokering mechanisms to share data safely. Academia can play a role as a trusted partner. And there are many questions for which I don’t know the answer, and I feel nobody else does either. We need to figure it out together. Collective thinking from different perspectives will be critical.
David Puner: Do you think market forces alone can or will ever incentivize resilience, or will it require regulation—or something like cyber insurance requirements—to push organizations to invest in it?
Retsef Levi: My guess is that some elements of regulatory intervention and more explicit expectations about what you should and should not do will have to play a role. To some extent, this is one of those risks where, if you look at an individual company, the absolute risk is small. If you look at an industry, the absolute risk is almost certain, and the disruptions and impact could be very high. We need more centralized intervention—with limits, of course. We don’t want to overregulate and inhibit innovation.
There are also issues related to data transparency. A few large players have taken over what I would consider the next natural resource—data—and they control it in many ways. We need to worry about that too. There’s a complex set of issues we have to think about deeply, and we need all the stakeholders at the table to figure out the answers together.
David Puner: Really fascinating stuff. So much we know and don’t know—probably more that we don’t know. Thank you, Retsef, for coming on the podcast. Really great to speak with you. Hope we can do it again sometime down the road.
Retsef Levi: My pleasure. Anytime. Thank you very much for the opportunity to share some thoughts.
David Puner: All right, there you have it. Thanks for listening to Security Matters. If you like this episode, please follow us wherever you do your podcast thing so you can catch new episodes as they drop. And if you feel so inclined, please leave us a review. We’d appreciate it very much, and so will the algorithmic winds.
What else? Drop us a line with questions or comments. And if you’re a cybersecurity professional and you have an idea for an episode, drop us a line. Our email address is Security Matters podcast, all one word @ cyberark.com. We hope to see you next time.