August 27, 2025
EP 14 – Beyond secrets: Securing the future of machine identity

In this episode of Security Matters, host David Puner sits down with Matt Barker, CyberArk’s VP and Global Head of Workload Identity Architecture, for a deep dive into the exploding world of machine identities and the urgent need to rethink how to secure them. From his journey co-founding Jetstack and creating Cert Manager to leading CyberArk’s efforts in workload identity, Matt shares insights on why secrets-based security is no longer sustainable—and how open standards like SPIFFE are reshaping the future of cloud-native and AI-driven environments.
Discover how machine identities now outnumber humans 80 to 1, why leaked secrets are a “hacker’s buffet,” and how workload identity is becoming a cornerstone of Zero Trust architecture. Whether you’re a CISO, platform engineer, or just curious about the next frontier in cybersecurity, this episode offers actionable advice and a compelling vision for securing the age of AI agents.
It’s a scenario that’s becoming all too common. You’re the cybersecurity lead at a major enterprise. One morning your team discovers a single API key leaked on GitHub. Months old, never rotated, and still valid. Within hours, attackers slip inside your environment, not through a human password, but through a forgotten machine credential.
And with machine identities now outnumbering humans by more than 80 to one, that single key isn’t just a crack in the door—it’s a superhighway across the workloads, containers, and AI agents powering your business. This is today’s front line: secrets sprawling unchecked, workloads multiplying faster than security teams can control, and even the most seasoned experts struggling to keep pace.
To help us make sense of this shifting identity landscape, today I’m joined by Matt Barker, Vice President and Global Head of Workload Identity Architecture at CyberArk. Matt co-founded Jetstack, created the open-source standard Cert Manager, and now leads CyberArk’s efforts to secure workload and machine identity at scale.
Let’s do this. Here’s my conversation with Matt Barker.
David Puner: Matt Barker, Vice President and Global Head of Workload Identity Architecture at CyberArk. Welcome to the podcast, Matt.
Matt Barker: Thank you for having me. Pleasure to be here.
David Puner: Absolutely. Where are you today?
Matt Barker: I am in Central London, a place called Kensal Rise. If anyone has watched the movie Notting Hill, I’m about a 10-minute walk from there.
David Puner: We are excited to have you here. So let’s start things off with a quick career dive. In the last 10 years, you co-founded Jetstack and created Cert Manager. Now you’re leading workload identity initiatives here at CyberArk. It seems like a bit of a whirlwind. How’d you get to where you are today? Today being the pinnacle—a guest on the Security Matters podcast.
Matt Barker: Yeah, I mean, pure luck. But obviously this is what I’ve been working toward—to be here on this podcast, as you know. But in all seriousness, yeah, it’s a bit of a strange start because I actually began my career selling books door to door. It just so happened that I managed to break the sales record of a director for a small startup in the UK called Canonical, which created an operating system called Ubuntu.
Ubuntu grew to become essentially the operating system of the cloud. And so it was really just by chance that I got into open source, although I was kind of interested in getting into software technology because I thought that’s where the future might lie. And since then, it’s just been me looking for various different ways of trying to pick an area that I thought was interesting and one that might have a bit of growth potential.
So I went from Linux to NoSQL with MongoDB. I was a very early employee of MongoDB. Then I saw the growth of containers, open source, and Kubernetes, and I felt so strongly about that that I thought I had to start a company around it. And that’s where Jetstack came from.
David Puner: What inspired you to focus on workload identity and machine identity security? Are there any pivotal moments in your career that shaped your approach to cloud native security?
Matt Barker: Well, I didn’t mean to get into cloud native security at all. I desperately wanted to get into cloud native, but I didn’t want to get into cloud native security. I saw security as the domain of the people who always told me no: I couldn’t do that project with Linux or NoSQL or open source or MongoDB, or whatever it happened to be.
What I was really interested in was the open-sourcing of Docker and the container standard. And then my thought was that if containers were used extensively, you might have thousands, if not millions, if not billions of containers to run. How do you actually manage and organize all those containers?
So when Google came along and open-sourced their internal orchestrator, which was called Borg, as Kubernetes, I was blown away. Back in 2014 they were starting and stopping 2 billion containers a week. If you think about it, every time you spin up Gmail, it’s in a container being orchestrated by Kubernetes. And I thought, if that ends up happening across the rest of the world of IT, then there’s going to be a huge amount of demand around Kubernetes and all of the services that go with it.
Starting Jetstack was literally just to help people use Kubernetes and understand how to scale and build it properly. But in the process, we realized there were a lot of gaps in how people thought about securing Kubernetes. So, really just by chance, we decided to focus specifically on certificate management within that.
It was actually an engineer’s interview project to automate certificates inside Kubernetes. He did it over a weekend, came back to us on a Monday, we gave him the job, and then when he joined a month later, we decided to open source it. Relatively quickly, it grew to become the de facto standard for managing and automating certificates in Kubernetes. But it was never something we had originally set out to do.
David Puner: Cert Manager is still getting millions of downloads a day, right? And I assume with AI on the rise, it’s probably growing on a daily basis.
Matt Barker: Yeah. Christian was the guy who originally started it with a project called kube-lego. Originally it was specifically for Let’s Encrypt, and then we made it CA agnostic.

James Munnelly, one of our early engineers, took on that project and relaunched it as Cert Manager. He sent me some recent stats that said it’s downloaded 18 billion times a year. That’s quite phenomenal when you think about the amount of data that takes and how often it’s downloaded.
Subjectively, from my conversations with enterprises, banks, and retailers, every Kubernetes cluster they have is more than likely going to have Cert Manager in it. I’d say it’s in eight or nine out of every 10 Kubernetes clusters. I don’t foresee that download number slowing down, and it’s certainly one of the first cluster add-ons you’d install once you’ve got Kubernetes up and running.
David Puner: You’re the global head of workload identity architecture here at CyberArk. You’ve already mentioned the word “workload” a few times. Let’s level set here at the beginning of the conversation. Workload and workload identity—what is it, and why is it such a critical concept in today’s IT environments?
Matt Barker: For the past 20 or 30 years, we’ve thought long and hard about how we secure humans and human access to systems inside our organizations. If you think about your corporate network, you have two kinds of actors on it. The first is humans. The second is machines.
You can think about machines in two categories. Physical machines: laptops, servers, cell phones—things that connect to your network. Then you have another category of machine, which is what we call a workload. A workload is essentially a piece of software that runs on that machine and connects to another piece of software. You could think of it as a container, a virtual machine, a Kubernetes cluster, or a serverless function.
As these workloads explode, one of the foundational ways we secure them is through certificates and public key infrastructure, or PKI. What was amazing about creating Cert Manager for Kubernetes is that it put us in a tremendous position in the center of this ecosystem of machine identity—even though I didn’t fully realize it at the time.
We developed a partnership with Venafi, specifically because they had a lot of customers trying to secure traditional on-prem data center machines using the Venafi suite to automate, manage, and rotate certificates.
David Puner: Venafi has of course since been acquired by CyberArk, and here you are.
Matt Barker: And they had a lot of those customers trying to connect those machines and use Kubernetes in the cloud. So we essentially created a connector between Cert Manager and the Venafi suite. It was at that moment I started to realize—thanks to speaking with them—that the rise of machines was going to happen, and the way we thought about protecting and securing those machines would become more and more important, simply because it’s something we’d ignored.
We’d thought a lot about securing humans on the network, but we hadn’t thought as much about securing machines. So when Venafi came along and said, “Would you like to be acquired by us?” it made complete sense to me. This is where the future of protecting our organizations will go, because machines, to a certain extent, are probably going to outweigh the number of humans inside the company.
And if AI ends up becoming as important as we think it will—and becomes a little more autonomous, and agents actually do take off—then hackers are likely going to take advantage of them. So it’s probably a good bet to think about securing machines as the next frontier of how we secure an enterprise.
This was also at a time when the network was being completely and utterly obliterated. We used to put a firewall around our network and secure the perimeter. Whereas during COVID, when you’ve got a personal laptop connecting to a corporate network, how can you trust that machine? How can you trust that person? That meant breaking down the perimeter and thinking more about the individual machine itself—or the workload—and how we secured that, rather than keeping everything hoarded and protected inside our castle wall.
David Puner: We’re talking a lot about machine identities and the explosive growth—and how they now outnumber human identities by more than 80 to one. What’s driving this explosion, how does it tie back to workloads, and why is it such a challenge for security teams?
Matt Barker: To start with what’s driving it, we talk a lot about volume, velocity, and variety of workloads. I would say I’m partly the problem for the growth of workloads, because by enabling people to adopt cloud-native technologies—which means microservices architecture, public cloud with ultimate scale, containers, functions, and serverless—you’ve basically got a huge amount of consumption and growth in cloud-native workloads and use cases.
It was growing fast already. A few years ago we were talking about 20 to 1 for machine identities to humans. Then it was 45 to 1. In the past year or so, it’s gone to 82 to 1. I think that’s partly down to the number of AI agents, because we’re starting to unleash a swarm of AI agents, and each one of those agents needs to have an identity as well.
This is not going to slow down. I don’t know what ratio of workload identities to human identities we’ll be talking about in the next five years, but I can imagine it’s going to be significantly higher than 82 to 1, which already sounds quite high to me. The next huge acceleration in the number of workloads and workload identities will come from AI agents and everything that entails.
David Puner: So how does workload identity help secure AI agents, and what risks do they pose if not properly managed?
Matt Barker: At this point, we have to think about how you actually secure a workload itself, and what identity means to that workload. Most of the way we secure our machines is similar to how we’ve secured human identities—with the equivalent of usernames and passwords—and we refer to them collectively as secrets, whether it’s API keys, service accounts, or personal access tokens.
You put all those secrets in a secrets store, and every time that workload needs access to some third party, it retrieves the secret and uses it. The problem with secrets is they are hard to manage and hard to rotate. They get spun up and used in places the organization doesn’t know about. They get shared on Slack. They get hard-coded in—and unfortunately leaked—and they end up on GitHub. Secrets are everywhere, and they are very hard to manage, control, secure, and protect.
What I’m thinking about is extending all the work we’ve done with PKI and certificate lifecycle management, and building on those foundational pieces. Rather than using a secret to secure access to a workload, we’ll actually apply an identity—an open-standard identity—to the workload as it’s born. We’ll use the attributes of the workload so that we can trust it, and then we’ll use the identity to secure access to another workload.
So rather than managing all these different secrets and running the risk of leaking them, you can trust that the identity of the workload can talk to another workload, based on the inherent identifiers or markers that that workload has. Once you’ve trusted the workload—attributed it, registered it—you let it go do what it needs to do, and you use identity, not a secret, to secure access.
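The contrast Matt draws can be sketched in a few lines of Python. This is a toy illustration, not a real product API; the store contents, attribute checks, and SPIFFE-style identity string are all hypothetical, chosen only to show the shape of the two patterns: handing out a long-lived secret versus attesting a workload’s attributes and issuing it an identity.

```python
# Toy contrast (hypothetical names throughout) between secret-based and
# identity-based workload access, as described in the conversation above.

# Traditional pattern: a long-lived secret sits in a store; anything that
# can name the key can fetch it, and a leak stays valid until rotated.
SECRETS_STORE = {"billing-api-key": "sk_live_abc123"}

def access_with_secret(workload: str) -> str:
    # The caller's own identity is never checked here.
    return SECRETS_STORE["billing-api-key"]

def attest(attributes: dict):
    # Identity pattern: verify the workload's inherent attributes at birth
    # (e.g. where it runs, what image it was built from) and issue a
    # short-lived identity instead of handing out a secret.
    if (attributes.get("namespace") == "payments"
            and attributes.get("image", "").startswith("registry.internal/billing")):
        return "spiffe://example.org/payments/billing"
    return None  # unknown attributes: no identity, no access

identity = attest({"namespace": "payments", "image": "registry.internal/billing:v3"})
```

In the identity pattern, a workload that cannot prove the expected attributes simply never receives a credential, which is the property that makes leaked-secret reuse so much harder.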
David Puner: You’ve described the current state of machine-to-machine communications as “identity hell,” driven by secret sprawl—as you were just getting into. What do you mean by that, and why are traditional secrets-based approaches no longer sustainable?
Matt Barker: If you struggle to manage your own usernames and passwords for websites or email, think about a machine that has to talk to a hundred different services inside an organization. Consider the number of equivalent passwords it has to remember and use at the right time—and you have to make sure they’re rotated correctly. It’s a lot of manual effort, and a lot of these secrets stores get used as dumping grounds. They get hard-coded, shared on Slack, and inadvertently leaked.
Now think about the number of machines increasing day by day. We’ve gone from 40 to 1 to 80 to 1, and maybe it could be a couple hundred to one. Multiply that by the number of different environments it runs across, each with slightly different ways of identifying and securing access to the workload. Then add some autonomy with AI agents: the agent will want to talk to any of your environments, workloads, or services at any time.
Think about the manual effort that goes into that—and the number of secrets that already leak in today’s situation. If you increase the intensity of workloads and the demand for access to all these services, it becomes almost impossible to manage. Most organizations have what a colleague calls Stockholm syndrome: they keep using secrets because that’s what they’ve always used. It becomes part of the day-to-day process: create the secret, put it in a store, fetch it, and use it.
What I’m proposing is that we radically improve the developer experience by using the workload’s identity to secure access. It shouldn’t matter whether it’s a data-center VM or an S3 bucket in AWS—we should apply a universal, consistent identity across environments, minimize the use of secrets, and start using identity itself for access between workloads.
David Puner: We’ve seen incidents like the 2024 U.S. Treasury breach, where a compromised machine identity—an API key, in that case—was exploited. How do these events highlight the vulnerabilities of secrets-based security models, and what lessons can security leaders take from them?
Matt Barker: Shout out to our partner GitGuardian. They do an annual State of Secrets Sprawl report. They said that on GitHub they found nearly 24 million leaked secrets—roughly a 25% year-over-year increase. They go back and check those secrets, and more often than not they’re still valid two years later—just sitting out there.
If you’re a hacker trying to get access to a corporate network, you could spear-phish someone in support—that might work—or you could just go to GitHub, find a leaked API key, do a bit of work, and you’re in. That’s what’s happening with these hacks we’re seeing almost weekly. An ex-CISO of a large European telco who now consults for CISOs refers to these leaked secrets on GitHub as a buffet for hackers: pick a secret, do a bit of work, and before you know it, you’re inside.
Most organizations don’t realize how big this problem is. There’s been a lot of buzz around the non-human identity space—startups that scan environments to find machine identities, understand who has access, what they’re used for, and then give you risk and context. But that’s only half the problem. You need to discover those secrets—and then reduce the number of them. Instead of using secrets, you can move to something else, like SPIFFE.
David Puner: Mm-hmm.
Matt Barker: And it’s becoming a really scary, horrible problem. The State of Secrets Sprawl report also said you’re 40% more likely to leak a secret if you’re using AI-generated code. As we unleash AI agents and empower business analysts using low-code tools to create apps, we’re likely going to see a whole lot more leaks.
David Puner: So let’s talk about SPIFFE for a minute—the Secure Production Identity Framework For Everyone. That’s the longer name, quite a name. It seems to be a cornerstone of workload identity. What is it, and why is it gaining traction among major cloud providers and enterprises?
Matt Barker: SPIFFE has been around for quite a long time—I’m saying maybe around 2016. What most people don’t know is that SPIFFE was created by the same guy that created Kubernetes. So it’s actually from the same mind, as it were. And for those who don’t know, Kubernetes was the open-source version of Google’s internal orchestrator called Borg.
SPIFFE is essentially the open-sourcing of the identity framework they had inside Google, called LOAS—I hope I’ve not got that wrong. My thinking was that when Kubernetes was open-sourced, most people would worry first about how to build and run their cloud-native architecture on Kubernetes. If it became successful, you’d have billions—if not trillions—of containers running in multiple cloud environments. At some point, you’re going to have to think about securing them. So when SPIFFE came out, I was quite excited, thinking, “This will be something we end up applying to secure all those different workloads.”
As it turns out, we didn’t—and not much actually happened with the SPIFFE project up until more recently. I think most people were more focused on building the compute and setting up the use cases. Honestly, collectively, it feels like the cloud-native community was mainly focused on just building the stuff for the first seven or eight years. It’s only in the past couple of years that we’ve really thought about securing it.
So SPIFFE is essentially an identity standard, and it was donated to the CNCF. It’s a graduated project—just like Cert Manager, our project, which was donated to the CNCF and is also graduated.
What SPIFFE enables is a universal identity standard. It has this concept of an SVID—a SPIFFE Verifiable Identity Document—and a consistent way of naming a workload so its identity becomes unique and universal. Every time you spin up a workload, it gets its own identity, and SPIFFE is the identity standard you apply to that workload.
Once you’ve registered your workloads and they’ve got their SPIFFE identity, you can start to trust the workload and that identity. And that SPIFFE identity is actually built on top of what you’re already doing with your certificates and your PKI infrastructure. So it means you don’t have to learn a whole new thing or implement a whole new system. You can use what already exists inside your organization—your preexisting PKI infrastructure.
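The naming scheme Matt mentions follows the published SPIFFE ID format, `spiffe://<trust-domain>/<path>`. A minimal sketch of parsing such an ID, assuming only the basic URI shape (real SPIFFE libraries enforce many more rules about allowed characters and length):

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str):
    """Split a SPIFFE ID into (trust_domain, workload_path).

    Simplified sketch: checks only the scheme and the presence of a trust
    domain, per the spiffe://<trust-domain>/<path> format.
    """
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe" or not parsed.netloc:
        raise ValueError(f"not a valid SPIFFE ID: {spiffe_id!r}")
    return parsed.netloc, parsed.path

# The path is free-form; this Kubernetes-style namespace/service-account
# layout is just a common convention, not required by the standard.
td, path = parse_spiffe_id("spiffe://prod.example.org/ns/payments/sa/billing")
```

Because the trust domain is part of the name, the same scheme works unchanged across clouds, data centers, and clusters, which is what makes the identity universal.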
What we’re saying is: go apply that SPIFFE identity and use that identity to secure access. If you can’t use identity, then use a secret—because we realize you can’t replace all of your applications and give SPIFFE to all of them on day one. But over time, by applying SPIFFE identity and using that to secure access rather than a secret, the portion of workloads you secure using identity will increase, and the number secured with a secret goes down. Ideally you want the option to do either, which is why we’ve created this new solution: the Secure Workload Access solution.
You asked why it’s picking up momentum now. I think it’s because of these hacks. There’s a lot of talk around zero trust and the CISA framework for that. And now we’re at a point where cloud-native has become so successful—there are so many workloads running all over the place, so many platform engineers who’ve built containers and Kubernetes architectures—that most organizations don’t really know where all their workloads are or what they’re doing.
I think everyone’s just standing still and thinking, “Oh my word. We need to understand where all our workloads are. We need to understand all of their identities. We need context for those identities, understand the risk profile. We need to try to reduce the number of workload identities we’ve got, and then we need a really consistent, strong way of applying identities so we can use that to secure these workloads going forward.” I think it’s really a matter of timing.
David Puner: So, zero trust. Workload identity is a key enabler of zero-trust architecture. How does it help organizations implement zero trust, and why is the alignment so critical for security leaders?
Matt Barker: The fundamental concept around zero trust is that you never trust; you always verify.
David Puner: Mm-hmm.
Matt Barker: Essentially, when the perimeter of the organization is gone—and you’ve got a workload that used to be inside your data center, inside the firewall, but now it’s somewhere in the cloud or on someone’s laptop at home—zero trust becomes the model. It’s a very confused concept and a bit of a buzzword; I feel like I could ask ten different people what zero trust means and get ten different answers.
What I always go back to is CISA—the Cybersecurity and Infrastructure Security Agency. There was a mandate by President Biden a couple of years ago that all U.S. agencies should start to implement a zero-trust strategy. CISA created that zero-trust framework and strategy. It’s like a strategy house with various pillars, all underpinned by governance and observability, and the first pillar is identity.
Identity doesn’t cover everything, but it applies to so many of those other pillars—devices, networks, applications, workloads, and data. One of the best things about SPIFFE is that it’s consistent and universal, and it can be applied to any of your workloads. You use it to register, attest, and understand where all your workloads are.
David Puner: How are compliance requirements influencing the adoption of workload-identity solutions?
Matt Barker: I’d say compliance is not the major reason people come to us about adopting workload identity. The major reason is they realize they have a problem: they don’t know where all their workloads are; they don’t know where all their workload identities are; they don’t know who has access; they don’t know how risky they are; they don’t know if they’ve been leaked. They realize they need to understand all of that—and then adopt something that will make things better going forward.
That’s probably the number one reason. It’s seeing the Department of the Treasury attack and understanding that just one leaked secret can cause tremendous damage. The threat of that is the main driver. The other reason is inconsistency. We’re in a multi-cloud world, and all these environments have different ways of identifying and securing workloads. Rather than having five different ways—and managing five different methods—they want one way. They want a universal identity standard or identity as a service, as some call it.
Another thing people bring up is the difficulty of securely connecting an on-prem data-center workload to a cloud workload. In the data center there isn’t a particularly good, fundamental concept of identity, so they want to adopt SPIFFE, apply identity to machines in the data center, and use that to secure access to cloud.
Then there’s the secret-zero problem. Think of it like the master key. You create a secrets store and put secrets in it—so you need a master secret to enter the store. Where do you put that? Probably another secrets store. Where do you put the master key for that? Another store. This is why you’ll see a lot of references to turtles in the workload-identity community—because it’s “turtles all the way down.” The book on SPIFFE is actually called Solving the Bottom Turtle, which I relate to secret zero. That’s why the turtles show up.
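The regress Matt describes can be made concrete with a toy model (not a real vault API; all names here are invented): every store that protects secrets needs its own unlock key, which in turn has to live somewhere.

```python
# Toy illustration of the secret-zero regress: each secrets store's master
# key is "protected" by yet another store, with no bottom to the chain.

class SecretsStore:
    def __init__(self, name: str, master_key_location: str):
        self.name = name
        # The key that unlocks this store lives somewhere else entirely.
        self.master_key_location = master_key_location

def chain_of_stores(depth: int):
    # Each level "solves" the problem by adding another store below it.
    # Platform-attested identity (the SPIFFE approach) breaks the regress
    # by deriving trust from a workload's attributes rather than from a
    # stored key that itself needs protecting.
    stores = []
    for i in range(depth):
        stores.append(SecretsStore(f"store-{i}", f"store-{i + 1}").name)
    return stores

chain = chain_of_stores(3)
```

No matter how deep the chain goes, the bottom store’s key still has to be bootstrapped out of band, which is exactly the bottom-turtle problem the book title refers to.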
David Puner: How is that book? Is it a good read?
Matt Barker: Yeah, yeah, yeah. I’m sure you’ll love to read that on a Friday evening with a nice glass of port—or whatever it is you drink.
David Puner: All right. Sounds good. I wanted to get back to the community and a point we were on earlier: how do you see Cert Manager and SPIFFE adapting to support identity for autonomous systems? And what role does open source play in keeping those systems secure and interoperable?
Matt Barker: I’m assuming AI agents. I like to think of an AI agent as a workload. Earlier we said a workload is a piece of software that talks to another piece of software—so apply a SPIFFE identity to it, register it, attest it, and treat it like any other workload in your environment.
The second thing is that an AI agent is going to go off and start talking to third-party systems on your behalf. It has attributes that make it similar to a human—it wants to access Salesforce, Jira, whatever else. That, to me, is a different problem to solve, more on the human side. What’s going to be interesting is how workload identity plays with the human side of what an AI agent needs to do.
There’s a lot of discussion in the community about how we best support and enable those AI agents to talk securely to those systems. It’s been fascinating to see vendors coalesce to try to solve this—firstly through the IETF working groups (we’re involved in WIMSE, the Workload Identity in Multi System Environments working group), and also at places like my Workload Identity Day Zero event, which I run around KubeCon twice a year in Europe and North America. There are always a lot of talks about how we solve this.
What’s great is that the identity industry is starting to coalesce around using SPIFFE for the identity of the AI agent itself, and then using other preexisting identity standards for the human side—OAuth is rapidly becoming a standard there. I think the combination of these open standards, together with the value vendors add on top, will end up being the way we think about securing AI agents.
David Puner: You speak regularly about AI regulation and open source. What do you think are the most exciting or concerning developments in AI right now—especially as they relate to identity and trust in machine-to-machine communication?
Matt Barker: The exciting thing is that the big foundation models—the famous ones—are proprietary: you think ChatGPT, Gemini, Claude. They come out of huge, heavily funded organizations. But only four or five months ago we had the DeepSeek moment, where we realized a group of Chinese researchers created a fully open-source model that was almost as good—90% as good—as some of these tremendously expensive foundation models.
What I find fascinating is that, while many models in use right now are proprietary and foundational, you can’t write off open source. My hypothesis a few years ago was that you’d end up with a long tail of open-source models used for specific purposes inside the enterprise. DeepSeek was one of them. You’ve got Llama from Meta that was open-sourced, and Qwen is another one—another Chinese model that’s very strong.
As an open-source guy, it’s fascinating to see the interaction between open-source models and the big proprietary ones. I’m particularly worried about data leakage and how certain models interact with customer or organizational data—the potential risk of model poisoning, secrets being leaked—all the usual stuff—and then AI being used by bad actors to increase attacks on organizations. It all increases the threat and the challenge of securing and protecting workloads and machines inside an organization.
A lot of companies I speak to are already quite rational and discerning. They’re not sticking everything into a public model online. They have stringent regulations and guidelines around how employees can use AI. I’m seeing more on-prem AI use cases: companies buying their own GPUs, building AI platforms from open-source components, keeping data protected and close to home in their own data centers, and applying AI locally rather than just sending everything to the cloud.
People are realizing how valuable and important the data is. The data creates these incredible models—and it’s valuable to software companies that want to use it to train the next model. So it becomes competitive advantage, your most-prized asset. The dynamics between speed of adoption, open source vs. proprietary, the potential threat from bad actors using AI—and how we’re all in a kind of race—
David Puner: Mm-hmm.
Matt Barker: You have to kind of balance the adoption of AI with the security of it. I think the dynamics around that are what I’m finding so interesting at the moment. It feels like we’re at the beginning of a whole, brand-new movement. This could be the foundation of the IT industry for the next 50 years, for example.
It feels a little like what I saw with Linux 20 years ago—and then, in a smaller way, NoSQL five to ten years later—and then the public cloud after that, and then Kubernetes and containers. I feel like we’re at the beginning of the age of the AI agents.
David Puner: So then, for organizations looking to adopt workload identity, where should they start? Are there specific high-impact areas or quick wins they should focus on first?
Matt Barker: You can’t protect or secure what you don’t know about. So I think the discovery-and-context piece is really important to begin with: understanding where your workloads are, what their identities look like, how risky they are, and whether there are any potential issues with those. First and foremost, you want a really good handle on that.
Once you’ve done that, think about using workload identity as a way to secure access for workloads. It’s not always easy—I used to get a lot of criticism: “It’ll take years before my applications are SPIFFE-aware.” But the pressure from attacks, the growth of workloads, and the realization that you don’t want long-lived credentials hanging around is really accelerating interest in testing this out.
What many people don’t realize is that they’re already using cloud-native projects like Istio (a service mesh). It already has SPIFFE in it—so if you’re using a service mesh, you’re already using SPIFFE. There are parts of the organization where it might be easy to use SPIFFE in the short term.
So yes, identify the low-hanging fruit—the easy places where you can use workload identity instead of a secret. But it’s not a destination; it’s a journey. Figure out how many workloads you’ve got, what the identities look like, the context and risk profiles. Then say, “There’s a set of workloads where we can easily apply workload identity—let’s do those first. For the rest, continue to use secrets.”
Over time, as education increases and the tooling gets easier, you’ll increase the share of workloads secured with identity. You might reach 40, 50, 60% using workload identity rather than secrets—whereas today it might be zero. That relative proportion will shift. That’s how a lot of my customers are thinking about it. They realize they’re going to have to do something.
David Puner: As far as that goes—education, starting somewhere—moving away from secrets requires a mindset shift. How can security leaders overcome resistance and get buy-in from their teams?
Matt Barker: A good workload-identity security solution should speed up how you build and deploy software. If you do it correctly and bring the right teams to build, manage, and deploy it, then every time a workload spins up, you know you can trust it straight away. The developer experience becomes easier: no messing around with secrets.
There’s a big incentive from the board or CISO to reduce secrets and use workload identity because it’s more secure and it helps you get a handle on workload identities. But you can also sell it to developers and platform engineers on speed: adopt this and it should make building, deploying, and managing workloads much easier—if you do it right.
And the amazing thing about CyberArk is that we already have a Secrets Manager. I’m not saying replace it or get rid of secrets overnight. Keep your existing Secrets Manager, and we’ll give you the option to apply workload identity where you can. So you get a hybrid approach. We’re trying to meet customers where they are and be pragmatic because we know it takes time.
But I’m also hugely passionate about getting people to build this into their applications—so they can get off the “drug” of using secrets. Right now, we want to get them out of the Stockholm syndrome and into thinking about workload identity instead of secrets.
David Puner: You’ve definitely conveyed that passion in this conversation, and I appreciate it. Before wrapping up, I’ve got to go back to something you mentioned earlier—your time selling books door to door. I saw somewhere that you did that in Montana. What were you up to there, and why Montana of all places—the third least densely populated state in the U.S.? And how did it factor into where you are today?
Matt Barker: One of my favorite places in the U.S.—it’s gorgeous. What a great state Montana is. Big Sky Country—what a place. It’s been quite well advertised recently with the series Yellowstone. But not many people knew about it before. Yellowstone National Park and Glacier National Park are absolutely incredible, by the way. Massive fan of Montana if anyone wants to take a trip.
I was a student at the University of St Andrews in Scotland, and I heard about this crazy exchange student trip where you’d get trained to sell door to door. They’d send you to various locations around the country, and your summers would be spent knocking on doors trying to sell books.
David Puner: Mm-hmm.
Matt Barker: I was transported to Montana and given a town to live in—you had to find your own place. I literally knocked on doors to find my first place to live. Then you’d start your day knocking from about 8 a.m. until 7 or 8 p.m., trying to sell books, take deposits, and use the deposits for living money.
It was like a video game: take deposits, have a bit more money—I could upgrade my ramen noodles to ramen with salami in it. After about a month, I had enough to buy a car, which enabled me to get into the countryside and sell there. It was a series of steps—like running your own business from scratch. I did that for three summers.
Sales—and the rigor of getting up day in, day out, demoing books for 10 hours, understanding objections, getting inside the door, sitting down to have the conversation, getting over issues, and closing—was foundational to my career. I took those principles and applied them to selling software.
David Puner: Any great tips on getting bugs off windshields?
Matt Barker: It wasn’t bugs I had a problem with. In Montana there are a lot of animals and roadkill. It was more avoiding the deer.
David Puner: To wrap up—and thank you so much for everything—what’s one piece of advice you’d give to CISOs and other security leaders who are grappling with the challenges of securing machine identities?
Matt Barker: …
David Puner: Are you interested in buying an educational book today? Is that the right approach? Did I do that right?
Matt Barker: You did it. You did.
David Puner: Do you have to stand three steps in front of the door? I’m trying—
Matt Barker: I’m thinking whether I’d be able to sell a book to you at this point. Do you have one of those “No Soliciting” signs outside? I used to hate looking at those signs.
David Puner: I once did a training to sell frozen seafood. It involved where you’re supposed to stand and how you get your foot in the door. Luckily I got a house-painting job the week after and didn’t have to do it.
Matt Barker: I used to have to compete against the Cutco guys—knives. And then the frozen seafood or meat—I can’t remember what that was called.
So—one piece of advice. What I’m seeing work is organizations creating a workload identity working group of key people across the org: platform engineering (cloud-native platforms), identity and access management, security architects. They create this working group and go through the steps I mentioned: discovery of workloads and workload identities; building context. That gets you part of the way—then think a step further about how to move beyond discovery and do something better than before.
It boils down to: create a workload identity working group. Explore NHI discovery. Explore SPIFFE. Start thinking about what the strategy looks like over the next few years. The great thing about SPIFFE is it’s open source, part of the CNCF. There are working groups. The book is open source (spiffe.io). We do a Day Zero event every KubeCon. The next one is in Atlanta on November 10, where Uber and Square/Block will talk about their adoption of SPIFFE and how they use it at scale to tackle the problems we’ve discussed.
David Puner: And soon thereafter—or before—perhaps we’ll hear from those folks on the Security Matters podcast.
Matt Barker: Can we break that here? I hope so. I haven’t asked them yet, but I’ll go twist their arm and make sure they do it. If they’re listening—then you’d better do it.
David Puner: We would appreciate that.
Matt Barker: So yes, a workload identity working group is a really good start. If you don’t think it’s important to set up a working group, I think you’ll be convinced in the next six to twelve months as we start to see the fallout from the AI agent thing. Get ahead of the curve. Do it now. It’s a longer process, but we need to start tackling it so we’re ready for the swarms of AI agents coming over the next five years.
David Puner: Day Zero in Atlanta in November. Matt Barker, thank you so much for coming onto the podcast. This has been really great.
Matt Barker: Thank you so much for having me and letting me talk to my heart’s content about workload identity.
David Puner: All right—there you have it. Thanks for listening to Security Matters. If you like this episode, please follow us wherever you do your podcast thing so you can catch new episodes as they drop. And if you feel so inclined, please leave us a review—we’d appreciate it very much, and so will the algorithmic winds.
What else? Drop us a line with questions or comments. If you’re a cybersecurity professional and you have an idea for an episode, drop us a line. Our email address is [email protected]
(all one word). We hope to see you next time.