September 10, 2025
EP 15 – Why banks need to treat machine identities like VIPs
In this episode of Security Matters, host David Puner speaks with Andy Parsons, CyberArk’s Director of EMEA Financial Services and Insurance, whose career spans from the British Army to CISO and CTO roles in global financial institutions. Andy shares hard-earned lessons on leadership, risk management, and the evolving cybersecurity landscape in banking—from insider threats to machine identity governance and the rise of agentic AI.
Discover why “you can’t secure what you can’t see,” how manual processes fail at scale, and why treating machine identities as “first-class citizens” is no longer optional. Andy also explores the privileged access paradox, dynamic access management, and how AI is reshaping compliance, trading, and operational resilience.
Whether you’re a security leader, technologist, or financial executive, this episode offers strategic insights and practical steps to future-proof your organization in an era of accelerating digital risk.
You get a request to pull a laptop off the network, routine stuff, but the pushback is immediate and oddly defensive. That's the clue. You press harder. Turns out the machine was quietly leaking sensitive financial data. No alerts, just a reaction that felt off, and that was the thread that unraveled an insider theft.
Just like people, machines leave trails, but unlike people, they multiply fast and often go unnoticed. In financial services, risk often hides in these subtle tells. And while you're catching those signals, machine identities like service accounts, APIs, and bots are multiplying fast. Manual processes lag, reactive playbooks break at trading-floor speed, and with agentic AI, autonomous systems act on their own. Suddenly, identities aren't background.
They're first-class citizens, and they demand ownership, governance, and control. Our guest today, Andy Parsons, has lived this reality end to end. He's CyberArk's Director of EMEA Financial Services and Insurance. He's also a former CIO, CTO, and CISO across global banks. And before that, he spent 15 years in the British Army honing discipline and mission focus.
From trading floors to boardrooms, he's led through insider threats, outages, and the high-stakes reality of moving trillions every day. This is Security Matters. Let's dive in.
David Puner: Andy Parsons, CyberArk's Director of EMEA Financial Services and Insurance. Welcome to the podcast. Thanks for coming on.
Andy Parsons: David. Thank you.
David Puner: Great to have you here. Andy, where are you sitting? Are you, are you in England?
Andy Parsons: Yeah, so home is just north of London. London is quite a big city, but I'm outside it, out in the countryside. It's the end of the day for me today.
Uh, middle of the day for you.
David Puner: Yes, and thank you for joining us at the end of your day. Your career spans decades in financial technology and cybersecurity, from the trading floor to the boardroom. And you've been a CIO, a CTO, and a CISO. And before all that, you spent 15 years in the British Army.
How has that experience shaped your perspective on leadership and risk? And how have you seen the role of cybersecurity evolve in financial institutions over the years?
Andy Parsons: When you say it like that, it just makes me sound really old. But it's been an amazing opportunity to have had two really good careers, and there are so many similarities between them. That time in the military teaches you to think systematically about risk. If you think about the life of someone in the military, your whole purpose is defense and getting into situations that provide that safety. So a lot of what I've been doing at the forefront of that is around people decisions and looking at operational security in the larger context.
But most of all, it's that instilled discipline around establishing processes, doing the right thing, documenting what you're doing, making sure you're not leaving any stone unturned in terms of vulnerability. From there it's a fairly easy transition from the military into the world of technology security and everything that comes with it.
In the military, the British military, we talk about the ethos of train hard, fight easy. So a lot of what I found myself doing over my career is constantly practicing, constantly asking those difficult questions of, you know, what could go wrong? What are those extreme but plausible scenarios that we can all sit around a table and poke at and say, why would it work that way?
Why would we let someone do that? What are the risks? What could go wrong? What other scenarios are there? The military's very good at preparing you for the unknown and helping you figure out your own leadership style, how you manage people, how you manage crisis. But coming back to some of the similarities, I look at the role today as a CIO, CTO, or other C-suite role in a bank.
You've got sort of three dimensions. You're looking at the legacy that you're trying to manage, the today and everything around today and its urgency, and you've also got to learn to keep one eye on the future and think about those what-if situations. So I think that's where you become mission focused.
You have resource constraints in both of those types of jobs. You have to do team development and build high-performing teams, which is the cool part. And then you've got to manage the poor performance, which in most cases takes up more time than you would like.
David Puner: Mm-hmm.
Andy Parsons: And amongst all of that, it is about managing risk. If you work in financial services, you are gonna be managing risk every day. Can I open the business? Can I trade? Can I execute, day in, day out? And now 24/7, because that's what you and I expect from our banking systems.
David Puner: Over the course of your career, you’ve held leadership roles during some challenging moments in financial cybersecurity.
Is there a key lesson or two that you’ve learned that’s shaped your approach to security today?
Andy Parsons: Yeah, absolutely. Back to that point: every day you are managing risks. You are managing huge schedules of change that sometimes work and sometimes don't. Originally, when I started, we talked and thought about systems risk.
Actually, if you roll the clock forward, we're now talking about operational risk, organizational risk, and systems risk, and you've got to look across those three dimensions. And the massive shift more recently is from primarily human-focused identity concerns to where we are today, where the ratios are something like 82 to one in terms of machine to human identities.
The key lessons, and there are many more than three, but if I was to choose three, I would say: you can't secure what you can't see. Sounds really easy. There's military thinking in here as well. So discovery. I've always talked about situational awareness. What are the data flows?
What are those APIs? What data is moving around your organization? Data is the lifeblood of financial services. So the point here is that you can't secure what you can't see. Reactive approaches, in my experience, normally fail when you get to scale, and so do manual touchpoints. If I go back 20-plus years, one of the questions I would often ask teams when I joined a new part of the organization, and I've been lucky, I've worked for tier one banks all the way through to fintechs and startups.
You go into those big businesses and say, okay, what is the process to add or remove access on an account? Talk me through it, get the pen out, let's get it on the board, and you start learning just how many manual touchpoints exist. So I think you have to find a more strategic way of dealing with that, because manual processes don't scale with modern AI and machine identity.
I'll give you a good example. In a role I was in in the last few years, there was a core banking system that runs 20% of the world's economy, a couple of trillion dollars of accounting flow per day, with 25,000 batch jobs running per night. You cannot manage that as a human and spot the issues.
So we need to see how we manage that differently, and the lesson learned is that those touchpoints are really what slow you down. The last thing in terms of lessons learned is: watch for the reaction. People tend to give away their responses to decisions when you push, and certainly when they're out of sync.
I've been here many times; just look for the reaction. The example I'll give you is, I've been in a business that's gone through a data theft.
David Puner: Mm-hmm.
Andy Parsons: And it was a criminal actor. It was an insider.
David Puner: And is this when you were a CISO?
Andy Parsons: Yeah, well, it was a joint role: IT operations and CISO. So you go through that process of situational awareness and say, right, what have we got? Give me a list of every asset, every MAC address, every IP address. Let's match it. Let's go and do that situational awareness, and we'd call that discovery today. But for this example, I spotted a laptop I didn't like the look of.
So we made some noise about getting it off the network, and the response to that demand was a little bit out of context for me, and that made me dig my heels in further and say, okay, there's clearly something here. Anyway, that turned out to be the laptop that was leaking the data. But you learn hard lessons, and you have to accept that you're not gonna get it all right.
But I think you can follow those lessons: you can't secure what you can't see, manual processes don't scale, and watch for reactions. Those would be my tough lessons learned.
David Puner: Do you miss your days as a CISO, or is it one of those things that you can only do for a certain length of time and then you need to do something else?
Andy Parsons: In many of those roles, I was multi-hatting, dual-hatting. I was in large businesses, and it's different around the world; the regulators tend to decide what job title you have and what accountability and responsibility you have through some sort of defined matrix. So in some of the roles I've been the CIO and the CISO, and I had to carry those two accountabilities.
Do I miss it? I don't miss being called up at two o'clock in the morning, digging the laptop out and logging in with blurry eyes; as you get older, that becomes more of a challenge. I don't miss that, but I do miss being in it. I think I've always been a person that thrives on a bit of chaos and a bit of crisis, and I think that's when companies get the best out of me. And as it turned out,
I've been through a lot, whether it's data thefts, or data centers flooding, or trading floors catching fire. I mean, the list just goes on and on, and that's outside of a normal day, which is failed changes and changing market conditions. If you look at the economic climate these last 10, 15 years, it's been really difficult.
So, no, I don't miss it from that perspective, but I do miss being around that level of intensity and action. But I'm gonna call it out: it's a young kid's game, so I'm quite happy using my experience to help others. I think I'm better off now.
David Puner: You've got such a calm demeanor, and I always find it striking that more often than not, when you speak to CISOs or former CISOs, they are seemingly very calm as a collective group.
Do you think that is an essential trait for CISOs?
Andy Parsons: If you've got a large business and a large group of people to manage, you need to have some structure and some ability to make good decisions, because the impacts are huge. If you are running a core accounting system that's shifting a couple of trillion dollars of accounting flow per day, you've got to be able to assimilate all the information, ask the right questions, probe the right things, read the people. I think you do need good listening skills; that would be the thing I would encourage. Those listening skills become so important, because there's so much information thrown at you these days.
And in the C-suite, as a CISO, I don't tend to have my hands on the tools, right? I don't have my hands on the day-to-day fine detail. So if you lose your calm, I don't think that's gonna make life better for your team. But equally, I think you do need the ability to turn up the dial as an individual. You do need to be able to get hot and bothered pretty quickly, because that's how you make decisions. Right back to the military.
David Puner: Right.
Andy Parsons: You need people who know how to react, and I've been lucky; I've been through enough in both sides of my career, if you like. I think you can portray calmness, but I do think it's important to be able to react and respond and turn up the dial when you need to, to get the response you need and that sense of urgency.
I've been in many situations where people go too far into the detail to figure out what went wrong, when actually all I want is to recover the service, open the bank, get us working again, and fulfill our purpose.
David Puner: How do all these cumulative career experiences then culminate with where you are now, here at CyberArk as Director of EMEA Financial Services and Insurance? EMEA,
for the folks who may not know, is the acronym for Europe, Middle East, and Africa. How does it all land here? What are you doing at CyberArk?
Andy Parsons: So the role, ostensibly, is to start having much more of a vertical, segmented view into how we manage our customer groups. One size fits all doesn't really work.
Financial services is very complex. Just that term, financial services, is huge. It encompasses all sorts of different types of businesses, from the really big tier one banks that run global, universal banking all the way through to the fintechs that feed the system and come up with novel solutions. You've got central banks, which are government agencies. You've got payment systems, you've got market data and infrastructure companies. It's a huge ecosystem. So what we need to do is verticalize across Europe, Middle East, and Africa and get a little bit closer to our customers in terms of how we deliver solutions and partner with them.
So we need to understand their perspectives, understand the world that they work in, and that's what I bring to the table: that knowledge, that experience of what it's like to be on the other side. It's definitely more aligned to the sales side, supporting the sales cycle. But equally, and today's a good example, I'll be asked questions about specific regulations in a country and how our solutions work or don't work and whether there's a problem, and I just try to inject a little bit of knowledge into those conversations. So it's quite broad every day; there's a lot to cover. But we have a verticalized approach: I have a peer who looks at manufacturing and another colleague who looks at central governments, so between the three of us, we're getting a lot more precise with the message.
David Puner: Moving then to something that I imagine you're probably talking with customers a lot about these days: machine identities. Service accounts, APIs, and software bots now outnumber human identities, as I'm sure you well know, by more than 80 to one, and in the finance sector it's even greater.
It's 96 to one, as we found in our Identity Security Landscape report. What security challenges does this machine identity explosion particularly pose for banks and insurers?
Andy Parsons: I love the fact that that number is definitely attracting interest. Uh, I’m gonna argue here that I think that might be actually quite conservative when you start looking at some of those big tier one banks.
I'm sure I've seen ratios even higher than that. But for context, a lot of those machine identities aren't background processes; they're the connectors that hold everything together. We're a much more connected world, and the very nature of banking is about interconnectedness and that network effect.
I mean, that's how it works. That's how you and I and our families can go and travel and pay and do all the things that we want to do, right? So you've got interconnected global systems operating across multiple countries, with discrete controls, at scale. I think the challenges are the very obvious ones. There's an invisible attack surface: most security teams can't see their machine identities, let alone secure them. So there's a big piece of work there, and it has exploded in the last few years. I think there is also an issue around cascade risk. What I mean by that is that the fundamentals of banking are about payments, right? Inbound and outbound. One payment instruction, when you pay for your goods at a point of sale, involves dozens and dozens of different machine identities to allow that transaction to go from the merchant through whatever systems. I mean, dozens, right? I could list them off; we could be here for a while. But you go through all those systems, all the way through the payment rails, through the central banking system, through the liquidity engines, and then they traverse all the way through another business and all the way back down as an acknowledgement.
So that cascade risk is actually quite large. I think there's a big scale mismatch as well. We have historically spent huge amounts of money protecting human access, right? I definitely remember the days of engineers doing a lot of work out of hours, because when you work in technology in a bank you can't really touch things during the day, certainly in a capital markets type business.
So you've got engineers walking around with sheets of paper with user passwords on them. Just thinking about that... So again, back to that point about removing manual process: we've got a lot of machine IDs that operate in the shadows. We want to remove manual processes; we need to reduce the number of touchpoints.
But often the first question I would ask, and I mentioned this before, is: who can go in and touch databases? That's the sort of thing we've got to manage, whether it's part of your discovery set or it's something you know about and have chosen not to do anything about.
David Puner: You've talked about treating machine identities as first-class citizens in security. What does that mean, and how can organizations get a handle on it?
Andy Parsons: What it means is that we want machine identities to get the same systematic attention as human identities, right? We're all very good at labeling the person with all those kinds of attributes, and that's very easy.
We need to do the same. We need to treat machine identities as first class, and that concept is about ownership and lifecycle: how you manage the lifecycle of that identity, how you monitor it, what the governance structure is. And I'd go even further and say, at what point do we start looking at identities and doing a performance review on them in the same way we do a performance review on people?
I think we probably do need to do that anyway, but it's about thinking about it differently and treating them just like humans.
David Puner: It's a lot of performance reviews to conduct.
Andy Parsons: Well, you can automate it, right? This is ripe for AI: being able to start looking at how AI learns from what it's seeing,
how you write the rules, how we exploit it. It's a game changer. But I think that's where we're gonna have to go. So I like that idea; that notion of first class as a concept means we need to start thinking about it differently.
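Andy's notion of an automated "performance review" for machine identities lends itself to a simple illustration. The sketch below is a minimal, hypothetical Python example (the field names, 90-day thresholds, and scope labels are assumptions for illustration, not any product's behavior) that scores an identity on the attributes he mentions: ownership, lifecycle, and usage.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class MachineIdentity:
    name: str
    owner: Optional[str]              # accountable human, if any
    scope: str                        # e.g. "read-only", "read-write", "admin"
    last_used: Optional[datetime]     # last observed activity
    last_rotated: Optional[datetime]  # last credential rotation

def review(identity: MachineIdentity, now: Optional[datetime] = None) -> list:
    """Return a list of findings, the machine equivalent of a performance review."""
    now = now or datetime.now(timezone.utc)
    findings = []
    if identity.owner is None:
        findings.append("no accountable owner assigned")
    if identity.last_used is None:
        findings.append("never used since creation")
    elif (now - identity.last_used).days > 90:
        findings.append(f"dormant for {(now - identity.last_used).days} days")
    if identity.last_rotated is None or (now - identity.last_rotated).days > 90:
        findings.append("credentials overdue for rotation")
    if identity.scope == "admin":
        findings.append("holds admin scope; confirm it is still required")
    return findings

# Example: a forgotten, over-privileged service account with stale credentials.
svc = MachineIdentity(name="svc-legacy-report", owner=None, scope="admin",
                      last_used=datetime(2025, 1, 15, tzinfo=timezone.utc),
                      last_rotated=None)
print(review(svc))
```

Run periodically over the whole inventory, this kind of check is what turns "first-class citizen" from a slogan into a repeatable review cycle.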
David Puner: So then what are some immediate steps organizations can take to address the risk posed by unmanaged machine identities, especially in financial services?
Andy Parsons: So back to my point earlier: situational awareness is key. You can't secure what you can't see, so you've got to start with automated discovery. Be more curious, which is easy to say, but my experience is that the security people in the teams I've worked with are curious. Sometimes we call them the Ministry of Business Prevention, but that's because they care.
And I've got many, many examples where we've stopped bad things happening. But be more curious: there are still a lot of forgotten identities, API keys from old projects, expired certificates, and other interesting subjects. Definitely have a look at assigning ownership; back to that first-class point, assign them ownership.
Every machine needs a human accountable for it, however you divvy that up. The quick win for me would be to go and have a look at your dormant identities that suddenly become active. There are many ways you can monitor that, but go and find your dormant identities; it's the one that's sat there, done nothing, and suddenly pops up and starts working.
Now, in the world of financial services, that does happen, because we work in monthly, quarterly, and half-yearly reporting cycles, and there's lots of work that goes to regulators or to different parts of the organization to look at how performance is going. But sometimes that's your first sign of a compromise.
So there are three good steps you could take straight away to start looking at it.
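To make the "dormant identity that suddenly becomes active" quick win concrete, here is a minimal, hypothetical sketch. The identity names and the 90-day dormancy threshold are assumptions for illustration; real monitoring would also whitelist the legitimate reporting cycles Andy mentions before raising an alert.

```python
from datetime import date, timedelta

DORMANCY_THRESHOLD = timedelta(days=90)  # assumption: tune to your environment

def flag_dormant_reactivations(last_seen: dict, todays_activity: set, today: date) -> list:
    """Flag identities that were dormant past the threshold and are suddenly active today."""
    flagged = []
    for identity in todays_activity:
        previous = last_seen.get(identity)
        if previous is None or (today - previous) > DORMANCY_THRESHOLD:
            flagged.append(identity)
    return flagged

# Example: a reporting service account quiet since January fires again today.
last_seen = {"svc-quarterly-report": date(2025, 1, 31), "api-payments-gw": date(2025, 9, 9)}
active_today = {"svc-quarterly-report", "api-payments-gw"}
print(flag_dormant_reactivations(last_seen, active_today, date(2025, 9, 10)))
# -> ['svc-quarterly-report']  (check against known reporting cycles before treating it as a compromise)
```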
David Puner: Let's talk about AI and financial cybersecurity. AI is often described as a game changer and a potential risk amplifier in financial cybersecurity. How is today's AI different from the machine learning banks have used for decades, and why does it require a new approach to risk management?
Andy Parsons: I think it's really important to make a key distinction here. Traditional machine learning, and I've been around and playing with that stuff for more than 20 years, probably more than 25, is very narrow, somewhat contained by the data sets that feed and manage it.
So typically the use cases were risk scoring. We could look at things like market risk and product risk; they're two different things. We could look at analytics and specific use cases for a desk, so they can pull back historic tick data. For example, if you've got a client on a line on your 32-line dealer board and you've got 10 panels and kdb systems pulling back terabytes of data instantly the moment that phone goes, machine learning's great, right? It knows what to do, and you can use your quant analytics and your algo trading systems to drive those conversations. So that's the distinction for machine learning. Today it's very different. It's much, much more expansive.
It's much more autonomous. AI agents need to start looking across all the different silos inside a company to make complex decisions, and often act without human intervention. And this is distinctly different from the automation of batch files and processes that have sequential processing.
David Puner: Mm-hmm.
Andy Parsons: And an example, to bring it to life a little bit, is around compliance. With compliance and the use of AI, those agents would need access to just about every type of transaction, and the profiles and the threats and all the different vectors, to be able to pull the picture together for you to do something with it.
There's a competitive advantage in being able to build agents to do that. And AI is, in part, a game changer. It is gonna make a difference for those that learn to exploit it safely, with the right level of control. But AI systems need a lot of access to the right types of data, and historically we've all had our challenges with the types of data and trying to figure out how to get the best out of it.
But that makes us high-value targets, and what you do next then becomes really important.
David Puner: Staying on the AI thread for a moment, you mentioned compliance. What other areas do you see as the most challenging or risky when it comes to AI in finance?
Andy Parsons: So I'll go back to my experience. I started my career on the trading floor: securities, equities, and treasury capital markets.
Those are probably the right areas for exploiting this. Trading systems, direct market access, split-second decisions. We're talking nanosecond response here, not microsecond. Time pressure. What do I mean by that? There's something called FIX, the Financial Information eXchange, and that's where you can actually load up your screens and execute on an automated schedule.
So those can happen in nanoseconds.
David Puner: Mm-hmm.
Andy Parsons: And they are basket trades, so you could have 50 trades going in at the same time, because you want to take advantage of the way the whole mechanics of the markets work. So you've got time pressure; you can only trade for a certain amount of time per day.
You've got market pressure, and you've got other aspects, like the fact that the trading day starts with Asia, then goes to Europe and London, and then there's a big gap before New York opens at 2:30. The New York open at 2:30 is where you can start building massive headway in market trading and getting better outcomes, because you've seen what's already happened in the rest of the system.
I think things like credit assessments are really important too: giving an agent access to customer data and all the regulatory implications of data, things like GDPR and Privacy Shield; we know all about that. And then the last one is around compliance, whether that's your internal compliance frameworks or the regulatory compliance frameworks linked to your license.
Could you imagine giving broad data access to manage regulatory reporting intraday? But I think the key risk here is really around giving an AI agent trading authority that could cause a market disruption, and that could happen in seconds, not hours, and that is a terrifying thought.
That's financial services. Retail banking is typically slower; there's more friction in the system and more volume, so it's gonna affect people in different ways. But trading and capital markets type businesses, where you are in the market and trying to execute quickly for either individuals or institutional clients, that's where it's gonna make a difference.
David Puner: All this sensitive data obviously makes for high-value targets for attackers. What steps should or can organizations take in the foreseeable future, the next few months to a year, to make sure their AI systems stay secure and accountable?
Andy Parsons: So the most important thing is to not treat AI like a black box. You’ve gotta be able to kind of pull this apart a little bit.
So the immediate priorities I would advocate are around inventorying access patterns: can you map what data each AI system touches, end to end? There are lots of tools that can help you look at data flows and API flows. You could look at identity governance: treat AI systems like privileged users, right?
Treat those AI systems like first-class systems. I think building monitoring for behavior is gonna be another really important aspect, to be able to detect when AI systems act within their normal behavior pattern and when they act outside it, if that's what you set them up to do.
And then ultimately there's a human oversight checkpoint here. Believe it or not, we're still gonna have someone paid to build frameworks, pull through data, and look at which critical decisions a human needs to go and validate. So there's a few things there.
None of those steps are quick; they're gonna take you time to establish, but that's the start point.
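As a rough sketch of those priorities (mapping what data an AI system touches, treating it like a privileged user, and keeping a human oversight checkpoint), here is an illustrative Python example. The agent name, data scopes, and decision types are hypothetical, not drawn from any real deployment.

```python
from dataclasses import dataclass, field

@dataclass
class AIAgentProfile:
    name: str
    declared_scopes: set                 # data the agent is approved to touch
    requires_human_review: set = field(default_factory=set)  # decision types needing sign-off

def check_access(profile: AIAgentProfile, observed_scopes: set) -> set:
    """Return any data scopes the agent touched that were never declared (inventory drift)."""
    return observed_scopes - profile.declared_scopes

def route_decision(profile: AIAgentProfile, decision_type: str) -> str:
    """Send critical decision types to a human checkpoint; let the rest proceed."""
    return "human_review" if decision_type in profile.requires_human_review else "auto"

# Hypothetical compliance agent with a declared, privileged-user-style profile.
compliance_agent = AIAgentProfile(
    name="aml-screening-agent",
    declared_scopes={"transactions", "sanctions_lists"},
    requires_human_review={"file_suspicious_activity_report"},
)
print(check_access(compliance_agent, {"transactions", "customer_pii"}))      # {'customer_pii'} -> investigate
print(route_decision(compliance_agent, "file_suspicious_activity_report"))   # 'human_review'
```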
David Puner: Do you think Agentic AI is an identity security problem?
Andy Parsons: Yes, I think agentic AI is an identity security problem. Whether it's on the machine learning side or generative AI or agentic AI, whatever you look at, we're asking systems to go out and do complex work on behalf of what historically was automated work, right?
So we're gonna go deeper, we're gonna go further, and we can only track that if we know the identities. If you've got an agentic system that can generate new robots or new agents, you can't have that freewheeling. And the other aspect is that the regulatory frameworks right now are very clear that you must have control of them. In order to have control of them, you've got to be able to identify them.
So you get that full loop.
David Puner: Moving over then to privileged access and what you've referred to as the privileged access paradox in financial services. How does it play out in banks' day-to-day operations, and what is the privileged access paradox?
Andy Parsons: It's a real challenge for the here and now. Financial institutions, large and small,
are caught between two opposing problems. They need solid privileged access controls to meet regulatory compliance requirements and manage risk, but they also need speed and agility. And this is the paradox everyone's working through: how much control, permission, and ability do I give something, versus how quickly it can then go and do what it needs to do?
If it's done well, we'll get good outcomes very quickly. If it's done poorly, we'll get bad outcomes very quickly, and that's where the risk-reward becomes a problem. A couple of examples I can bring to life here. The first one: a trader on a trading desk needs more access than a normal person in the bank.
They write their own code, they manipulate data, they execute trade flow, and they need instant access to execute time-sensitive trades. The security team doesn't want to give them that access because it's wide access. Now there's your challenge: I've got to trust people, but I've also got to put the right amount of control around it, so we can give them the right access and allow them to do their jobs very quickly.
We're happy, the customer's happy, the market's open; we've done what we meant to do. I think DevOps is another great example. We went from a day where we had developers and we had operational people, and we put the two jobs together and gave them a huge amount of access.
We've given them tools that allow them to build entire environments with a few lines of code. So we want to give them broad access to systems to do stuff really quickly, but that potentially creates a massive security exposure if you have compromised accounts; I've had teams that have had compromised accounts.
I've had teams that have spent an entire quarter's worth of budget building stuff we didn't really need because they were just giving it a go. And then compliance officers and compliance investigations will need even deeper access to go and investigate any of those issues where things go wrong, or even just to validate that a trade which might have been outside the normal strategy for the day was a good execution.
David Puner: That brings to mind zero trust. How does zero trust factor into how you're thinking about all this? And then how can emerging tech like AI, bringing it back to AI again, help solve the privileged access paradox?
Andy Parsons: It's a good starting point, isn't it? Zero trust. That's the paradox, and we're gonna have to work our way through it.
You learn, and you get more granular about the level of access that you want to give. So you start with zero trust and then you open that up as far as you want to. You're gonna have to be principle-based, it's gonna have to be dynamic, and we're gonna have to respond quicker. We're probably gonna have agents that allow those permissions to actually execute what they need to. So it's gonna come full circle, but the right thing to do is to get those steps in place, get the governance in place, and make sure you understand and document the limits that those agents can work within.
David Puner: You mentioned dynamic. What role does dynamic access management play in making all this happen?
Andy Parsons: It could be the mediator. You can make real-time decisions based on context: risk, business needs, market conditions, and allow AI to be that intelligent mediator making its own decisions. We've always been quite binary. Rather than having something that's always yes or always no, we're gonna have to start moving towards something that is yes, but under these conditions, for this amount of time, with this additional context, and knowing when it ends. But again, that's not new. We're just taking the next step in terms of allowing those agents to be the mediators of what they need to do. So again, get the context of how dynamic you want them to be.
Make sure there's that element of being time-bound. Scoring the risk isn't so easy, but I go back to behaviors and say: what does normal look like? What does good look like? What does abnormal look like? And start looking at the point where you want another set of controls to kick in to start limiting or throttling how much exposure there is.
With time bounds, for example, you could look at that in terms of how you would pass off from a trading desk in Asia to Europe to the US. Agents would be very good at knowing when someone is doing something out of the norm and being able to shut down certain parts of the system. So that's the dynamic bit: intraday, things happen very quickly.
Our trading floors are very much driven by what is going on in the whole world, and you are trying to get ahead of that where you can, or respond to it quicker than everyone else.
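To picture "yes, but under these conditions, for this amount of time," here is a minimal, hypothetical decision function. The scope names, risk threshold, and time windows are illustrative assumptions, not a real policy engine, but they show the shift from a binary yes/no to a conditional, time-bound grant.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class AccessGrant:
    approved: bool
    expires_at: Optional[datetime] = None
    conditions: tuple = ()

def decide(request_scope: str, risk_score: float, market_open: bool,
           now: Optional[datetime] = None) -> AccessGrant:
    """Not a binary yes/no: approve with conditions and a time bound, or deny."""
    now = now or datetime.now(timezone.utc)
    if risk_score > 0.8:
        # Behavior looks abnormal: hard no, and a candidate for other controls to kick in.
        return AccessGrant(approved=False)
    if request_scope == "trade_execution" and market_open:
        # Yes, but only for an hour and only under extra conditions.
        return AccessGrant(approved=True,
                           expires_at=now + timedelta(hours=1),
                           conditions=("session_recording", "notional_limit"))
    # Default: a short-lived grant that expires on its own.
    return AccessGrant(approved=True, expires_at=now + timedelta(minutes=15))

print(decide("trade_execution", risk_score=0.2, market_open=True))
print(decide("trade_execution", risk_score=0.95, market_open=True))
```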
David Puner: Moving into strategic and practical advice. What’s the first step for financial institutions just starting to tackle machine identity and AI governance?
What quick wins can they focus on over the next few months?
Andy Parsons: Yeah, quick wins for the next six to twelve months: you can't argue with data, so get that discovery done. You've got to automate that discovery and keep it running. Build the profile so you can understand what changes, and figure out how you map your machine identity landscape.
That's gonna be the first part. Most organizations find more machine identities than they expect, and we've proven this, because we have a value assessment process that we run, which normally attracts an awful lot of attention from the C level when we start showing them the number of dormant accounts, dormant identities, or identities that have never been used even though a lot of time was spent setting them up.
So there's a lot of low-hanging fruit that you could go after for immediate impact.
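To show what "automate the discovery and keep it running" could look like in its simplest possible form, here is a hypothetical sketch that diffs each discovery run against the previous snapshot. The file name and identity names are made up for illustration; real discovery would pull from vaults, cloud accounts, CI/CD pipelines, certificate stores, and so on.

```python
import json
from pathlib import Path

# Hypothetical snapshot file produced by whatever discovery tooling you run.
SNAPSHOT = Path("machine_identity_inventory.json")

def diff_inventory(current: set) -> dict:
    """Compare today's discovered identities against the last snapshot, then persist the new one."""
    previous = set(json.loads(SNAPSHOT.read_text())) if SNAPSHOT.exists() else set()
    changes = {
        "new": sorted(current - previous),      # never seen before: assign an owner
        "removed": sorted(previous - current),  # disappeared: check decommissioning
    }
    SNAPSHOT.write_text(json.dumps(sorted(current)))
    return changes

# Run on a schedule (e.g. nightly) so the profile of what changed is continuously refreshed.
print(diff_inventory({"svc-ledger-batch", "api-fx-rates", "bot-recon-eod"}))
```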
David Puner: As I'm sure you well know, it isn't easy to get buy-in for security initiatives. How can security leaders make the case for things like machine identity governance to the C-suite and the rest of the business?
Andy Parsons: The short answer is there’s no single one way of doing this.
You've got to frame it in lots of different dimensions and perspectives. Generally within financial services, you are always looking at and thinking about those three aspects: operational risk, systems resilience, and organizational resilience. So I'd frame it as operational resilience, not just security.
I think that's a kind of easy one to go after, and people are better at doing that these days. There are lots of examples you can point to where a single compromised machine identity can stop a trade flow in its tracks, or can create a data theft that cripples an entire organization. And the reputational aspects of that are always worth putting forward. So frame it as operational resilience, and it's gonna enhance that posture.
Regulators like that language. Demonstrate how that single issue could compromise something wider and damage your business, and then, ultimately, the reputational damage, which, having been through a data theft myself, can take a decade to repair. So you've got to use the right language, and a big part of my day job now is helping us use the right language
and match it to what the C-suite wants to hear about. They might want to hear about return on investment, ROI; you've got to be careful with that, depending on the time of year. You can talk about cost avoidance. You can use great language around competitive advantage, particularly when you're talking with product people.
I hear the term heavily regulated a lot, and that doesn't do it justice. I'd argue almost every single business is under some kind of regulatory burden. But in financial services, if you can paint the picture of regulatory readiness, that will position you well for managing current compliance requirements and future requirements and being ahead of the game.
David Puner: I'm glad you mentioned regulations and regulators, because with regulations like Europe's DORA and the EU AI Act starting to address AI and machine identity risks, how do you see compliance requirements evolving, and what should organizations be doing now to stay ahead?
Andy Parsons: We've had vendors and third parties that have caused problems, and you've never really been able to hold them to account other than under the contract you have with them.
So I think there's been some evolution recently. DORA's operational resilience requirements include testing, and they specifically talk about ungoverned machine identities as operational risks. So the language is really clear, and the same with the EU AI Act; those are two examples.
And I think it's important to just point out: those are laws, and if you break the law, you go in front of a judge, right? If you breach a regulatory standard, you go in front of a regulator and you could get fined. And then you've got standards, and then you've got codes of conduct. They all have different purposes, and they all attract different levels of scrutiny and sanction if you breach them.
But the regulators specifically address third-party risk in DORA, and machine identities are how you connect those points. I'm happy there's more stringent and more EU-wide regulation coming out, so you don't have to be too selective; that's one set. But with things like the EU AI Act, which is not easy to say,
there's a lot of preparation there to start classifying your AI systems. That's already a mandatory requirement. You now have to look at that by risk level and build audit trails for AI decision making. So we've got a lot of work to do around documenting model risk. What does it do? How does it work?
Who owns it? How does it get reviewed? How is it monitored? And there's another regulation coming, the Cyber Resilience Act in the EU. Again, that is gonna require us to hold more information and pass even more information to regulators, across anyone that sells any kind of digital product.
So there's a huge evolution of change. I think it's good for consumers; that's the most important thing.
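To picture what an audit trail for AI decision making could record, here is a minimal, illustrative sketch. The field names, model name, and risk tiers are assumptions chosen to reflect the ideas Andy raises (risk classification, ownership, review), not a schema prescribed by the EU AI Act or any regulator.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDecisionRecord:
    system_id: str                     # which AI system made the call
    risk_tier: str                     # your classification, e.g. "high" / "limited" / "minimal"
    owner: str                         # accountable human or team for the model
    decision: str                      # what the system decided
    inputs_ref: str                    # pointer to the data used, not the data itself
    timestamp: str
    reviewed_by: Optional[str] = None  # filled in when a human validates the decision

def log_decision(record: AIDecisionRecord, path: str = "ai_decision_log.jsonl") -> None:
    """Append one audit line per decision so it can be reviewed and reported on later."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(AIDecisionRecord(
    system_id="credit-scoring-v3",     # hypothetical model name
    risk_tier="high",
    owner="model-risk-team",
    decision="decline",
    inputs_ref="applications/123",     # hypothetical reference to the input data
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```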
David Puner: Banks are often quick to test new technologies, but slow to roll them out at scale. So what do you think financial institutions can realistically achieve in the next few months when it comes to AI and machine identity governance?
Andy Parsons: Yeah, that's definitely been my experience with banks. There was a great example; I was talking to someone about this the other day, about talking heads, and someone said we could actually use talking heads to do work and marketing assets for us. And I remember going back to 2001, we were playing with that then.
So we haven't really come a long way with some things, and there are things that have accelerated. Banks have budgets, they have sandboxes, they have labs that allow them to go and look very quickly and test new technologies. But you're right, they are sometimes slow to put that into a productive environment.
My experience has been getting stuck in POC purgatory.
David Puner: POC purgatory?
Andy Parsons: POC, a proof of concept. You can have dozens of proofs of concept running at the same time, but they never really convert. They never go anywhere, and you just get stuck evaluating the same thing.
David Puner: Mm-hmm.
Andy Parsons: Will we get to full transformation in the next six to 12 months? I don't think so. I think we'll get meaningful progress that might create some immediate value, but I don't think financial institutions are great at pilots, and the challenge for all of them, like I said, is scaling beyond that proof of concept in a meaningful way. We have to wait for consumer demand, and we have to get consumers comfortable with the level of autonomy.
Regulators are always very careful too. I have to articulate this the right way: they're not very good at saying yes, but equally they don't say no, so you have to read between the lines in terms of what you are doing. But I think in the next six to 12 months we'll only get so far, in the same way as with generative AI. I think it'll be on the lower end, and then as people get more comfortable, it'll accelerate. So that's my wisdom.
David Puner: Do you think getting machine identity security right could actually become a competitive advantage for banks, instead of them being seen, typically or historically, as slow movers, slow to adopt?
Andy Parsons: Oh yeah, absolutely. Those that invest, and I've seen this with a few clients we're working with now, the sooner you start looking at this and working on real use cases, and let's not talk about security, let's talk about the business value it's gonna bring, the sooner we can stop that slow-moving narrative by making identity part of that business advantage. That's exactly why early movers will have a significant advantage here, while everyone else is still figuring it out. Those advantages are gonna come in the form of better operational resilience, because everyone's under more scrutiny now,
and faster innovation, and that's business innovation, not technology. It's not getting more gadgets; it's turning your business processes into something more innovative.
David Puner: Okay.
Andy Parsons: Regulatory readiness, and we talked about that earlier: if you can get ahead of compliance requirements, and this is the military bit coming out, what is that what-if moment? What is the next problem that we're gonna get? And then, ultimately, the market, right? The large banks have people that want to invest money with them in the form of savings. Those deposits become money that banks can then lend, and there's your whole ecosystem.
So the more confidence people have in those systems, the more deposits there are and the cheaper the cost of capital, right? That's the banking bit coming out. For institutions that can demonstrate proactive security and build confidence, identity is absolutely the way.
David Puner: Final question for now.
We’ll come back to you Andy, ’cause there’s a lot to talk about in this realm. What does success look like in the near term for financial institutions that embrace AI and machine identity governance? What milestones should they be aiming for?
Andy Parsons: This is a crystal ball moment. I think success will look like moving from a reactive posture to more proactive governance, and being able to prove that to the regulators, auditors, risk committees, and audit committees, who aren't objecting but will be nervous about this. So, how do you move from that reactive stance to a more proactive governance stance? I think what you do next is around inventories for machine identities, using a value assessment where you can uncover the truth of what's going on. Get those critical machine identities to have defined owners and defined expiration dates. Beyond that, you could start looking at how you automate credential rotation for noncritical systems.
So start there; there's less of an impact if that doesn't quite work the way you want it to. And then further out, 12 months out maybe, I'd like to see AI anomaly detection for machine identity behavior truly established. We're looking at this carefully, so you don't end up with hundreds of thousands of false positives to look at, because that's a resource strain.
But we do want to train models to do anomaly detection and have a meaningful reaction to it, one that helps you progress your business rather than stop it in its tracks.
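As a toy illustration of anomaly detection on machine identity behavior that keeps false positives manageable, here is a hypothetical baseline check. The call counts, history length, and z-score threshold are arbitrary assumptions; production models would be far richer, but the principle of comparing an identity against its own normal pattern is the same.

```python
from statistics import mean, stdev

def is_anomalous(baseline_daily_calls: list, todays_calls: int, z_threshold: float = 4.0) -> bool:
    """Flag only large deviations from the identity's own baseline to keep false positives down."""
    if len(baseline_daily_calls) < 14:          # not enough history: don't alert yet
        return False
    mu, sigma = mean(baseline_daily_calls), stdev(baseline_daily_calls)
    if sigma == 0:
        return todays_calls != mu
    return abs(todays_calls - mu) / sigma > z_threshold

# A service account that normally makes ~200 calls a day suddenly making 5,000 trips the flag.
history = [195, 210, 188, 205, 199, 202, 207, 190, 201, 198, 204, 196, 200, 203]
print(is_anomalous(history, 5000))   # True
print(is_anomalous(history, 215))    # False (within normal variation)
```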
David Puner: Andy, now that we’ve taken up a substantial portion of your evening, do you plan to be reactive or proactive with the rest of it?
Andy Parsons: What am I gonna do? Uh, have food. I ran an ultramarathon on Monday.
David Puner: Wow.
Andy Parsons: And my feet are a little bit sore, so I'm probably gonna go and put them in an ice bucket and eat as much protein as I can.
David Puner: That sounds like a good idea.
Andy Parsons: Yeah, eat as much protein as I can. It's a hobby, but I could call it a proactive reaction.
David Puner: Andy, let's get you to that ice. Glad to hear that you're being proactive in your personal life as well. And thank you so much for coming onto the podcast. Really appreciate it.
Andy Parsons: Absolute pleasure. Thank you very much.
David Puner: All right, there you have it. Thanks for listening to Security Matters. If you like this episode, please follow us wherever you do your podcast thing so you can catch new episodes as they drop.
And if you feel so inclined, please leave us a review; we'd appreciate it very much, and so will the algorithmic wins. What else? Drop us a line with questions or comments, and if you're a cybersecurity professional and you have an idea for an episode, drop us a line. Our email address is [email protected].
We hope to see you next time!