January 27, 2026
EP 24 – FOMO, identity, and the realities of AI at scale
In this episode of Security Matters, host David Puner sits down with Ariel Pisetzky, chief information officer at CyberArk, for a candid look at the fast‑evolving intersection of AI, cybersecurity, and IT innovation. As organizations race to adopt AI, the fear of missing out is driving rapid decisions—often without enough consideration for identity, security, or long‑term impact.
Ariel shares practical insights on what it really takes to secure AI at scale, from combating AI‑enabled phishing attacks to managing agent identities and reducing growing risks in the software supply chain.
The conversation explores how leaders can balance innovation with identity‑centric guardrails, understand the economics of AI adoption, and push for the democratization of IT without losing control. Whether you’re a CIO, an IT leader, or simply curious about the future of cybersecurity, this episode offers clear, actionable guidance to help you stay ahead in 2026 and beyond.
A product team rolls out a helpful internal AI agent, nothing flashy. Just a modest teammate designed to answer questions, spin up automations, and pull data so people don’t have to file yet another ticket with IT. At first, it’s a win. Faster answers, fewer bottlenecks. Employees feel empowered. Then the agent gets ambitious. Someone asks it to just take care of it, and suddenly it’s doing more than summarizing.
It’s provisioning access, connecting tools, touching critical business systems, all at machine speed, with permissions no one remembers granting. The whole point was self-serve, but now it’s acting like it has authority. And when something goes sideways, the post-mortem raises a strange question: was that a person, or was that an agent acting like one?
That’s the tension at the center of today’s episode. Ariel Pisetzky, CIO at CyberArk, joins us to break down what happens when AI agents start behaving like they’re in charge. He shares how to build the right guardrails without crushing innovation and what leaders should be doing now to make today’s AI reality safer, saner, and actually scalable.
Because in the race to adopt AI, FOMO can’t be your strategy. Here’s my conversation with Ariel Pisetzky, Chief Information Officer at CyberArk. Welcome to the podcast, Ariel.
Ariel: Thank you so very much. It’s really my pleasure to be here.
David: Really looking forward to digging into this one. So let’s talk a little bit about your background, which spans leadership, cybersecurity, and years of building and scaling complex systems.
For listeners meeting you for the first time, what’s the path that brought you into all of this and cybersecurity, and ultimately to your role as CyberArk CIO?
Ariel: So really this starts way, way, way back in the eighties when I played video games and I really wanted to have more, if it’s hit points or gold points or currency in the game itself.
And this is floppy disk era. And using hex editors, like file hex editors, to find out where that information is stored on the file and how do I increase it. So I’d say the curiosity of improving my game results was the first touch into the world of cyber.
I then became a COBOL programmer, so I’m dating myself here for all you programming guys out there.
David: Shout out.
Ariel: And from there I just fell in love with IT. I really, really like it more than I do programming. So beyond the love of computing itself, it’s the love of IT and of helping people and organizations and businesses do the most that they can with technology.
So that was the path to the CIO position. And in terms of scale, I’ve worked on anything and everything from on-prem to cloud to very, very big private clouds to very large cloud installations.
So it’s been a fun road so far.
David: So you joined CyberArk in the fall of 2025 and today in the CIO seat, what’s top of mind as you shape CyberArk’s internal technology and security strategy?
Ariel: For me, I would say that data democratization and even tool democratization is really the most important thing. The idea that the IT department is the forced partner for the internal clients is not one that I love.
I would like to see that my internal clients and partners choose to work with me. They actually want to work with IT. They want to get services from my team. And beyond that, I would like also IT to focus on strategy and bringing value.
So if there are tasks that traditionally could happen only within IT, but with democratization can happen elsewhere and the client can be self-sufficient, then that would be the goal.
And then from there to provide the kind of surrounding services, if it be compliance or management or security or anything around that, and the connectivity, the data warehouse.
There’s so much to do in the world of IT and in the world of service for our internal clients that really, if it’s a joint effort and not just, oh, I asked IT to do this and now I’m waiting for it to finish, really changes the game for a lot of our internal clients.
David: So then digging into democratization just a bit, do you essentially mean making things more self-serve and easier, or is there more to it than that?
Ariel: Yes. So it starts with self-serve. That is the most basic concept of it. But self-serve can mean a lot of things. It can mean that you can edit something, or you can create something, or you can automate something.
If there is a task today that you repeatedly need to come to IT for it to do, and there is some way to automate it, or there is a task that you do and you would like it automated, in the past only IT had the automation tools, had the access to that technology.
Today that technology is becoming way more ubiquitous and anyone can use it. They just need the right permission. They just need the right access. Sometimes they need the right knowledge.
All of that is improving today. All of that becomes way more accessible for our clients, and IT should be the enabler, the professionals, the people that can help you around your journey, around your business needs.
And not the sole enabler, but the enabler of knowledge, the enabler of anything that you need to do to complete your task, because eventually IT is here to support the business, to support a business goal and to increase business value.
It is not here just for the fun of technology. It is here to enable some magical thing to happen within the business.
David: Looking from internal to the external big picture, from your vantage point as CIO, the world around you is shifting so fast, especially with AI. So what do you think are the hurdles that will prevent many organizations from reaping the benefits of AI in 2026?
Ariel: First, AI is super fast-paced. Until now, you would see deals being closed for three, four, five years just for technology.
I’m not even talking about AI for a second. A lot of these things outside of cyber, like information systems, data systems, applications, suddenly look almost antiquated in a very short amount of time.
The shelf life of a lot of technology, a lot of applications, is really changing and shifting. That’s one very big challenge and a way that the business in general is changing. The business of technology is changing.
Then you have additional enablers coming in, from AI to BI tools that are now way more self-service, to the ability to have a lot more compute at the fingertips of anyone, any citizen of any business out there.
These things together create a real change in the IT landscape in terms of how you enable access to systems, how you provide licensing, what you expect users to do with those licenses that you’ve provided them, what you expect users to achieve on their own.
There’s a whole lot happening out there. Really, to the extreme, it almost seems that if a system is not a system of record, whether for SOX compliance, for a listed company, or just for correct accounting, it’s really at high risk.
And even systems of record are now being challenged in terms of they’re not fun, they’re not easy to use.
And organizations want them to be fun, easy to use, or at least users want them to be fun, easy to use, self-explanatory, and give me control and give me guardrails. And within those guardrails I will be able to actually find my own way.
David: What about the economics? How should CIOs think about the economics of AI when hardware scarcity and cloud GPU pricing are so imbalanced?
Ariel: That’s a good one. For now it seems that AI prices for end-user, enterprise-type use of AI are, while expensive, still cheap, because they are subsidized at a certain level by the companies providing them, just because there’s this crazy race to get as many users as possible into your ecosystem.
Having said that, you see these crazy investments happening. And everyone is talking about the investment that the giants are doing in hardware.
You need to pause there for a moment and recall that that investment is not only in hardware. To actually use that hardware they need to invest in power and in space, meaning data centers, meaning the ability to feed that hardware with enough power to run it.
So there’s a crazy arms race now happening between the giants of the world that are investing hundreds of billions of dollars in power, in space, and in GPUs or in hardware, which is creating scarcity.
Pushing prices for other types of hardware way up. Just memory as an example. At the end of 2025 and now the beginning of 2026, memory for general compute has gone way up, way, way, way up.
Just because it’s all being sent to the giants that are going AI crazy.
This is impacting a whole lot of things in AI and IT in general, and that’s taking it really, really top level and thinking about the very big picture.
And if an organization wants to use AI not only for corporate use as in LLMs and what today we almost call simple use cases of help me with grammar, help me with summarizing files, help me with drafting an email, but wants to take it to the next level to bring it into their product, that suddenly becomes an issue of cost.
And for a provider of software or a service, the ability to bring in AI at a reasonable price is a real, real, real challenge, not only of cyber, but also of classic IT work: finding hardware, installing that hardware, running that hardware, or going to the cloud, where it’s super expensive to run per token.
Because that is still really controlled by the three big clouds.
David: So is this a new challenge or is it just the scale of it is much bigger than it was prior to AI?
Ariel: It is a new challenge. The scarcity of power now has become a new thing.
You used to be able to easily find places to host your servers for anything you wanted to do on-prem, and you saw cloud prices constantly dropping at the beginning of the cloud era.
Cloud prices have now not only stabilized, but are even going up.
Ariel: But with the world of agents, you are putting an agent there that has autonomy. With that autonomy comes great responsibility. With that responsibility, do you want to give it to the agent or do you want to have some level of control? Identities give you those controls.
Adding to that, the proliferation of agents means that you have way more identities.
Again, looking at classic code, an application had one user, had a connection string, maybe three or four identities, connection strings, API keys associated with it. Each agent now will have an identity associated with it, will have the user interacting with it, an identity, maybe will have pass-through authentication.
This gets messy.
And then how do you know that a user took an action or an agent took an action? You want to make sure that you have control over all of that. You want to make sure that you know when it’s the actual user, when it’s an agent. You want to maybe have control over the pace that things happen.
You want to have rate limiting over your agents. You want to have that at the identity level. You want to have that at the user level.
Where do you do it?
So there’s a whole lot that has to happen there. And identities are really the big playing field where a lot of it will happen.
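The "rate limiting at the identity level" idea Ariel describes can be sketched in a few lines of Python. Everything here is hypothetical, the `Identity` and `RateLimiter` names and the numbers are invented for illustration; the point is that every action carries an identity, human or agent, and both the throttle and the audit trail key off it.

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    name: str
    kind: str                      # "human" or "agent", so logs can tell them apart
    max_actions_per_sec: float     # throttle set per identity, not per team

@dataclass
class RateLimiter:
    """Token bucket keyed by identity name, so one runaway agent
    can be throttled without touching anyone else's limits."""
    tokens: dict = field(default_factory=dict)
    last: dict = field(default_factory=dict)

    def allow(self, ident: Identity, now: float) -> bool:
        rate = ident.max_actions_per_sec
        bucket = self.tokens.get(ident.name, rate)
        elapsed = now - self.last.get(ident.name, now)
        bucket = min(rate, bucket + elapsed * rate)   # refill, burst capped at rate
        self.last[ident.name] = now
        if bucket >= 1:
            self.tokens[ident.name] = bucket - 1
            return True
        self.tokens[ident.name] = bucket
        return False

audit_log = []   # every attempt lands here: who, human or agent, allowed or not

def perform(ident: Identity, action: str, limiter: RateLimiter, now: float) -> bool:
    allowed = limiter.allow(ident, now)
    audit_log.append({"actor": ident.name, "kind": ident.kind,
                      "action": action, "allowed": allowed, "at": now})
    return allowed
```

An agent identity capped at two actions per second gets its first two calls through and the rest refused, and the audit log records each attempt either way, which is the "know when it's an agent" part.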
David: What’s an unintended consequence of agent systems that leaders generally aren’t thinking about yet?
Ariel: Unintended consequence is a good one.
If you think about an SRE, a site reliability engineer, and they are now working on a production system and they may be using one of the common tools that are out there today, when do you know if it’s the user and when do you know when it’s an AI agent running on behalf of that user?
Let’s say things happen and something went wrong. Did the user learn from it?
I’m not talking about firing anyone. I’m talking about accountability.
Can the user easily take accountability, learn from the mistake, and make sure that it doesn’t happen again?
Because an agent, unless it’s a machine learning algorithm that has some type of ability to learn, it’s just an agent and it doesn’t know that something went wrong and you don’t know to associate the issue with the agent.
How do you teach it for the next time?
So that’s one thing.
Another thing that can happen when organizations develop their own AI solutions, I’m not talking about the algorithms, I’m talking about chatbots or other things, because you get so many options, you get so much freedom for that agent or solution to run within the world of interactions.
Then you might get interactions that you did not plan for, or answers that you did not plan for, or situations where the system is used in a way that you did not intend to have it be used.
So those unintended consequences really can happen from a plethora of places.
And you want to make sure that you have at least visibility into what is human, what is not, how fast is it moving, and how fast can I detect any issues should they happen.
David: For some reason, I have a feeling that the mantra of your team is not, things happen.
Ariel: We want to make sure that the least amount of things happen. That is true. But it’s not if they happen, it’s really when they happen, because things will happen.
Bad things, be it downtime, bad SLA, those are incidents that happen.
So the world of IT is such where you want to always be learning. You already made that investment. I mean, you had the problem, it happened. This is now downtime. So it has a monetary value. It’s bad value.
You now want to make sure that that value doesn’t go to waste. It’s a learning experience.
If you know to learn from it, you can better yourself. You can be a better IT group. And that doesn’t matter if it’s an AI or a human. It’s just learning from experience and shooting for excellence.
David: If every AI agent becomes an identity, how should organizations redesign permissioning and guardrails?
Ariel: So when you kind of build the ability to have an identity for every agent and you have the ability to have just-in-time access and you have the ability to limit the world of an agent but give it also permissions where you need it, that is a guardrail within itself.
It also provides you with the controls that you need in place to make sure that AI doesn’t just run freely within the organization and do anything it wants.
You want to make sure that it’s not misused.
So that’s guardrails. Those are controls. Identity is a solution within itself. It’s control. It’s the ability to control the actions of AI.
Identities will be really at the heart of any control.
And beyond that, the classic paradigms still apply. You don’t want static tokens. You want just-in-time access. You want to make sure that your code pulls that permission, that authentication from a vault, from a special place.
You want to make sure that you are in control of when and why and how much anything and anyone in your organization, agents and humans, have access to any given system.
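The vault pattern Ariel mentions can be sketched quickly. This is a toy stand-in, not any real vault product's API; the `Vault` class and its `checkout` method are invented for illustration. The idea is that code never holds a static token: it checks out a short-lived credential just in time, and every checkout is recorded.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class Credential:
    value: str
    expires_at: float

    def valid(self, now=None) -> bool:
        return (time.time() if now is None else now) < self.expires_at

class Vault:
    """Toy secrets vault: issues short-lived credentials on demand,
    so nothing static ever lives in code or config files."""
    def __init__(self, ttl_seconds: float = 300):
        self.ttl = ttl_seconds
        self.issued = []   # audit trail: (identity, system, timestamp)

    def checkout(self, identity: str, system: str) -> Credential:
        now = time.time()
        self.issued.append((identity, system, now))
        return Credential(value=secrets.token_urlsafe(16),
                          expires_at=now + self.ttl)

def call_database(identity: str, vault: Vault) -> Credential:
    # Just-in-time: pulled at the moment of use, never hard-coded.
    cred = vault.checkout(identity, "orders-db")
    # ... authenticate with cred.value, do the work, let it expire ...
    return cred
```

Because the credential expires on its own, a leaked value loses worth quickly, and the `issued` list answers "when, why, and how much" for agents and humans alike.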
David: As we’ve established, everything is moving quickly.
Ariel: Yes.
David: Equally moving quickly to all of this is the evolving threat landscape. When you look at how organizations evaluate phishing risk today, for example, what do you think they’re getting right and what are they missing and what would you want them measuring instead?
Ariel: First, let’s say that AI has impacted phishing in a crazy way.
If two years ago you would say, oh, it’s easy to detect phishing. They have endless mistakes, be it grammar, be it language off color, off context, things like that.
AI has now made phishing way, way, way deeper and stronger.
And a lot of organizations go and have these phishing tests and make sure that people don’t click.
And the problem there is that it doesn’t really matter how many people clicked unless it was zero. Any number beyond that is a problem.
David: Right.
Ariel: And solving that is done only with really strong authentication.
And when I say strong authentication, I don’t mean just run-of-the-mill multifactor authentication, MFA or 2FA. I’m talking about FIDO-compliant authentication, where you want to make sure that credentials that you use that are entered into anything cannot be stolen.
You might also want to make sure that you’re not testing only for, let’s call it run-of-the-mill phishing.
Run-of-the-mill would be the attempt to gain access through the theft of credentials.
The other types of phishing include attempts to glean intelligence: asking questions about the organization, asking you to fill out a form about, say, your operating system, what you have installed on it, other things like that.
And you also want to make sure that you deal with the now, again, AI-enabled new type of phishing where you have fake users that just look real because AI made it so much easier now to create videos and to have a conversation with a fake face.
David: Right.
Ariel: And a fake video.
Suddenly phishing has moved into a much deeper field of social engineering and made social engineering that much simpler.
And again, the only way to deal with that is with good, strong authentication.
David: So then just digging a little bit more into that authentication, you had talked about FIDO-compliant authentication or FIDO-based options. What does it take for authentication to be truly phishing resistant today?
Ariel: So the FIDO standard comes there to really make it phishing resistant.
Ariel: It doesn’t mean that the user won’t make a mistake or will not provide credentials. It just means that these credentials cannot be used with a man in the middle. So they cannot be used by a third party to do anything. It’ll nullify that option.
So the need is for the organization to be really vigilant and not allow any IT service of any shape or form to use just plain old authentication. It has to be FIDO-compliant, and that is the only way you can really make sure that you are at a high, even near-perfect, level of phishing resistance.
But it starts with FIDO-compliant authentication and then moves on to a few additional controls for other vectors of attack that we have not mentioned here.
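Why origin binding defeats a man in the middle can be shown with a toy model. Real FIDO2/WebAuthn uses per-site asymmetric key pairs, so the HMAC below is a deliberate simplification, and all names here are invented. The key property survives the simplification: the authenticator signs the origin it actually talks to, so an assertion captured on a look-alike domain fails verification at the real server.

```python
import hashlib
import hmac
import json
import secrets

DEVICE_KEY = secrets.token_bytes(32)   # lives on the authenticator, never typed by the user

def authenticator_sign(origin: str, challenge: bytes) -> dict:
    """Roughly what a FIDO authenticator does: sign the origin it actually
    sees plus the server's challenge. On a phishing page, it is the
    phishing page's origin that gets signed."""
    client_data = json.dumps({"origin": origin, "challenge": challenge.hex()})
    sig = hmac.new(DEVICE_KEY, client_data.encode(), hashlib.sha256).hexdigest()
    return {"client_data": client_data, "sig": sig}

def server_verify(assertion: dict, expected_origin: str, challenge: bytes) -> bool:
    data = json.loads(assertion["client_data"])
    if data["origin"] != expected_origin or data["challenge"] != challenge.hex():
        return False   # signed for a look-alike domain, or a replayed challenge
    expected = hmac.new(DEVICE_KEY, assertion["client_data"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["sig"])
```

A credential phished on a look-alike domain is useless against the real origin, because the mismatched origin is baked into the signed data. That is what "cannot be used with a man in the middle" reduces to.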
David: About those other vectors of attack, what’s the craziest LLM-enabled attack vector you’ve come across lately and what does it suggest about the future?
Ariel: The craziness I’ve seen is really the use of LLMs to react fast to something a user or the organization posted, wrap it in really convincing organizational language, learn the organizational lingo, and go after a spear phish.
So in terms of phishing specifically and spear phishing, those attacks are now LLM-enabled, and it’s just crazy to see how fast they turn around, how deep they go, and how well they know the user, the user population, and your organizational lingo.
The other AI type of things that we see today, the craziest I’ve seen was the use of video where it was a combination of phishing, but not email phishing. It was an attack based on other mediums of communication, phone-based or WhatsApp-based or Instagram-based kind of things, where the attacker impersonated a high-ranking person in the organization.
That high-ranking person reaches out to you via an external medium, not a corporate medium. They use the LLM to get the right language, the right tone and everything. Then when you actually respond, they will use AI-generated video and AI-generated voice to communicate with you through voice messages and video, and it’s just really, really hard, because you’re getting this high-ranking person reaching out to you, maybe someone you’re not in close contact with, and they’re running social engineering backed with phishing, multiple vectors, and a lot of technology.
And for the attacker, the cost isn’t really high. Yes, it’s time-consuming, but it’s way less time-consuming than spear phishing was in the past. And with a few dollars, they are able to do a whole lot of social engineering that was just not possible a year ago.
David: And it sounds like it’s relentless. If they’ve got eyes on you as a target, they can just keep going and going and going until you finally click or respond.
Ariel: They can keep going and going and going and use different attack vectors and use different attack methods and use different personas and use different tactics.
The whole idea is to have the right controls in place, to have the right procedures in place where corporate communication must happen on corporate tools. After it happens on corporate tools, it has to be authenticated with the right tools in place, and the use of those tools must be limited to the correct identities, FIDO-compliant identities and environment.
In all the incidents that happened over the past few years, the attacker was able to get a foothold, or even a toehold, into the organization, use that to masquerade as another user and elevate their capabilities or permissions, or just use that specific environment to get additional information to help them with the next level of attack within the organization.
David: So then from AI-enabled attacks or AI-enhanced attacks, really, there’s also of course the human side of all this vibe coding. Where do you stand on citizen developers and more specifically, with AI lowering the barrier to building apps and automations, what does a safe, well-governed approach to citizen development look like inside of an enterprise?
Ariel: Well, it almost brings us full circle. We started with democratization, and citizen development is the, I don’t want to say ultimate democratization because it’s one facet of it, but it’s absolutely an important facet of it.
Vibe coding or citizen coding is a type of form of democratization, be it automations, be it full-out reports, be it the automation of procedures within the organization. That’s all wonderful.
Then come the controls: how do you approach that in a way that allows it within a really controlled environment?
So you need to choose your framework. For me, I would say that making sure you’re not using multiple tools from different vendors to do different pieces of that, because then you start having issues of integration in the ecosystem and security controls between the different systems and how do you limit them.
So for me, I would say it’s choosing an ecosystem, working within that ecosystem with the tools that are given, and start by limiting the amount of people that have access to it. Not because IT wants to control it. It’s just because you want to learn. You want the learning curve. You want to understand what comes out of it, what are the things that you need to control.
For example, if you have an automation that goes and hits an internal database, that’s okay. But if it hits it 20,000 times a second with 50-hour queries, that’s a problem.
So you need to understand, do you give access to internal systems from that ecosystem or not? If you do, how do you rate limit that access? How do you provide the right authentication, the right context into that system? Is it read-only? Is it read-write? If it’s read-write, is it read-write to anything, to just something? Is it all the time? Is it only some of the time?
These are things that you need to learn, and there is no one correct answer for all organizations. This is different for each and every organization, for the risk appetite and for the control environment that lives within that environment and for the culture of that environment and kind of IT department.
So I would say start small. Start with a limited group of people, but do start. Make sure that you have an environment that you can track to start with. Learn from issues that come up or from requests from the users of how you can provide more and more connectivity and what would be the right connectivity level that is good for you and you’re comfortable with providing that you won’t have unintended consequences with.
And eventually make sure that the users know that they can do things on their own. The more users you have that can do things on their own, the more IT can do strategic things for the organization.
David: So there may not be one correct answer for this either. But generally, how can leaders balance productivity and innovation with identity-centric guardrails?
Ariel: That is true. There is no one correct answer for all. Yet it is super clear that when you have the identity guardrails, you have a lot of the solution already in place.
If you have the ability to read-only information and not just it’s always read-write, you’re already at a point where you’re saying, okay, worst case you got to read information, but you didn’t change it.
I’m not saying, again, that’s good, because you might have read information that’s PII. But if you have a good controlled environment, you’re saying: for non-PII information, read whatever you want. Maybe you cannot create reports that then become a system of record, because it’s still untested.
And for a system of record, for something that is regulated, for something that goes into financial reporting, you will need IT or you will need other IT guardrails.
But for anything in the middle, you’re good to go. I allow you to access something with read-write, but you can only change specific fields in specific locations. This isn’t free-for-all.
So identity really provides you with the ability to limit what the agents can do within the organization. After that limiting factor, it gives you also traceability and it provides users with a safe environment that if their agent goes wild, you can easily trace it, shut it down, reduce permissions, and respond to it.
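As a rough illustration of these identity guardrails, here is a hypothetical deny-by-default policy check in Python. The `AccessPolicy` shape, the field names, and the limits are all assumptions rather than any particular product's model; it just encodes read-only versus read-write, which fields may be changed, and a rate limit per identity.

```python
from dataclasses import dataclass, field

@dataclass
class AccessPolicy:
    identity: str
    mode: str                                  # "read" or "read-write"
    writable_fields: set = field(default_factory=set)
    max_queries_per_minute: int = 60

def check(policy: AccessPolicy, op: str, fields: set, queries_this_minute: int):
    """Deny by default; every denial comes with a traceable reason."""
    if queries_this_minute >= policy.max_queries_per_minute:
        return False, "rate limit exceeded"
    if op == "read":
        return True, "read allowed"
    if policy.mode != "read-write":
        return False, "identity is read-only"
    blocked = fields - policy.writable_fields
    if blocked:
        return False, f"fields not writable: {sorted(blocked)}"
    return True, "write allowed"
```

A citizen developer's agent scoped to read-write on a single made-up `status` field can update that field, is refused when it touches anything else, and is throttled outright when it hammers the database, which mirrors the "only specific fields in specific locations" idea.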
David: You’ve already given a lot of actionable insights and suggestions. Is there anything else that fast-moving organizations bringing AI into their workflows should be considering now and what does responsible adoption actually look like?
Ariel: Considering the wish to adopt AI fast, I would say everyone wants to adopt the best AI out there and FOMO is exactly the motivating force.
You want to make sure that you don’t commit to long contracts with AI providers yet. Start with one. It doesn’t matter, but start with a short commitment.
Move from that to measuring what matters, which is outcome. Not how many users are using AI, but what impact did it have on the organization.
People will say, oh, I must have this or that AI widget. Are you saying it saves time? Yes, it saves this gazillion amount of time.
Okay, so you don’t need to hire next year.
Oh no, I still need all these people.
So what did the AI agent do for you? What did it bring? What value?
So try to measure return on investment. Try to measure time to market, time to impact, time to perform tasks, be it whatever it is. It needs to be measured with the business impact in mind and not with just we’re doing AI for the sake of AI because everyone’s doing AI.
That’s not the reason you do AI. It’s because there’s a business reason behind it. It’s because it has a business impact. It betters something there.
Think of it almost like a spell-checker today. You wouldn’t fathom working without a spell-checker. You wouldn’t go back to, okay, I’ll check all my spelling with a dictionary, because that’s going to take too much time.
So you’re going, okay, I’ll use some method of technology that will check my spelling. It can be, by the way, AI, but it can be just a spell-checker.
So that is a very clear time-saver. But how does that now translate into business return on investment?
So you’re saying instead of having this floor full of editors that now check every marketing email that I send out, I can write my own marketing email. I can give it to an AI to check my grammar, to make it sound American English or British English or whatever English that I want.
I’m confident enough after I review it again that it is good. I can maybe have less of a B or C time investment and my time to market, my turnover time for an email, is way faster.
So the measurement is the business impact and the business outcome.
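One hedged way to put numbers on "measure the business outcome" is a back-of-the-envelope ROI ratio. The formula and every figure in the example are hypothetical; a real measurement would need validated time savings, not self-reported ones, which is exactly the gap in the "gazillion amount of time" exchange above.

```python
def ai_roi(hours_saved_per_user_month: float, users: int,
           loaded_hourly_cost: float, license_cost_per_user_month: float) -> float:
    """Return on investment as a ratio: (benefit - cost) / cost.
    Benefit = time genuinely saved, priced at loaded labor cost."""
    benefit = hours_saved_per_user_month * users * loaded_hourly_cost
    cost = license_cost_per_user_month * users
    return (benefit - cost) / cost
```

With 100 users each genuinely saving 4 hours a month at a $60 loaded hourly cost against a $30 per-user license, the ratio comes out to 7.0; feed the same formula self-reported savings and it will happily launder FOMO into ROI.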
David: What’s the equivalent of a one-year contract with an AI vendor in dog years?
Ariel: That is a wonderful comparison.
I would say that a one-year contract is already a long contract. If you’re just starting out, if you have already started on your journey and you have a good idea of the ecosystem you want, then committing for a full one year with that provider makes absolute sense. And if you are extremely sure, 18 months.
You need just to make sure that you document really well what works, what doesn’t work for you in terms of agents, ideas, things that you have done.
Because in today’s amazing world of AI agents, AI providers, gen-AI solutions and so on, the idea is what counts. The execution can be done in different systems. You might need to replicate your execution, you might need to convert.
Those are all doable things. But the idea, if it actually works, it will work on multiple platforms. You just need the time to convert.
And there might be benefits of cost or there might be benefits of improved algorithms within a different platform.
So document what you did. Document what worked best, and the idea lives on.
David: How should organizations rethink cloud and SaaS risk in a world where the perimeter is gone and software supply chain risks are growing?
Ariel: Oh, so that is a very, very large topic to cover. I would say a few things that make sense to me at least.
Supply chain is a huge issue. It’s not changing. It’s there with SaaS providers, it’s there with open source, it’s there with closed source that uses open source. Supply chain is a huge conundrum that needs resolving.
Any organization needs to really be very responsible with creating a software bill of materials, with creating a good map of what is used and is not used within the organization.
And then let’s go into AI for a moment and use AI tools just to kind of create this list or create alerts of: this is my list of software that I’m using. Have there been alerts in the last 24 hours for any of my third-party supply chain? Has something happened? Do I need to take action?
Just as a fun example of an agent that someone can automate and write within an organization to provide good intelligence.
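A toy version of that SBOM-checking agent might look like the Python below. The SBOM, the package names, and the advisory entry are all made up, and a real agent would query a live feed (for example OSV or a vendor feed) rather than a local dict; the shape of the loop is the point.

```python
import json

# A trimmed CycloneDX-style SBOM; names and versions are invented.
sbom_json = """{
  "components": [
    {"name": "leftpadx", "version": "1.2.0"},
    {"name": "fastjsonish", "version": "3.1.4"}
  ]
}"""

# Stand-in for a live advisory feed. A real agent would call out to a
# service here and could run on a schedule ("alerts in the last 24 hours").
advisories = {
    ("leftpadx", "1.2.0"): "hypothetical advisory: remote code execution, upgrade to 1.2.1",
}

def check_sbom(sbom_text: str, feed: dict) -> list:
    """Return an alert line for every SBOM component with a known advisory."""
    sbom = json.loads(sbom_text)
    alerts = []
    for comp in sbom["components"]:
        key = (comp["name"], comp["version"])
        if key in feed:
            alerts.append(f"{comp['name']} {comp['version']}: {feed[key]}")
    return alerts
```

Run daily, this answers Ariel's three questions, has something happened in my third-party supply chain, which component, and do I need to take action, without anyone manually reading advisories.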
There is no one good solution for supply chain attacks. It has become a very big issue in the world of cyber. It’s definitely in the top 10 of most CIOs that I speak to.
So the idea is that you want to have controls, you want to have alerts, and you want to make sure that you’re monitoring the internet out there and the situation to make sure that you know when you have a problem with third parties.
That being said, you also want to limit your exposure as much as possible.
So if you’re thinking of any SaaS solution you’re using, any plugin into a SaaS solution, any infrastructure or cloud solution out there, setting the right type of permissions for the third parties, for anything you add that can impact your data, be it through permissions, identities, or configurations, will go a very long way in reducing the exposure.
There are different types of solutions for different types of technologies.
So if you’re thinking of ERP systems or CRM systems that might have 20, 30, 40 plugins that can access them, there is first the platform security itself. Let’s say that is a solved problem.
But now you have added 30 plugins that introduce additional risks and you need to manage them.
You need to maybe provide them access just in time. Maybe you need to reduce the amount of access that they have all the time to only the specific fields and not give them admin access to your platform.
So there are a lot of things that can and should be done there. And AI can help you actually find those.
So make sure to utilize agents and make sure to document what you have so that you can react in time. With some luck, react before time.
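The just-in-time, least-privilege idea for plugins can be sketched as a short-lived grant scoped to specific fields instead of a standing admin credential. All names here are illustrative, not any particular ERP or CRM platform’s API:

```python
# Sketch of just-in-time plugin access: a grant that is scoped to specific
# fields (never admin) and expires on its own. Names are hypothetical.

import time

class JITGrant:
    def __init__(self, plugin: str, fields: set, ttl_seconds: float):
        self.plugin = plugin
        self.fields = fields  # only these fields are ever readable
        self.expires_at = time.monotonic() + ttl_seconds

    def allows(self, field: str) -> bool:
        """True only while the grant is alive and the field is in scope."""
        return time.monotonic() < self.expires_at and field in self.fields

# A CRM mail plugin gets five minutes of access to two fields, nothing more.
grant = JITGrant("crm-mailer-plugin", {"email", "first_name"}, ttl_seconds=300)
print(grant.allows("email"))    # in scope while the grant is live
print(grant.allows("salary"))   # out of scope, always denied
```

Once the grant expires, every check fails and the plugin has to request access again, which is the point: no standing permissions to forget about.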
David: Ariel, I feel like we’ve solved all these challenges, and so let’s move over from supply chain attacks to something a little more personal.
To wrap up the episode, I know you’re deep into martial arts training outside of work. What draws you to that and does it give you anything you bring back into your CIO role?
Ariel: That is so true. I don’t know where you get your intelligence from, but that is human intelligence.
David: Oh, we’ve got a crack research staff here.
Ariel: Research staff did their job.
David: Yeah.
Ariel: Sometimes I come black and blue to the office, which is actually a great conversation starter.
But actually I’d say that from martial arts you bring in a lot of humbleness. It’s a one-on-one contact sport and you’re in charge of your own game.
So you get humble pie every day that you practice because there’s always someone better. There’s someone to learn from.
And I bring that with me into the office every day. Make sure to be humble and to listen and to learn from any interaction that I can.
And I’d say that the second thing I bring in is the ability to hear and react to feedback.
In any sport where you’re trying to better yourself, you get a ton of feedback, and it’s always personal. It’s always you.
And once you get that continuous feedback of, oh, you could do this better, you could do this better, you could do this better, you can bring that into the office.
And then any interaction with feedback becomes, wow, I can better myself. It’s not that I’m a bad person or doing something bad. It’s just that I can’t see myself from the outside.
Only others can see me. Others can see me if they feel comfortable enough and provide me this gift of feedback.
And I can actually pick it up and take it and better myself. What a wonderful thing that is.
David: And you’re working towards a black belt, is that right?
Ariel: Yes. I’m working towards my black belt, hopefully soon.
David: A black belt in what particular area of martial arts?
Ariel: Oh, it’s mixed martial arts. So it means that I’m doing anything from Brazilian jiu-jitsu to Krav Maga to karate to judo.
It’s really mixed martial arts, and the amount of mental elasticity it takes is crazy, because you’re constantly shifting between tactics, strategies, and positions.
It can be striking, it can be groundwork, it can be grappling. It’s a whole world of mixed martial arts.
David: And you’re grounding yourself there in something that is very low-tech outside of work. I like that.
Ariel: It’s very low-tech. And the fun part is maybe this didn’t come up in your deep research.
I’m actually doing it with my family members so we get to have fun, good time together, practice something together and have a very bonding experience together.
So hopefully we’ll actually also do our black belt together.
David: And bonding over those black and blues as well, I imagine.
Ariel: Bonding over those black and blues.
David: Absolutely. Ariel Pisetzky, CIO at CyberArk. Thank you so much for coming onto the podcast. This has been really, really great.
Ariel: Oh, it was an immense pleasure for me. Thank you so very much.
David: All right, there you have it. Thanks for listening to Security Matters.
If you like this episode, please follow us wherever you do your podcast thing so you can catch new episodes as they drop.
And if you feel so inclined, please leave us a review. We’d appreciate it very much, and so will the algorithmic winds.
What else? Drop us a line with questions and comments, and if you’re a cybersecurity professional with an idea for an episode, we’d love to hear it.
Our email address is [email protected]
We hope to see you next time.
Ariel: For a lot of organizations, the first question is, how do we get our cloud spend under control? And now you also want AI.
AI is going to push it even further, because AI spend is like an open faucet, except instead of water it runs tokens, and those tokens run with little control over them.
What you could do on-prem cheaper, or what you could do yourself, has now gone further away because it’s that much harder to find hardware to do it economically, to find space and power to host that hardware, and so on.
So it’s become a new profession in the IT world, though it draws on the very old profession of running data centers: finding and brokering capacity in different data centers, and tying it all together into something a business can actually use in a cost-efficient way.
David: Is there a major common strategic mistake you see leadership teams making when planning for AI?
Ariel: That’s actually a very interesting question. If you’re thinking of AI strategy, it’s almost like everyone says, oh, we have an AI strategy. We want to do an AI initiative. We want to do something.
But then get bogged down by limitations. Oh, we were afraid. We cannot do this. We cannot do that.
And while there is a lot of concern and a really genuine need for controls to be put in place and for cyber restrictions to be done correctly, on the other hand there is a lot of fear, uncertainty, and doubt.
There’s this common conception that LLMs, or AI more broadly, can keep no secrets.
And we continuously get demonstrations of how AI is being used in unintended ways. So that creates a lot of fear and holds back a lot of the organizations.
So I would say you need courage. You need to have the cyber plan together with the AI plan and not as an afterthought.
And you need to understand that there are different use cases. It’s not one solution for everything and everyone. There are different solutions. There are different things that you need out there.
And of course what is right today might not be right tomorrow. We touched upon the pace that technology is now moving at.
David: Right.
Ariel: And that crazy pace means that if you build something today, it could be twice as good tomorrow. Or there’s this FOMO, this fear of missing out: you do something today, and then suddenly the competing solution has this amazing widget, and you’re going, oh, I should have done it with the other solution.
And now I’m tied to it.
So steady on the course. I’d say the major things are: be bold, have a cyber strategy coupled with your AI strategy, and don’t act out of FOMO, out of the fear of missing out.
David: That is a really good point, and I don’t think we’ve had anyone talking about this technology in that way, and I think that’s very true.
Fear of missing out is definitely a compelling factor here.
Ariel: Absolutely.
David: And once organizations start unlocking AI at scale, things become complicated fast, especially with agents. So how do AI agents fundamentally change the identity landscape in 2026?
Ariel: So when you think of an agent, you think of this almost simple piece of code that just runs a task.
An agent also sounds like a service agent, or maybe it conjures these fun images in your head.
But the reality is that you have a non-human identity or a non-human piece of code running that has extreme autonomy beyond what we have been used to that can do many things within your organization.
Let me try to simplify that answer.
When you have a human running tasks, they run at human speed. They have some level of self-control (as long as it’s not a malicious user, of course), plus a set of policies and a level of comprehension that the user brings.
When you’re talking about an agent, the agent is a piece of code, or a solution with many agents talking to other agents, a chain of agents working together. Suddenly you have a situation that’s different from the past, where you would have a code path and that code path would have a decision tree.
And that decision tree was many times definitive, and not only definitive, it had a very easily traceable set of options it could take.
It could have bugs. You might have not thought all of them through, but it had a finite number of choices.
Now with a world of agents, you suddenly get this way more complex, much harder to map tree of choices.
It can go in multiple directions, not all of them you have actually planned for in advance, and that makes a huge difference.
It means that you have an agent that you give a specific task, but it’s a prompt. It’s maybe additional instructions.
It might be a huge long manifesto of a prompt, but at the end of the day it’s a free agent that might now get a different prompt, an additional prodding from the user.
And that’s why when we talk about cybersecurity of agents, we talk a lot about guardrails and we talk a lot about identities.
Making sure that on the one hand you have controls over what the agent is supposed to do at the agent level, and then at the identity level you want to make sure that the agent doesn’t have just any permission to do anything out there.
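The two layers Ariel describes, a guardrail on what the agent is supposed to do and an identity-level permission check, can be sketched as a deny-by-default lookup. Agent names and actions here are hypothetical:

```python
# Sketch of identity-level controls for agents: every non-human identity has
# an explicit allowlist of actions, and everything else is denied by default.
# Agent identities and action names are illustrative only.

PERMISSIONS = {
    # identity -> actions this non-human identity may perform
    "email-draft-agent": {"draft_email", "read_contacts"},
}

def authorize(agent_id: str, action: str) -> bool:
    """Deny by default: unknown agents and unlisted actions are refused."""
    return action in PERMISSIONS.get(agent_id, set())

print(authorize("email-draft-agent", "draft_email"))  # explicitly granted
print(authorize("email-draft-agent", "send_email"))   # never granted: a human reviews and sends
print(authorize("rogue-agent", "draft_email"))        # unknown identity, denied
```

The key design choice is that absence means denial: an agent that was never registered, or an action that was never listed, simply cannot happen, no matter what the prompt asks for.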
I’ll give you a fun example.
If you are now writing an email, then you’re in control.
But if you’re telling the agent, be it any of the products out there, let’s not name any one of them. They’re all great.
David: Yeah.
Ariel: And you’re saying, please write an email with this, that, or the other.
I do not think we’re at the point where you’re saying, and send it when you’re done without me reviewing it.
David: Right.