December 16, 2025

EP 21 – When attackers log in: Pausing for perspective in the age of instant answers

In this episode of Security Matters, host David Puner welcomes back David Higgins, senior director in CyberArk’s Field Technology Office, for a timely conversation about the evolving cyber threat landscape. Higgins explains why today’s attackers aren’t breaking in—they’re logging in—using stolen credentials, AI-powered social engineering, and deepfakes to bypass traditional defenses and exploit trust.

The discussion explores how the rise of AI is eroding critical thinking, making it easier for even seasoned professionals to fall for convincing scams. Higgins and Puner break down the dangers of instant answers, the importance of “never trust, always verify,” and why zero standing privilege is essential for defending against insider threats. They also tackle the risks of shadow AI, the growing challenge of misinformation, and how organizations can build a culture of vigilance without creating a climate of mistrust.

Whether you’re a security leader, IT professional, or just curious about the future of digital trust, this episode delivers actionable insights on identity security, cyber hygiene, and the basics that matter more than ever in 2026 and beyond.

David Puner: You are listening to the Security Matters podcast. I’m David Puner, a senior editorial manager at CyberArk, the global leader in identity security.

A seasoned IT manager was having one of those days. Reports due back by noon, a grinding commute among the no-signalers — you know who you are. Back-to-back meetings, deadlines slipping, and a steady stream of fires to put out. There was barely a moment to think. Just another day.

Really late in the afternoon, our IT manager finally settled into a brief moment of calm.

That’s when the inbox lit up — an urgent email from his daughter’s school. He quickly scrutinized it. The logo looked right. The wording sounded right. But after a day like this, and with an email this convincing and personal, his usual caution wasn’t enough.

He clicked.

That’s all it took. No malware alert. No dramatic breach. Just an AI-powered piece of social engineering exploiting the very trust and mental shortcuts we develop when relying on instant answers.

As our guest today puts it, anyone can fall for social engineering on a busy day — especially as AI tools make it easier to trust quick answers and harder to pause for critical thinking.

That notion is at the heart of my conversation with David Higgins, senior director in CyberArk’s Field Technology Office. He’s been tracking how our growing reliance on AI is reshaping attacks — making them faster, more convincing, and more likely to succeed.

As we let our guard down today on Security Matters, David explains what these shifts mean for identity trust and the security foundations organizations need.

Before AI tools scale across the enterprise, tap the brakes and use your signals, please. Let’s get into it with David Higgins.

David Puner: David Higgins, senior director in CyberArk’s Field Technology Office. Welcome back to the podcast.

David Higgins: Thank you very much. Glad to be back. I can’t believe it’s been so long since we last spoke.

David Puner: Yeah, it is kind of crazy. The last time you were on was January 2023, when the podcast was called Trust Issues.

What’s been going on in the Field Technology Office this year? You guys have been busy.

David Higgins: Pretty busy — busy. I think every year there’s always something new that’s a challenge for our customers and the organizations we work with around identity.

The big topic, as you can well imagine — and no doubt you’ve had a few guests talking about this as well — is the impact of AI.

David Puner: Oh yeah. Just one or two.

David Higgins: One or two, right? And I think in the identity space, everyone’s just trying to wrap their head around it and understand what it means. So yeah, a lot of conversations around that over the last couple of months.

David Puner: Yeah, and I look forward to speaking with you in depth about that as we go here.

But to bring it back to what we were talking about in January 2023 — which, incidentally, was one month before this whole generative AI thing really started en masse, or at least before everybody became aware of it — you and I were talking about securing critical infrastructure and nation-state attacks.

Now, almost three years later, what major shifts have you seen in the cyberthreat landscape that you think are most worthy of note?

David Higgins: Look, I’m sure we’ll talk more about the implications of AI. It’s something we weren’t talking about much back then. As you say, generative AI was certainly getting attention, but the cybersecurity aspect not so much — and we’re certainly seeing that now.

Cybercrime is still very much present, and it’s become a fully fledged industry. Attackers are splitting into specialisms and coordinating in ways we perhaps haven’t seen before, which poses different challenges.

Social engineering continues to be a focus, and identity — that’s no different than what we spoke about a few years back.

I hear the cliché term — and I apologize for using cliché terms — but I hear this a lot from security leaders now. The way they see it, and the way they worry, is that people aren’t breaking into organizations anymore — they’re logging in.

I don’t think I used that term before, so that’s something new since we last spoke.

David Puner: All right. Well, we waited three years to get that, and I think it was worth it. Thanks for coming onto the podcast. Great speaking with you.

So back then, January 2023, critical infrastructure and nation-sponsored threats were front and center. Do they still top the list, or have things like AI-driven misinformation and insider risks related to AI taken over as the critical threats organizations worry about most?

David Higgins: It’s still there for sure. Again, the socioeconomic climate that we talked about in 2023 is still there — if anything, things have probably gotten a little murkier. There’s a lot more tension in the world than there has been, and that puts critical infrastructure directly in the crosshairs.

There’s a lot more regulation and a lot more pressure on organizations in that space to beef up their cyber defenses. The world is more connected, and we’ve seen a lot of examples. But what AI and insider threats have done, if anything, is heighten that risk — for critical infrastructure providers, but also for industries across the globe — reshaping the challenges for defenders and attackers alike, and adding to the insider threat.

We’ve seen that play out quite heavily in 2025, and I think we’re going to see a lot more of that next year too.

David Puner: AI is making it easier than ever to get instant answers, but you’ve said elsewhere that it comes at the cost of eroding critical thinking. How could AI tools actually make people less inclined to double-check what they see?

David Higgins: There are a lot of examples of this. For those listening, just Google big mistakes or misinformation provided by AI — you’ll see a load of examples.

What it really comes down to is too much trust in the results it provides. We’re all using it, right? We’re all using it to help generate documents. I think heavy usage of it is almost acting like another abstraction layer on top of internet searches. We used to go directly to search engines and look for what we were searching for. Now we just go to AI, ask it a question, and it goes off, trawls the internet, trawls what it already knows, and provides some answers.

A lot of people take that at face value rather than validating the sources it’s come from. And that’s part of where I was coming from in that earlier statement you mentioned. People are taking what it says at face value, and we’ve got to remember AI doesn’t really understand what we’re telling it, nor what it’s processing.

So we have to be a little cautious when we regurgitate and reprocess the results it sends us.

David Puner: How do we do that? How do we, as a whole, know that that’s part of the equation and do that extra work?

David Higgins: It reminds me — sorry to give you a boring story — of doing history at school. When you’re taught about things that happened in the past, often ancient history, they tell you that to validate history, you need multiple sources. You validate facts against differing opinions and different media.

I’m not saying you do that for every single thing, because people will say, well, what’s the point of using AI then? But you do need to be cognizant of the sources it’s pulling from, and you need to ensure more trusted sources are being leveraged in your search context.

You can provide prompts to AI and say, “Could you validate this against this government site or this organization?” rather than blindly letting it trawl whatever sites it finds on the internet. Quite often, it trawls through Reddit and starts regurgitating people’s opinions as facts. That’s the worrying area — opinions becoming facts simply because that’s what it finds.
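To make that source check concrete, here is a minimal, hypothetical sketch of the kind of allowlist filter a team might run over an AI answer’s citations before treating them as fact. The domain lists and example URLs are illustrative assumptions, not a CyberArk feature or a recommendation of specific sites.

```python
# Minimal sketch: flag AI-cited sources that are not on a trusted allowlist.
# The allowlist entries and example citations below are illustrative
# assumptions, not an endorsement of any specific site or vendor feature.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {
    "nist.gov",   # government guidance
    "cisa.gov",
    "owasp.org",  # nonprofit security project
}

OPINION_DOMAINS = {"reddit.com", "x.com"}  # useful context, but opinion, not fact


def classify_citation(url: str) -> str:
    """Return 'trusted', 'opinion', or 'verify manually' for a cited URL."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
        return "trusted"
    if any(host == d or host.endswith("." + d) for d in OPINION_DOMAINS):
        return "opinion"
    return "verify manually"


if __name__ == "__main__":
    # Example citations an AI answer might surface (hypothetical URLs).
    for url in [
        "https://www.nist.gov/cyberframework",
        "https://www.reddit.com/r/sysadmin/comments/example",
        "https://some-random-blog.example/ai-advice",
    ]:
        print(f"{classify_citation(url):>16}  {url}")
```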

David Puner: From a security perspective, why is the erosion of skepticism so dangerous? Phishing is still one of the top ways breaches happen, and now attackers have AI to whip up deepfake emails, voices, and videos that look totally legitimate. Are we basically at a point where anyone could fall for a well-crafted scam?

David Higgins: If you’d asked me this question three or five years ago, I would’ve said the same thing — anyone, on a bad day, can fall victim to social engineering. We’re rushed. We’re bouncing between things. We were talking off air about how I went from one webinar and then had to quickly go pick up my kids and my dog. You’re rushing around, and that’s when your guard is down.

On any particular day, someone could fall victim to social engineering, even without AI. The worrying thing is that AI heightens this because it can be much more crafted.

I remember a few years back a security leader telling me they’d received a well-crafted social engineering email that looked like it came from their child’s school. This was pre-generative AI. What parent isn’t going to check that email or open that attachment?

That’s exactly what generative AI brings to social engineering. You can gather as much information as possible about an individual and craft something that’s not generic spray-and-pray, but absolutely nailed-on with highly specific details. And that’s what we’re going to see.

We’re going to see an increase in the success rates of social engineering, which is, of course, a worrying aspect for cyber defenders.

David Puner: Without a doubt, malicious actors can flood the internet with false narratives that AI platforms might pick up and inadvertently amplify. How do you see that playing out in the near future? Could a company’s chatbot end up spreading misinformation that an attacker planted?

David Higgins: Definitely. If you look at the more generally accessible AI tools, my concern is where they’re getting their data from. They can turn opinion or nonsense into fact and say, “This is how you solve that problem.”

We’ve already touched on this. There was the case with the lawyers in the US who used AI to validate their arguments, and the cases it cited were completely fictitious. They didn’t exist.

Then you’ve got organizations building internal AI tools based on their own data sets. That’s another area of concern. A lot of the AI conversation focuses on how attackers use it, or how defenders use it. But we also need to consider AI itself as a victim.

If an attacker gets into an organization’s environment and manipulates it — poisons the data, hijacks the system — then that chatbot could start sharing things, saying things, or encouraging actions it absolutely shouldn’t be doing.

David Puner: I want to ask you about how organizations can counter this threat, but before that, to stay on this thread — when talking about false narratives and the potential for AI to spread misinformation — what are the ripple effects on organizations, or the potential ripple effects?

David Higgins: Yeah, look, I think it’s broad. The risks are pretty broad, right? From the obvious reputational damage all the way to loss of IP. What are you allowing these AI bots, agents, et cetera, to query and access? And if they’re revealing information they shouldn’t, are we losing IP?

If it’s producing misinformation, we’re talking about brand damage. We’re talking about loss of revenue. I believe the implications are pretty broad in terms of the impact this could really have.

The way I look at it is there’s been a lot of talk in cybersecurity about supply chain attacks — something we’ve needed to be concerned about. Our investment in AI has to be looked at through that same lens. 2026 could be the year AI itself becomes the target, and we see what the impact could be if it starts saying things it shouldn’t be saying or revealing things it shouldn’t reveal. That has to be on the radar for a lot of security leaders right now.

David Puner: So a lot of hypothetical scenarios, a lot of unknowns, but also a lot happening right now. How should — or can — organizations counter this threat? Is it about training employees to stay skeptical and verify sources, or are there technical steps like AI content detection or stricter verification steps to make sure employees don’t blindly trust every AI-generated answer or realistic-looking email that lands in their inbox?

David Higgins: There is absolutely a user education piece that’s needed. As I’ve said, there’s almost too much trust in the results people get from AI interactions. There’s a willingness to trust what it does.

On one hand, it’s great that people are willing to embrace new technology, but at the same time, there’s probably too much trust in it. So user education is important — not just talking about the risks, but helping people truly understand how it works.

AI, generative AI, large language models — they’re algorithms. They don’t truly understand. They don’t actually have the context of what you’re asking. Getting people to understand what they’re interacting with, what value these tools bring, and where their limits are is critical.

That awareness should trigger the instinct to validate sources. Most of the time when you interact with Gemini, Claude, ChatGPT, or whatever tool it may be, you’ll see a link or chain icon showing where the sources came from. Before taking that as fact, validate it against a trusted source. Most .org sites are generally reliable to an extent. Nonprofit organizations are usually good places to validate information.

Just double-check your homework. There’s also pressure on organizations internally as they embrace AI for optimization and better customer interaction. They probably need to do a better job of highlighting where these sources come from. It’s also an interesting area for marketing teams — how AI responds positively about your organization compared to competitors.

David Puner: Interesting. So you mentioned trust. You’re suggesting we shouldn’t take any of this at face value and that we should double-check all of it. Are you saying organizations and employees should practice zero trust when it comes to information they’re getting from LLMs, or is that even reasonable with the amount of queries and data flying around?

David Higgins: I like that, because the tagline of zero trust is “never trust, always verify.”

David Puner: Never trust, always verify.

David Higgins: Yeah. And I think that absolutely applies. AI is a great tool for trawling through huge volumes of data, condensing it, and putting it into a digestible format. But it should always come with verification — where did this data come from, and what sources is it pulling from?

We’ve already seen examples, like the legal firm that took AI-generated information at face value and used it in a legal proceeding. That obviously backfired massively. They were left saying, “We didn’t realize it could hallucinate or fabricate information.”

When it’s trawling through opinion-based platforms like Reddit and regurgitating that as fact, that’s where the danger lies. I like the phrase “never trust, always verify,” though maybe just “always verify” because people get scared by the “never trust” part. But “always verify” is a good tagline.

David Puner: What about clicking the links those LLMs suggest? You want the link to be something you trust. But even if it’s a trusted source, should you be clicking those links?

David Higgins: It’s a good point. We talked earlier about attackers manipulating data so AI says or reveals things it shouldn’t. If the result isn’t just misinformation but inappropriate links, that’s another concern.

Due diligence is absolutely needed. From the defender’s side, there’s always that balance between educating users to always verify and assuming people will still do the wrong thing. They’ll click the wrong link. They’ll open the wrong attachment.

So how do you make sure one mistake doesn’t expose the rest of the organization? That comes down to cyber resilience — having the right controls in place. AI just increases the likelihood of attackers gaining a foothold.

David Puner: A growing concern inside organizations is shadow AI — employees using AI tools at work that haven’t been approved. What kind of risks does that pose, and how can organizations allow employees to experiment with AI without completely losing control of security?

David Higgins: The risks are pretty broad. Shadow AI is very similar to the shadow IT problems we saw when SaaS adoption took off. Departments could grab a credit card, sign up for a service, and start using it.

We then had to ask: Where is our data? Is it stored securely? How does that third-party application process and protect it? The same applies to shadow AI. What are individuals sharing? Are they using it for development and code? Are we introducing insecure code or storing secrets in these environments?

That leads to data loss, non-compliance, and broader risk. There’s also the issue of unvetted business influence. If security teams haven’t validated the AI tool, what is it responding with? What is it sharing?

It’s similar to supply chain risk. What vulnerabilities exist in the AI tool itself? If security teams haven’t had the chance to review it, that tool may introduce unknown vulnerabilities. So we’re talking about data loss, non-compliance, misinformation, increased attack surface, and unknown threats.

David Puner: As you mentioned earlier, defenders are using AI too. How is AI changing the game for defenders and security teams, and how do we make sure that when we use AI on the defense side, we’re doing it responsibly and not creating new blind spots or vulnerabilities?

David Higgins: Whenever there’s an AI discussion, human oversight comes into the mix somewhere down the line, and that’s paramount when it comes to which AI tools are adopted.

Human oversight is going to be the key thing. Organizations should be assessing and validating which AI tools are right for them and which ones they’re going to make available. I think that’s why, more generally — and not just in security — organizations should have an AI code of practice that outlines what tools are available and can be used.

On the security side, AI brings a lot of benefit to defenders. We can drive more automation. We can get AI to make decisions faster. It takes some of the lower-level workload off teams so they can focus on higher-risk and more complex items.

It’s driving a lot of automation, and I don’t just mean spotting an indicator of compromise and issuing an automated response. AI can also help with day-to-day operational processes — administrative work — and simplify the lives of security administrators. There’s a lot of value to be brought from it.

That said, we need to make sure we’re not creating blind spots or vulnerabilities. Not every security or identity professional is a coder. People might think, “I’ll try some of that vibe coding everyone’s talking about.” Personally, I’ll admit I was useless at coding at university, so vibe coding has probably come about 20 years too late for me.

But when it comes to the security of the code being produced, those are real concerns. If security teams are using AI to write code to drive automation, human oversight and AI policy become really important.

David Puner: Two guys named David who can’t code, right here on Security Matters. I’m a words guy anyway. Let’s look at economic strain and the new insider threat.

Insider threats aren’t new, but they’re evolving. We used to worry mostly about a disgruntled employee with an axe to grind. Now we’re seeing more opportunistic insiders — employees tempted by bounties from cybercriminal groups. What’s driving that shift?

David Higgins: I think it’s a couple of things. There’s an understanding among attack groups — cybercriminals, really — that identity is the key. Certain individuals hold the keys to the kingdom that give access to the resources attackers want.

We also have cost-of-living challenges across the world. When cybercriminals offer lucrative incentives, that becomes something to worry about. Cybercriminals are opportunistic. They want the biggest return for the least effort.

If you can get an insider on your side, it’s easier to walk in than to trigger alerts by breaking in, probing vulnerabilities, or hacking credentials. If someone can just open the front door — or the side door — that’s much cleaner and much easier.

It also allows attacks to be replicated at a higher frequency. The sad reality is that for cybercriminals, it’s about making money. And the easier it is, the more attractive it becomes.

David Puner: They’re financially motivated. So how might a financially motivated insider attack unfold, and how have recent incidents foreshadowed this rising trend?

David Higgins: This may not surprise listeners, but I’m based in the UK. We’ve seen a lot of high-profile attacks carried out by various groups — Scattered Spider being one of them.

One tactic used in 2025 involved tricking help desks into resetting individuals’ credentials or MFA tokens by pretending to be that employee. The help desk resets access and provides login details.

That same attack group has now started putting out messages saying, “If you sell us your insider access and can prove it by running this command, we’ll give you a percentage of the money we make from the attack.”

When you hear about the ransoms being paid, we’re not talking about small amounts. This is how it starts — advertising broadly to people inside organizations of a certain size, in certain industries or geographies. If you’re willing to give access, they’re willing to pay a large percentage.

We’re not talking 5 percent. We’re talking 15 to 25 percent. That becomes very attractive given the socioeconomic pressures we talked about earlier.

For attackers, this bypasses reconnaissance, credential theft, and vulnerability hunting. They walk straight in. Once inside, the rest of the attack looks familiar — lateral movement, targeting key individuals.

They’re not going after just anyone. They’re targeting identity administrators, IT administrators, and people with elevated access rights.

There was a recent example where someone sold a session cookie for around $25,000. These are real examples. This isn’t theory anymore.

The real concern is that attackers are using valid credentials willingly handed over. That’s incredibly difficult to detect or prevent if you rely solely on detection-based approaches.

David Puner: You mentioned earlier that attackers advertise for insiders. How do these organized cybercrime rings actively recruit insiders from target organizations? What does that look like?

David Higgins: It’s usually messages on platforms like Signal, Telegram, or even X — social media platforms with high visibility. They’ll include a way to contact them directly.

It’s not particularly sophisticated. They’re not headhunting individuals on LinkedIn with tailored offers. It’s more about putting messages out on generic platforms and seeing who responds.

David Puner: Is it coy, or do they come right out and say what they’re looking to do?

David Higgins: They’re very direct. I’ve seen examples where they say exactly what they want — organizations in specific countries. Unsurprisingly, they exclude Russia, North Korea, Belarus, and China.

They specify organization size, the type of access they want, and even the commands they expect insiders to run as proof. They lay out the percentage they’ll pay if it works.

They’re not hiding. This is publicly available information.

David Puner: So how might this insider-for-hire trend change the way we need to approach insider defense?

David Higgins: Weirdly — and I say weirdly, but it’s not weird — it’s no different than what we would’ve done for a typical insider threat, or even when we’re looking to mitigate ransomware or data breaches. It really comes down to what’s being exploited.

They’re exploiting the fact that we often give users far too many permissions for far too long. Users have standing permissions and standing privileges in the environments they operate in, and that’s what they can sell. If you revoke that — if you move to zero standing privilege so users don’t have access until they need it — you can’t sell something you don’t have.

It becomes much harder to say, “Here’s my login, go use it,” because that identity has no permissions until it’s validated and authorized. The advice for solving this evolving insider threat challenge is no different than if someone said, “I’m worried about ransomware,” or “I’m worried about someone stealing my crown jewels.” Removing standing permissions makes lateral movement and identity compromise much more difficult and constrains what attackers can do.
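As a concrete illustration of that idea, here is a minimal sketch of a just-in-time access broker in which identities hold no standing permissions and a time-bound grant is issued only after authentication, a justification, and an approval. It is a hypothetical example under those assumptions, not CyberArk’s implementation or any specific product API.

```python
# Minimal sketch of zero standing privilege: identities hold no permissions
# by default, and access is granted only after checks, for a limited time.
# Names, checks, and the one-hour window are illustrative assumptions, not
# CyberArk's implementation or any specific product API.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class AccessGrant:
    identity: str
    resource: str
    expires_at: datetime


class AccessBroker:
    def __init__(self):
        self.active_grants: list[AccessGrant] = []  # nothing standing by default

    def request_access(self, identity: str, resource: str, justification: str,
                       mfa_passed: bool, approved_by: str | None) -> AccessGrant:
        # Authenticate, require a justification, and require an approval
        # (e.g., a change ticket or line-manager sign-off) before granting.
        if not mfa_passed:
            raise PermissionError("authentication (MFA) failed")
        if not justification.strip():
            raise PermissionError("a business justification is required")
        if approved_by is None:
            raise PermissionError("no approval recorded for this request")

        grant = AccessGrant(
            identity=identity,
            resource=resource,
            expires_at=datetime.now(timezone.utc) + timedelta(hours=1),  # time-bound
        )
        self.active_grants.append(grant)
        return grant

    def has_access(self, identity: str, resource: str) -> bool:
        now = datetime.now(timezone.utc)
        return any(g.identity == identity and g.resource == resource
                   and g.expires_at > now for g in self.active_grants)


if __name__ == "__main__":
    broker = AccessBroker()
    print(broker.has_access("alice", "prod-db"))  # False: no standing privilege
    broker.request_access("alice", "prod-db", "CHG-1234 patch rollout",
                          mfa_passed=True, approved_by="line-manager")
    print(broker.has_access("alice", "prod-db"))  # True, but only until expiry
```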

David Puner: And will that equally defend against an insider who has legitimate access and is willing to abuse it for money?

David Higgins: Yeah, absolutely. You mentioned earlier the classic disgruntled employee scenario — someone overlooked for a promotion who logs in and destroys data or shuts systems down. That happens because they have open, uncontrolled access.

What I’m describing is revoking those privileges so users don’t have standing access. When they need access, they have to come to a control point and say, “I legitimately need access right now.” That triggers checks — authentication, validation, justification — before access is granted.

Nothing is 100 percent foolproof, but it makes abuse much harder. It introduces friction. It requires multiple processes to be compromised. If someone has to request access every time, justify it, and have it approved — tied to change management or line manager approval — you can spot-check and validate that access is appropriate.

It’s like giving someone a key that opens every room versus requiring them to request access to each room. I slipped into a physical analogy there, but hopefully it makes sense.

David Puner: No, we loved it. I think it makes the highlight reel for the year.

David Higgins: No worries. Glad it landed.

David Puner: Let’s zoom out to organizational culture. How do you increase vigilance for insider risks without creating a culture of mistrust — without creating a Big Brother environment?

David Higgins: We’ve talked about zero trust today, and it’s not buzzword bingo. But I’ve heard leaders say they don’t use the term internally because employees feel it means, “You don’t trust us.”

The reality is, it’s not about trusting or not trusting individuals. When an individual becomes a digital user on a network, that’s what needs to be verified. There has to be some friction in security. There’s no such thing as frictionless security — that’s just zero control.

What matters is making sure the process isn’t so heavy or cumbersome that people push back. The controls are the same whether you’re defending against insider threats or external compromise. You frame it as protecting identities, not mistrusting employees.

Communication matters, and so does usability. Tie it into the broader threat landscape instead of singling out insiders, and make sure the process doesn’t disrupt day-to-day work.

David Puner: If I’m reading between the lines, these new threats ultimately reinforce the importance of getting the basics right.

David Higgins: Absolutely. Sometimes the identity basics don’t make the top agenda because they’re not seen as exciting. But if you rewind to 2023, or even the early 2000s, the fundamentals haven’t changed.

We still see identity compromise through authentication and authorization failures. Whether it’s AI, automation, or humans, the basics matter. Regulations like NIS2 reinforce that — good cyber hygiene, strong identity foundations, visibility, and accountability.

The message is simple: get the basics right.

David Puner: How do you see the relevance of identity security now compared to three years ago?

David Higgins: Its importance has always been there — it’s been a steady, linear rise. What’s changed is awareness. Identity has been a consistent target, but people avoided it because it felt complex. Today, technology makes it easier.

With cloud adoption and AI, identity security keeps resurfacing as the constant challenge. It’s now firmly on executive agendas.

David Puner: You’ve written about avoiding the phishing blame game. How can organizations build a culture where employees stay vigilant and feel safe reporting mistakes?

David Higgins: Awareness has increased dramatically. Years ago, cybersecurity wasn’t something people talked about. Now people know they’ll be targeted.

There used to be shame around falling for phishing. But people have bad days, and attacks are getting more sophisticated — fake videos, audio, and urgent scenarios.

Training matters, but things will still slip through. That’s why awareness must be paired with strong identity hygiene — zero standing privilege and “always verify” approaches. Attackers evolve constantly, and we have to assume breaches will happen.

David Puner: Looking ahead to 2026 and beyond, what gives you hope?

David Higgins: Awareness is increasing. People accept physical security controls without complaint, and cyber is catching up.

Security teams are improving at internal education and collaboration. We can also use technology like AI responsibly to reduce overhead and help defenders.

Identity security is finally being recognized as foundational. To quote my favorite Christmas movie: “Welcome to the party, pal.” And yes, it is a Christmas movie.

David Puner: David Higgins, this has been fantastic. Let’s do it again sooner.

David Higgins: Thank you. Looking forward to it.

David Puner: Thanks for listening to Security Matters. Follow us wherever you get your podcasts, leave a review if you’re inclined, and reach out with questions or episode ideas at [email protected].

We’ll see you next time.