{"id":220619,"date":"2025-12-16T15:07:38","date_gmt":"2025-12-16T15:12:08","guid":{"rendered":"https:\/\/www.cyberark.com\/podcasts\/ep-21-when-attackers-log-in-pausing-for-perspective-in-the-age-of-instant-answers\/"},"modified":"2026-04-14T13:10:09","modified_gmt":"2026-04-14T17:10:09","slug":"ep-21-when-attackers-log-in-pausing-for-perspective-in-the-age-of-instant-answers","status":"publish","type":"podcast","link":"https:\/\/www.cyberark.com\/es\/podcasts\/ep-21-when-attackers-log-in-pausing-for-perspective-in-the-age-of-instant-answers\/","title":{"rendered":"EP 21 &#8211; When attackers log in: Pausing for perspective in the age of instant answers"},"content":{"rendered":"<p>In this episode of Security Matters, host David Puner welcomes back David Higgins, senior director in CyberArk\u2019s Field Technology Office, for a timely conversation about the evolving cyber threat landscape. Higgins explains why today\u2019s attackers aren\u2019t breaking in\u2014they\u2019re logging in\u2014using stolen credentials, AI-powered social engineering, and deepfakes to bypass traditional defenses and exploit trust.<\/p>\n<p>The discussion explores how the rise of AI is eroding critical thinking, making it easier for even seasoned professionals to fall for convincing scams. Higgins and Puner break down the dangers of instant answers, the importance of \u201cnever trust, always verify,\u201d and why zero standing privilege is essential for defending against insider threats. 
They also tackle the risks of shadow AI, the growing challenge of misinformation, and how organizations can build a culture of vigilance without creating a climate of mistrust.<\/p>\n<p>Whether you\u2019re a security leader, IT professional, or just curious about the future of digital trust, this episode delivers actionable insights on identity security, cyber hygiene, and the basics that matter more than ever in 2026 and beyond.<\/p>\n<div class=\"transcript\" style=\"white-space:pre-line\">David Puner: You are listening to the Security Matters podcast. I\u2019m David Puner, a senior editorial manager at CyberArk, the global leader in identity security.<\/p>\n<p>A seasoned IT manager was having one of those days. Reports due back by noon, a grinding commute among the no-signalers \u2014 you know who you are. Back-to-back meetings, deadlines slipping, and a steady stream of fires to put out. There was barely a moment to think. Just another day.<\/p>\n<p>Really late in the afternoon, our IT manager finally settled into a brief moment of calm.<\/p>\n<p>That\u2019s when the inbox lit up \u2014 an urgent email from his daughter\u2019s school. He quickly scrutinized it. The logo looked right. The wording sounded right. But after a day like this, and with an email this convincing and personal, his usual caution wasn\u2019t enough.<\/p>\n<p>He clicked.<\/p>\n<p>That\u2019s all it took. No malware alert. No dramatic breach. Just an AI-powered piece of social engineering exploiting the very trust and mental shortcuts we develop when relying on instant answers.<\/p>\n<p>As our guest today puts it, anyone can fall for social engineering on a busy day \u2014 especially as AI tools make it easier to trust quick answers and harder to pause for critical thinking.<\/p>\n<p>That notion is at the heart of my conversation with David Higgins, senior director in CyberArk\u2019s Field Technology Office. 
He\u2019s been tracking how our growing reliance on AI is reshaping attacks \u2014 making them faster, more reliable, and more likely to succeed.<\/p>\n<p>As we let our guard down today on Security Matters, David explains what these shifts mean for identity trust and the security foundations organizations need.<\/p>\n<p>Before AI tools scale across the enterprise, tap the brakes and use your signals, please. Let\u2019s get into it with David Higgins.<\/p>\n<p>David Puner: David Higgins, senior director in CyberArk\u2019s Field Technology Office. Welcome back to the podcast.<\/p>\n<p>David Higgins: Thank you very much. Glad to be back. I can\u2019t believe it\u2019s been so long since we last spoke.<\/p>\n<p>David Puner: Yeah, it is kind of crazy. The last time you were on was January 2023, when the podcast was called Trust Issues.<\/p>\n<p>What\u2019s been going on in the Field Technology Office this year? You guys have been busy.<\/p>\n<p>David Higgins: Pretty busy \u2014 busy. I think every year there\u2019s always something new that\u2019s a challenge for our customers and the organizations we work with around identity.<\/p>\n<p>The big topic, as you can well imagine \u2014 and no doubt you\u2019ve had a few guests talking about this as well \u2014 is the impact of AI.<\/p>\n<p>David Puner: Oh yeah. Just one or two.<\/p>\n<p>David Higgins: One or two, right? And I think in the identity space, everyone\u2019s just trying to wrap their head around it and understand what it means. 
So yeah, a lot of conversations around that over the last couple of months.<\/p>\n<p>David Puner: Yeah, and I look forward to speaking with you in depth about that as we go here.<\/p>\n<p>But to bring it back to what we were talking about in January 2023 \u2014 which, incidentally, was one month before this whole generative AI thing really started en masse, or at least before everybody became aware of it \u2014 you and I were talking about securing critical infrastructure and nation-state attacks.<\/p>\n<p>Now, almost three years later, what major shifts have you seen in the cyberthreat landscape that you think are most worthy of note?<\/p>\n<p>David Higgins: Look, I\u2019m sure we\u2019ll definitely talk more about the implications of AI for sure. It\u2019s something we weren\u2019t talking about too much back then. As you say, generative AI was certainly getting attention, but the cybersecurity aspect not so much \u2014 and we\u2019re certainly seeing that now.<\/p>\n<p>Cybercrime was still very much present, and it\u2019s become a fully fledged industry. Different attackers are breaking down their specialisms and coordinating in ways perhaps not seen before, posing different challenges.<\/p>\n<p>Social engineering continues to be a focus, and identity \u2014 that\u2019s no different than what we spoke about a few years back.<\/p>\n<p>I hear the clich\u00e9 term \u2014 and I apologize for using clich\u00e9 terms \u2014 but I hear this a lot from security leaders now. The way they see it, and the way they worry, is that people aren\u2019t breaking into organizations anymore \u2014 they\u2019re logging in.<\/p>\n<p>I don\u2019t think I used that term before, so that\u2019s something new since we last spoke.<\/p>\n<p>David Puner: All right. Well, we waited three years to get that, and I think it was worth it. Thanks for coming onto the podcast. 
Great speaking with you.<\/p>\n<p>So back then, January 2023, critical infrastructure and nation-sponsored threats were front and center. Do they still top the list, or have things like AI-driven misinformation and insider risks related to AI taken over as the critical threats organizations worry about most?<\/p>\n<p>David Higgins: It\u2019s still there for sure. Again, the socioeconomic climate that we talked about in 2023 is still there \u2014 if anything, things have probably gotten a little murkier. There\u2019s a lot more tension in the world than there has been, and that puts critical infrastructure directly in the crosshairs for sure.<\/p>\n<p>There\u2019s a lot more regulation and a lot more pressure on organizations in that space to beef up their cyber defenses. The world is more connected. We\u2019ve seen a lot of examples. But what AI and insider threats have done, if anything, is just heighten that risk \u2014 for critical infrastructure providers, but also for all industries across the globe \u2014 bringing different challenges for defenders, attackers, and the attack landscape, as well as the insider threat.<\/p>\n<p>We\u2019ve seen that play out quite heavily in 2025, and I think we\u2019re going to see a lot more of that next year too.<\/p>\n<p>David Puner: AI is making it easier than ever to get instant answers, but you\u2019ve said elsewhere that it comes at the cost of eroding critical thinking. How could AI tools actually make people less inclined to double-check what they see?<\/p>\n<p>David Higgins: There are a lot of examples of this. For those listening, just Google big mistakes or misinformation provided by AI \u2014 you\u2019ll see a load of examples.<\/p>\n<p>What it really comes down to is too much trust in the results it provides. We\u2019re all using it, right? We\u2019re all using it to help generate documents. I think heavy usage of it is almost acting like another abstraction layer in internet searches. 
We used to go directly to search engines and look for what we were searching for. Now we just go to AI, ask it a question, and it goes off, trawls the internet, trawls what it already knows, and provides some answers.<\/p>\n<p>A lot of people take that at face value rather than validating the sources it\u2019s come from. And that\u2019s part of where I was coming from in that earlier statement you mentioned. People are taking what it says at face value, and we\u2019ve got to remember AI doesn\u2019t really understand what we\u2019re telling it, nor what it\u2019s processing.<\/p>\n<p>So we have to be a little cautious when we regurgitate and reprocess the results it sends us.<\/p>\n<p>David Puner: How do we do that? How do we, as a whole, know that that\u2019s part of the equation and do that extra work?<\/p>\n<p>David Higgins: It reminds me \u2014 sorry to give you a boring story \u2014 of doing history at school. When you\u2019re taught about things that happened in the past, often ancient times, they tell you that to validate history, you need multiple sources. You validate facts from differing opinions and different media.<\/p>\n<p>I\u2019m not saying you do that for every single thing, because people will say, well, what\u2019s the point of using AI then? But you do need to be cognizant of the sources it\u2019s pulling from, and you need to ensure more trusted sources are being leveraged in your search context.<\/p>\n<p>You can provide prompts to AI and say, \u201cCould you validate this against this government site or this organization?\u201d rather than blindly letting it trawl whatever sites it finds on the internet. Quite often, it trawls through Reddit and starts regurgitating people\u2019s opinions as facts. That\u2019s the worrying area \u2014 opinions becoming facts simply because that\u2019s what it finds.<\/p>\n<p>David Puner: From a security perspective, why is the erosion of skepticism so dangerous? 
Phishing is still one of the top ways breaches happen, and now attackers have AI to whip up deepfake emails, voices, and videos that look totally legitimate. Are we basically at a point where anyone could fall for a well-crafted scam?<\/p>\n<p>David Higgins: If you\u2019d asked me this question three or five years ago, I would\u2019ve said the same thing \u2014 anyone, on a bad day, can fall victim to social engineering. We\u2019re rushed. We\u2019re bouncing between things. We were talking off air about how I went from one webinar and then had to quickly go pick up my kids and my dog. You\u2019re rushing around, and that\u2019s when your guard is down.<\/p>\n<p>On any particular day, someone could fall victim to social engineering, even without AI. The worrying thing is that AI heightens this because it can be much more crafted.<\/p>\n<p>I remember a few years back a security leader telling me they\u2019d received a well-crafted social engineering email that looked like it came from their child\u2019s school. This was pre-generative AI. What parent isn\u2019t going to check that email or open that attachment?<\/p>\n<p>That\u2019s exactly what generative AI brings to social engineering. You can gather as much information as possible about an individual and craft something that\u2019s not generic spray-and-pray, but absolutely nailed-on with highly specific details. And that\u2019s what we\u2019re going to see.<\/p>\n<p>We\u2019re going to see an increase in the success rates of social engineering, which is, of course, a worrying aspect for cyber defenders.<\/p>\n<p>David Puner: Without a doubt, malicious actors can flood the internet with false narratives that AI platforms might pick up and inadvertently amplify. How do you see that playing out in the near future? Could a company\u2019s chatbot end up spreading misinformation that an attacker planted?<\/p>\n<p>David Higgins: Definitely. 
If you look at the more generally accessible AI tools, my concern is where they\u2019re getting their data from. They can turn opinion or nonsense into fact and say, \u201cThis is how you solve that problem.\u201d<\/p>\n<p>We\u2019ve already touched on this. There was the case with the lawyers in the US who used AI to validate their arguments, and the cases it cited were completely fictitious. They didn\u2019t exist.<\/p>\n<p>Then you\u2019ve got organizations building internal AI tools based on their own data sets. That\u2019s another area of concern. A lot of the AI conversation focuses on how attackers use it, or how defenders use it. But we also need to consider AI itself as a victim.<\/p>\n<p>If an attacker gets into an organization\u2019s environment and manipulates it \u2014 poisons the data, hijacks the system \u2014 then that chatbot could start sharing things, saying things, or encouraging actions it absolutely shouldn\u2019t be doing.<\/p>\n<p>David Puner: I want to ask you about how organizations can counter this threat, but before that, to stay on this thread \u2014 when talking about false narratives and the potential for AI to spread misinformation \u2014 what are the ripple effects on organizations, or the potential ripple effects?<\/p>\n<p>David Higgins: Yeah, look, I think it\u2019s broad. The risks are pretty broad, right? From the obvious reputational damage all the way to loss of IP. What are you allowing these AI bots, agents, et cetera, to query and access? And if they\u2019re revealing information they shouldn\u2019t, are we losing IP?<\/p>\n<p>If it\u2019s producing misinformation, we\u2019re talking about brand damage. We\u2019re talking about loss of revenue. I believe the implications are pretty broad in terms of the impact this could really have.<\/p>\n<p>The way I look at it is there\u2019s been a lot of talk in cybersecurity about supply chain attacks \u2014 something we\u2019ve needed to be concerned about. 
Our investment in AI has to be something we look at through that same lens. 2026 could be the year we see AI targeted from an attack perspective, and see what those impacts could be if it starts saying things it shouldn\u2019t be saying or revealing things it shouldn\u2019t reveal. That has to be on the radar for a lot of security leaders right now.<\/p>\n<p>David Puner: So a lot of hypothetical scenarios, a lot of unknowns, but also a lot happening right now. How should \u2014 or can \u2014 organizations counter this threat? Is it about training employees to stay skeptical and verify sources, or are there technical steps like AI content detection or stricter verification steps to make sure employees don\u2019t blindly trust every AI-generated answer or realistic-looking email that lands in their inbox?<\/p>\n<p>David Higgins: There is absolutely a user education piece that\u2019s needed. As I\u2019ve said, there\u2019s almost too much trust in the results people get from AI interactions. There\u2019s a willingness to trust what it does.<\/p>\n<p>On one hand, it\u2019s great that people are willing to embrace new technology, but at the same time, there\u2019s probably too much trust in it. So user education is important \u2014 not just talking about the risks, but helping people truly understand how it works.<\/p>\n<p>AI, generative AI, large language models \u2014 they\u2019re algorithms. They don\u2019t truly understand. They don\u2019t actually have the context of what you\u2019re asking. Getting people to understand what they\u2019re interacting with, what value these tools bring, and where their limits are is critical.<\/p>\n<p>That awareness should trigger the instinct to validate sources. Most of the time when you interact with Gemini, Claude, ChatGPT, or whatever tool it may be, you\u2019ll see a link or chain icon showing where the sources came from. Before taking that as fact, validate it against a trusted source. 
Most .org sites are generally reliable to an extent. Nonprofit organizations are usually good places to validate information.<\/p>\n<p>Just double-check your homework. There\u2019s also pressure on organizations internally as they embrace AI for optimization and better customer interaction. They probably need to do a better job of highlighting where these sources come from. It\u2019s also an interesting area for marketing teams \u2014 how AI responds positively about your organization compared to competitors.<\/p>\n<p>David Puner: Interesting. So you mentioned trust. You\u2019re suggesting we shouldn\u2019t take any of this at face value and that we should double-check all of it. Are you saying organizations and employees should practice zero trust when it comes to information they\u2019re getting from LLMs, or is that even reasonable with the amount of queries and data flying around?<\/p>\n<p>David Higgins: I like that, because the tagline of zero trust is \u201cnever trust, always verify.\u201d<\/p>\n<p>David Puner: Never trust, always verify.<\/p>\n<p>David Higgins: Yeah. And I think that absolutely applies. AI is a great tool for trawling through huge volumes of data, condensing it, and putting it into a digestible format. But it should always come with verification \u2014 where did this data come from, what sources is it pulling from?<\/p>\n<p>We\u2019ve already seen examples, like the legal firm that took AI-generated information at face value and used it in a legal proceeding. That obviously backfired massively. They were in the position of saying, \u201cWe didn\u2019t realize it could hallucinate or produce misinformation.\u201d<\/p>\n<p>When it\u2019s trawling through opinion-based platforms like Reddit and regurgitating that as fact, that\u2019s where the danger lies. I like the phrase \u201cnever trust, always verify,\u201d though maybe just \u201calways verify\u201d because people get scared by the \u201cnever trust\u201d part. 
But \u201calways verify\u201d is a good tagline.<\/p>\n<p>David Puner: Clicking the links in those LLMs when they suggest them \u2014 you want the link to be something you trust. But even if it\u2019s a trusted source, should you be clicking those links?<\/p>\n<p>David Higgins: It\u2019s a good point. We talked earlier about attackers manipulating data so AI says or reveals things it shouldn\u2019t. If the result isn\u2019t just misinformation but inappropriate links, that\u2019s another concern.<\/p>\n<p>Due diligence is absolutely needed. From the defender\u2019s side, there\u2019s always that balance between educating users to always verify and assuming people will still do the wrong thing. They\u2019ll click the wrong link. They\u2019ll open the wrong attachment.<\/p>\n<p>So how do you make sure one mistake doesn\u2019t expose the rest of the organization? That comes down to cyber resilience \u2014 having the right controls in place. AI just increases the likelihood of attackers gaining a foothold.<\/p>\n<p>David Puner: A growing concern inside organizations is shadow AI \u2014 employees using AI tools at work that haven\u2019t been approved. What kind of risks does that pose, and how can organizations allow employees to experiment with AI without completely losing control of security?<\/p>\n<p>David Higgins: The risks are pretty broad. Shadow AI is very similar to the shadow IT problems we saw when SaaS adoption took off. Departments could grab a credit card, sign up for a service, and start using it.<\/p>\n<p>We then had to ask: Where is our data? Is it stored securely? How does that third-party application process and protect it? The same applies to shadow AI. What are individuals sharing? Are they using it for development and code? Are we introducing insecure code or storing secrets in these environments?<\/p>\n<p>That leads to data loss, non-compliance, and broader risk. There\u2019s also the issue of unvetted business influence. 
If security teams haven\u2019t validated the AI tool, what is it responding with? What is it sharing?<\/p>\n<p>It\u2019s similar to supply chain risk. What vulnerabilities exist in the AI tool itself? If security teams haven\u2019t had the chance to review it, that tool may introduce unknown vulnerabilities. So we\u2019re talking about data loss, non-compliance, misinformation, increased attack surface, and unknown threats.<\/p>\n<p>David Puner: As you mentioned earlier, defenders are using AI too. How is AI changing the game for defenders and security teams, and how do we make sure that when we use AI on the defense side, we\u2019re doing it responsibly and not creating new blind spots or vulnerabilities?<\/p>\n<p>David Higgins: Whenever there\u2019s an AI discussion, somewhere down the line human oversight comes into the mix, and that\u2019s paramount in terms of what AI tools are adopted.<\/p>\n<p>Human oversight is going to be the key thing. Organizations should be assessing and validating which AI tools are right for them and which ones they\u2019re going to make available. I think that\u2019s why, more generally \u2014 and not just in security \u2014 organizations should have an AI code of practice that outlines what tools are available and can be used.<\/p>\n<p>On the security side, AI brings a lot of benefit to defenders. We can drive more automation. We can get AI to make decisions faster. It takes some of the lower-level workload off teams so they can focus on higher-risk and more complex items.<\/p>\n<p>It\u2019s driving a lot of automation, and I don\u2019t just mean spotting an indicator of compromise and issuing an automated response. AI can also help with day-to-day operational processes \u2014 administrative work \u2014 and simplify the lives of security administrators. There\u2019s a lot of value to be brought from it.<\/p>\n<p>That said, we need to make sure we\u2019re not creating blind spots or vulnerabilities. 
Not every security or identity professional is a coder. People might think, \u201cI\u2019ll try some of that vibe coding everyone\u2019s talking about.\u201d Personally, I\u2019ll admit I was useless at coding at university, so vibe coding has probably come about 20 years too late for me.<\/p>\n<p>But when it comes to the security of the code being produced, those are real concerns. If security teams are using AI to write code to drive automation, human oversight and AI policy become really important.<\/p>\n<p>David Puner: Two guys named David who can\u2019t code, right here on Security Matters. I\u2019m a words guy anyway. Let\u2019s look at economic strain and the new insider threat.<\/p>\n<p>Insider threats aren\u2019t new, but they\u2019re evolving. We used to worry mostly about a disgruntled employee with an axe to grind. Now we\u2019re seeing more opportunistic insiders \u2014 employees tempted by bounties from cybercriminal groups. What\u2019s driving that shift?<\/p>\n<p>David Higgins: I think it\u2019s a couple of things. There\u2019s an understanding among attack groups \u2014 really cybercriminals \u2014 that identity is the key way in. Key individuals have the keys to the kingdom that allow access to the resources attackers want.<\/p>\n<p>We also have cost-of-living challenges across the world. When cybercriminals offer lucrative incentives, that becomes something to worry about. Cybercriminals are opportunistic. They want the biggest return for the least effort.<\/p>\n<p>If you can get an insider on your side, it\u2019s easier to walk in than to trigger alerts by breaking in, probing vulnerabilities, or hacking credentials. If someone can just open the front door \u2014 or the side door \u2014 that\u2019s much cleaner and much easier.<\/p>\n<p>It also allows attacks to be replicated at a higher frequency. The sad reality is that for cybercriminals, it\u2019s about making money. 
And the easier it is, the more attractive it becomes.<\/p>\n<p>David Puner: They\u2019re financially motivated. So how might a financially motivated insider attack unfold, and how have recent incidents foreshadowed this rising trend?<\/p>\n<p>David Higgins: This may not surprise listeners, but I\u2019m based in the UK. We\u2019ve seen a lot of high-profile attacks carried out by various groups \u2014 Scattered Spider being one of them.<\/p>\n<p>One tactic used in 2025 involved tricking help desks into resetting individuals\u2019 credentials or MFA tokens by pretending to be that employee. The help desk resets access and provides login details.<\/p>\n<p>That same attack group has now started putting out messages saying, \u201cIf you sell us your insider access and can prove it by running this command, we\u2019ll give you a percentage of the money we make from the attack.\u201d<\/p>\n<p>When you hear about the ransoms being paid, we\u2019re not talking about small amounts. This is how it starts \u2014 advertising broadly to people inside organizations of a certain size, in certain industries or geographies. If you\u2019re willing to give access, they\u2019re willing to pay a large percentage.<\/p>\n<p>We\u2019re not talking 5 percent. We\u2019re talking 15 to 25 percent. That becomes very attractive given the socioeconomic pressures we talked about earlier.<\/p>\n<p>For attackers, this bypasses reconnaissance, credential theft, and vulnerability hunting. They walk straight in. Once inside, the rest of the attack looks familiar \u2014 lateral movement, targeting key individuals.<\/p>\n<p>They\u2019re not going after just anyone. They\u2019re targeting identity administrators, IT administrators, and people with elevated access rights.<\/p>\n<p>There was a recent example where someone sold a session cookie for around $25,000. These are real examples. 
This isn\u2019t theory anymore.<\/p>\n<p>The real concern is that attackers are using valid credentials willingly handed over. That\u2019s incredibly difficult to detect or prevent if you rely solely on detection-based approaches.<\/p>\n<p>David Puner: You mentioned earlier that attackers advertise for insiders. How do these organized cybercrime rings actively recruit insiders from target organizations? What does that look like?<\/p>\n<p>David Higgins: It\u2019s usually messages on platforms like Signal, Telegram, or even X \u2014 social media platforms with high visibility. They\u2019ll include a way to contact them directly.<\/p>\n<p>It\u2019s not particularly sophisticated. They\u2019re not headhunting individuals on LinkedIn with tailored offers. It\u2019s more about putting messages out on generic platforms and seeing who responds.<\/p>\n<p>David Puner: Is it coy, or do they come right out and say what they\u2019re looking to do?<\/p>\n<p>David Higgins: They\u2019re very direct. I\u2019ve seen examples where they say exactly what they want \u2014 organizations in specific countries. Unsurprisingly, they exclude Russia, North Korea, Belarus, and China.<\/p>\n<p>They specify organization size, the type of access they want, and even the commands they expect insiders to run as proof. They lay out the percentage they\u2019ll pay if it works.<\/p>\n<p>They\u2019re not hiding. This is publicly available information.<\/p>\n<p>David Puner: So how might this insider-for-hire trend change the way we need to approach insider defense?<\/p>\n<p>David Higgins: Weirdly \u2014 and I say weirdly, but it\u2019s not weird \u2014 it\u2019s no different than what we would\u2019ve done for a typical insider threat, or even when we\u2019re looking to mitigate ransomware or data breaches. It really comes down to what\u2019s being exploited.<\/p>\n<p>They\u2019re exploiting the fact that we often give users far too many permissions for far too long. 
Users have standing permissions and standing privileges in the environments they operate in, and that\u2019s what they can sell. If you revoke that \u2014 if you move to zero standing privilege so users don\u2019t have access until they need it \u2014 you can\u2019t sell something you don\u2019t have.<\/p>\n<p>It becomes much harder to say, \u201cHere\u2019s my login, go use it,\u201d because that identity has no permissions until it\u2019s validated and authorized. The advice for solving this evolving insider threat challenge is no different than if someone said, \u201cI\u2019m worried about ransomware,\u201d or \u201cI\u2019m worried about someone stealing my crown jewels.\u201d Removing standing permissions makes lateral movement and identity compromise much more difficult and constrains what attackers can do.<\/p>\n<p>David Puner: And will that equally defend against an insider who has legitimate access and is willing to abuse it for money?<\/p>\n<p>David Higgins: Yeah, absolutely. You mentioned earlier the classic disgruntled employee scenario \u2014 someone overlooked for a promotion who logs in and destroys data or shuts systems down. That happens because they have open, uncontrolled access.<\/p>\n<p>What I\u2019m describing is revoking those privileges so users don\u2019t have standing access. When they need access, they have to come to a control point and say, \u201cI legitimately need access right now.\u201d That triggers checks \u2014 authentication, validation, justification \u2014 before access is granted.<\/p>\n<p>Nothing is 100 percent foolproof, but it makes abuse much harder. It introduces friction. It requires multiple processes to be compromised. 
If someone has to request access every time, justify it, and have it approved \u2014 tied to change management or line manager approval \u2014 you can spot-check and validate that access is appropriate.<\/p>\n<p>It\u2019s like giving someone a key that opens every room versus requiring them to request access to each room. I slipped into a physical analogy there, but hopefully it makes sense.<\/p>\n<p>David Puner: No, we loved it. I think it makes a highlight reel for the year.<\/p>\n<p>David Higgins: No worries. Glad it landed.<\/p>\n<p>David Puner: Let\u2019s zoom out to organizational culture. How do you increase vigilance for insider risks without creating a culture of mistrust \u2014 without creating a Big Brother environment?<\/p>\n<p>David Higgins: We\u2019ve talked about zero trust today, and it\u2019s not buzzword bingo. But I\u2019ve heard leaders say they don\u2019t use the term internally because employees feel it means, \u201cYou don\u2019t trust us.\u201d<\/p>\n<p>The reality is, it\u2019s not about trusting or not trusting individuals. When an individual becomes a digital user on a network, that\u2019s what needs to be verified. There has to be some friction in security. There\u2019s no such thing as frictionless security \u2014 that\u2019s just zero control.<\/p>\n<p>What matters is making sure the process isn\u2019t so heavy or cumbersome that people push back. The controls are the same whether you\u2019re defending against insider threats or external compromise. You frame it as protecting identities, not mistrusting employees.<\/p>\n<p>Communication matters, and so does usability. Tie it into the broader threat landscape instead of singling out insiders, and make sure the process doesn\u2019t disrupt day-to-day work.<\/p>\n<p>David Puner: If I\u2019m reading between the lines, these new threats ultimately reinforce the importance of getting the basics right.<\/p>\n<p>David Higgins: Absolutely. 
Sometimes the identity basics don\u2019t make the top agenda because they\u2019re not seen as exciting. But if you rewind to 2023, or even the early 2000s, the fundamentals haven\u2019t changed.<\/p>\n<p>We still see identity compromise through authentication and authorization failures. Whether it\u2019s AI, automation, or humans, the basics matter. Regulations like NIS2 reinforce that \u2014 good cyber hygiene, strong identity foundations, visibility, and accountability.<\/p>\n<p>The message is simple: get the basics right.<\/p>\n<p>David Puner: How do you see the relevance of identity security now compared to three years ago?<\/p>\n<p>David Higgins: Its importance has always been there \u2014 it\u2019s been linear. What\u2019s changed is awareness. Identity has been a consistent target, but people avoided it because it felt complex. Today, technology makes it easier.<\/p>\n<p>With cloud adoption and AI, identity security keeps resurfacing as the constant challenge. It\u2019s now firmly on executive agendas.<\/p>\n<p>David Puner: You\u2019ve written about avoiding the phishing blame game. How can organizations build a culture where employees stay vigilant and feel safe reporting mistakes?<\/p>\n<p>David Higgins: Awareness has increased dramatically. Years ago, cybersecurity wasn\u2019t something people talked about. Now people know they\u2019ll be targeted.<\/p>\n<p>There used to be shame around falling for phishing. But people have bad days, and attacks are getting more sophisticated \u2014 fake videos, audio, and urgent scenarios.<\/p>\n<p>Training matters, but things will still slip through. That\u2019s why awareness must be paired with strong identity hygiene \u2014 zero standing privilege and \u201calways verify\u201d approaches. Attackers evolve constantly, and we have to assume breaches will happen.<\/p>\n<p>David Puner: Looking ahead to 2026 and beyond, what gives you hope?<\/p>\n<p>David Higgins: Awareness is increasing. 
People accept physical security controls without complaint, and cyber is catching up.<\/p>\n<p>Security teams are improving at internal education and collaboration. We can also use technology like AI responsibly to reduce overhead and help defenders.<\/p>\n<p>Identity security is finally being recognized as foundational. To quote my favorite Christmas movie: \u201cWelcome to the party, pal.\u201d And yes, it is a Christmas movie.<\/p>\n<p>David Puner: David Higgins, this has been fantastic. Let\u2019s do it again sooner.<\/p>\n<p>David Higgins: Thank you. Looking forward to it.<\/p>\n<p>David Puner: Thanks for listening to Security Matters. Follow us wherever you get your podcasts, leave a review if you\u2019re inclined, and reach out with questions or episode ideas at securitymatterspodcast@cyberark.com. <\/p>\n<p>We\u2019ll see you next time.<\/p><\/div>\n","protected":false},"featured_media":220620,"template":"","class_list":["post-220619","podcast","type-podcast","status-publish","has-post-thumbnail","hentry"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>EP 21 - When attackers log in: Pausing for perspective in the age of instant answers | CyberArk<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.cyberark.com\/podcasts\/ep-21-when-attackers-log-in-pausing-for-perspective-in-the-age-of-instant-answers\/\" \/>\n<meta property=\"og:locale\" content=\"es_ES\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"EP 21 - When attackers log in: Pausing for perspective in the age of instant answers\" \/>\n<meta property=\"og:description\" content=\"In this episode of Security Matters, host David Puner welcomes back David Higgins, senior director in CyberArk\u2019s Field 
Technology Office, for a timely conversation about the evolving cyber threat landscape. Higgins explains why today\u2019s attackers...\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.cyberark.com\/podcasts\/ep-21-when-attackers-log-in-pausing-for-perspective-in-the-age-of-instant-answers\/\" \/>\n<meta property=\"og:site_name\" content=\"CyberArk\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/CyberArk\/\" \/>\n<meta property=\"article:modified_time\" content=\"2026-04-14T17:10:09+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/12\/YzA2NS5qcGc.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1400\" \/>\n\t<meta property=\"og:image:height\" content=\"1400\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:site\" content=\"@CyberArk\" \/>\n<meta name=\"twitter:label1\" content=\"Tiempo de lectura\" \/>\n\t<meta name=\"twitter:data1\" content=\"26 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-21-when-attackers-log-in-pausing-for-perspective-in-the-age-of-instant-answers\/\",\"url\":\"https:\/\/www.cyberark.com\/podcasts\/ep-21-when-attackers-log-in-pausing-for-perspective-in-the-age-of-instant-answers\/\",\"name\":\"EP 21 - When attackers log in: Pausing for perspective in the age of instant answers | 
CyberArk\",\"isPartOf\":{\"@id\":\"https:\/\/www.cyberark.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-21-when-attackers-log-in-pausing-for-perspective-in-the-age-of-instant-answers\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-21-when-attackers-log-in-pausing-for-perspective-in-the-age-of-instant-answers\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/12\/YzA2NS5qcGc.jpg\",\"datePublished\":\"2025-12-16T15:12:08+00:00\",\"dateModified\":\"2026-04-14T17:10:09+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-21-when-attackers-log-in-pausing-for-perspective-in-the-age-of-instant-answers\/#breadcrumb\"},\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.cyberark.com\/podcasts\/ep-21-when-attackers-log-in-pausing-for-perspective-in-the-age-of-instant-answers\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-21-when-attackers-log-in-pausing-for-perspective-in-the-age-of-instant-answers\/#primaryimage\",\"url\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/12\/YzA2NS5qcGc.jpg\",\"contentUrl\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2025\/12\/YzA2NS5qcGc.jpg\",\"width\":1400,\"height\":1400},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.cyberark.com\/podcasts\/ep-21-when-attackers-log-in-pausing-for-perspective-in-the-age-of-instant-answers\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.cyberark.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"EP 21 &#8211; When attackers log in: Pausing for perspective in the age of instant 
answers\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.cyberark.com\/#website\",\"url\":\"https:\/\/www.cyberark.com\/\",\"name\":\"CyberArk\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/www.cyberark.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.cyberark.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"es\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.cyberark.com\/#organization\",\"name\":\"CyberArk Software\",\"url\":\"https:\/\/www.cyberark.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/www.cyberark.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2021\/02\/cyberark-logo-dark.svg\",\"contentUrl\":\"https:\/\/www.cyberark.com\/wp-content\/uploads\/2021\/02\/cyberark-logo-dark.svg\",\"width\":\"1024\",\"height\":\"1024\",\"caption\":\"CyberArk Software\"},\"image\":{\"@id\":\"https:\/\/www.cyberark.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/CyberArk\/\",\"https:\/\/x.com\/CyberArk\",\"https:\/\/www.linkedin.com\/company\/cyber-ark-software\/\"]}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->"}