
Gone are the days when attackers had to break down doors. Now, they just log in with what look like legitimate credentials. This shift in tactics has been underway for a while, but the rapid adoption of artificial intelligence is adding a new layer of complexity. AI is a powerful tool, but our growing reliance on it comes with a catch: it’s eroding our critical thinking skills. We’re starting to place too much trust in the results AI provides, and that’s creating significant security risks.
As we integrate AI into our daily workflows, from drafting emails to analyzing data, we’re also inadvertently training ourselves to accept its output as fact. But are we really pausing to double-check what AI tells us—or are we letting convenience override caution?
The problem is, AI doesn’t actually understand what we’re telling it or what it’s processing. It’s a remarkably sophisticated pattern-matching engine, but it lacks true comprehension. This blind trust can lead to serious consequences, especially when attackers begin to exploit our reliance on these still-maturing technologies.

How AI is changing social engineering
Social engineering isn’t what it used to be. AI has taken it from a scattergun approach to a sniper’s precision. Previously, attackers relied on generic phishing emails and broad tactics, hoping someone would take the bait. Now, AI allows them to craft highly personalized and convincing attacks. They can scrape social media and other public data to create messages that seem plausible and are precisely tailored to a specific individual.
Imagine a phishing email that references a project you just discussed in a meeting, uses the exact tone of your CEO, and includes specific details about your role. That’s the level of customization AI makes possible, and it makes it incredibly difficult for even savvy employees to spot a fake.
The hidden risks of shadow AI
Many organizations are building internal AI tools using their own private datasets, a smart move that lets them leverage AI’s capabilities within a more controlled environment. However, a familiar problem is re-emerging in a new form: shadow AI. Let’s not kid ourselves, shadow AI is just shadow IT in a new outfit.
Just as shadow IT saw employees using unapproved software and devices, we’re now seeing the use of unsanctioned AI tools. Employees might use public AI models to analyze sensitive company data, unknowingly exposing it to potential breaches. The risks echo those of shadow IT, but with AI, organizations face a triple threat: data leakage, sophisticated social engineering, and insider risk. The speed and scale of potential data exfiltration are also much greater.
To counter this, organizations need a clear AI code of practice. The goal is to use AI responsibly, not to ban it, with clear guidelines covering which tools are approved, what data can be used, and what level of human oversight is required. That oversight is paramount: we can’t afford to let AI operate without a person in the loop to validate its outputs and decisions.
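To make that concrete, here is a minimal sketch in Python of what such a policy check might look like in practice. The tool names, data classifications, and review rules are hypothetical assumptions for illustration, not a prescribed implementation.

```python
# Hypothetical AI usage policy: approved tools, the most sensitive data class each
# may handle, and whether a human must review the output before acting on it.
AI_POLICY = {
    "internal-copilot": {"max_data_class": "confidential", "human_review": True},
    "public-chatbot":   {"max_data_class": "public",       "human_review": True},
}

# Data sensitivity levels, ordered from least to most sensitive (illustrative).
DATA_CLASSES = ["public", "internal", "confidential", "restricted"]


def check_ai_use(tool: str, data_class: str) -> tuple[bool, str]:
    """Return (allowed, reason) for sending data of a given classification to a tool."""
    policy = AI_POLICY.get(tool)
    if policy is None:
        return False, f"'{tool}' is not an approved AI tool"
    if DATA_CLASSES.index(data_class) > DATA_CLASSES.index(policy["max_data_class"]):
        return False, f"'{data_class}' data may not be sent to '{tool}'"
    note = "human review required" if policy["human_review"] else "no review required"
    return True, note


print(check_ai_use("public-chatbot", "confidential"))    # blocked: data too sensitive
print(check_ai_use("internal-copilot", "confidential"))  # allowed, with human review
```

The point of a check like this is not the code itself but the posture it encodes: unapproved tools are denied by default, and a human stays in the loop for anything sensitive.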
The growing importance of identity security in an AI-powered world
In this new landscape, identity has become the central pillar of security. The old model of a strong perimeter is no longer sufficient when the threat is already inside, using what appear to be legitimate credentials. The principle of zero trust—never trust, always verify—is more critical than ever.
This means we must rigorously verify every identity, whether human or machine, before granting access to anything. It’s about ensuring that the person or system logging in is who they say they are. But it also goes a step further. We need to remove standing permissions and default privileges, especially in critical environments. Users should only have access to what they need, for as long as they need it. This principle of least privilege is a fundamental defense against both external attackers and evolving insider threats.
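As a rough illustration of deny-by-default access, the sketch below allows a request only when there is an explicit, unexpired grant scoped to that exact resource and action. The grant structure and field names are assumptions made for the example, not any specific product’s model.

```python
from datetime import datetime, timezone

# Hypothetical, explicitly scoped grants; anything not listed here is denied.
grants = [
    {"identity": "alice", "resource": "prod-db", "action": "read",
     "expires": datetime(2025, 6, 1, 17, 0, tzinfo=timezone.utc)},
]


def is_allowed(identity: str, resource: str, action: str, now: datetime) -> bool:
    """Deny by default: allow only an exact, unexpired match on identity, resource, and action."""
    return any(
        g["identity"] == identity
        and g["resource"] == resource
        and g["action"] == action
        and now < g["expires"]
        for g in grants
    )


now = datetime(2025, 6, 1, 12, 0, tzinfo=timezone.utc)
print(is_allowed("alice", "prod-db", "read", now))   # True while the grant is live
print(is_allowed("alice", "prod-db", "write", now))  # False: action not granted
print(is_allowed("bob", "prod-db", "read", now))     # False: no standing access
```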
Why AI matters in the fight against modern insider risks
The nature of insider threats is also changing. It’s no longer just about a disgruntled employee looking for revenge. Cybercriminal groups are now actively recruiting employees, offering substantial bounties for access to corporate networks. Tempted by a big payday, an employee might use their legitimate credentials to provide an entry point for an attack.
This is where a strong identity security posture becomes a crucial line of defense. By enforcing just-in-time access and removing standing privileges, you significantly reduce the window of opportunity for a malicious insider to act. If an employee has to request access for a specific task, and that access is granted for a limited time, it becomes much harder to abuse their position without raising red flags.
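A minimal sketch of that just-in-time flow might look like the following, assuming a simple in-memory grant store and a default one-hour time-to-live; every grant is logged so unusual requests stand out in the audit trail.

```python
import logging
from datetime import datetime, timedelta, timezone

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# Hypothetical in-memory store of active grants, keyed by (identity, resource).
active_grants: dict[tuple[str, str], datetime] = {}


def request_access(identity: str, resource: str, reason: str,
                   ttl: timedelta = timedelta(hours=1)) -> datetime:
    """Grant time-boxed access for a specific task and leave an audit trail."""
    expires = datetime.now(timezone.utc) + ttl
    active_grants[(identity, resource)] = expires
    logging.info("GRANT %s -> %s until %s (reason: %s)", identity, resource, expires, reason)
    return expires


def has_access(identity: str, resource: str) -> bool:
    """Access exists only while an unexpired just-in-time grant is present."""
    expires = active_grants.get((identity, resource))
    return expires is not None and datetime.now(timezone.utc) < expires


request_access("alice", "billing-api", reason="month-end reconciliation")
print(has_access("alice", "billing-api"))  # True for the next hour, then it lapses
```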
And as insider recruitment becomes more targeted and sophisticated, organizations will increasingly need to use AI‑driven analytics to detect behavioral anomalies and identify malicious insiders before damage occurs.
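Such analytics can start as simply as fitting an unsupervised model to routine behavioral features. The sketch below uses scikit-learn’s IsolationForest on made-up per-session features (login hour, data downloaded, distinct systems touched); the features and data are illustrative assumptions, not a production detector.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic per-session features: [login hour, MB downloaded, distinct systems accessed].
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around mid-morning
    rng.normal(50, 15, 500),  # modest download volumes
    rng.normal(4, 1, 500),    # a handful of systems per session
])
suspicious = np.array([[3, 900, 25]])  # 3 a.m. login, bulk download, unusually broad access

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)
print(model.predict(suspicious))  # [-1] flags the session as anomalous
```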
Strengthening cybersecurity fundamentals in an evolving threat landscape
While the threats are evolving, the solutions often come back to mastering the fundamentals. New attack vectors reinforce the importance of getting the basics right. Identity security should be a top priority for every organization, supported by better awareness and more collaborative security teams.
Encouraging signs are emerging across the industry. Security teams are getting better at internal marketing, communicating the “why” behind security policies and working with other departments to build a stronger security culture. As Bruce Willis and I like to say, “Welcome to the party, pal.” In identity security, the party never really ends. And this collaborative approach is essential for navigating the challenges posed by AI and other emerging technologies.
AI is here to stay, and its impact on cybersecurity will only grow. By focusing on strong identity security principles, promoting critical thinking, and establishing clear governance, we can harness the power of AI without falling victim to its risks. We can’t afford to be complacent. We must approach this new era with a healthy dose of skepticism and a renewed commitment to the core principles of zero trust.
David Higgins is a senior director in CyberArk’s Field Technology Office.
🎧 Listen Now: Dive deeper into the evolving risks and realities of AI-powered social engineering with David Higgins, the author of this blog, in the Security Matters podcast episode, “When attackers log in: Pausing for perspective in the age of instant answers.” Hear firsthand how attackers are exploiting AI to outsmart defenses, why identity and zero trust are more critical than ever, and what organizations must do to stay resilient. From shadow AI to insider threats, get practical insights and real-world stories. Available below and on most podcast platforms.





















