Protecting a large enterprise is like playing goalkeeper in a soccer match. A CISO’s job is to keep the net clean while multiple attackers close in from various angles, aiming to score. No matter how many shots the goalie blocks, a single goal can win the game for the opposition.
Now, imagine every player on the field is wearing the same uniform and constantly changing sides and positions. Can you even tell who’s on your team? It’s been a long time since security teams could easily distinguish the good guys from the bad simply by looking at the firewall. Today, users’ digital journeys crisscross from endpoints to the network to the cloud, retracing the security perimeter around identities and creating an unbalanced tension between attacker and defender.
The Machine Identity Explosion
In 2024, more than half of all initial access incidents began with an identity-related attack vector. This year, we can expect that percentage to climb as AI bot and agent usage rises and the total number of identities explodes. To put this in perspective, machine identities will soon outnumber human identities at least 100 to one, and attackers will use the odds to their advantage. They will leverage the privileges and access that those AI agents have to data stores, IT platforms, marketing hubs and more.
We’re already seeing this unfold as iGEN architectures gain popularity. These environments deploy multiple AI-based services working autonomously across various phases, such as analyzing stats, creating reports and sharing results. But in these scenarios, sensitive information—like credentials—could be exposed across different user sessions with the model. Worse, AI services’ credentials, lacking multi-factor authentication (MFA), could be compromised, granting attackers access to critical systems.
Another growing concern is that malicious actors might weaponize AI agents to compromise other agents, creating a new type of stealthy insider threat. Already, there have been documented cases of AI systems attempting to bypass controls and persist on servers. Increasingly, attackers could look to hijack existing agents within organizations, using them to carry out tasks on their behalf. We got a glimpse of this recently when a critical flaw was discovered in the widely used Model Context Protocol (MCP) that allows communication between AI agents.
Across enterprises, new AI ecosystems are forming at an unprecedented rate. This machine identity “explosion” dramatically elevates cybersecurity risk. Every identity must be governed, and achieving this at scale presents one of the greatest security challenges of our time. This is where policy enforcement, automation and AI play such critical roles for defenders.
The Weaponization of AI
Fifteen years ago, when I worked in the cyberwarfare trenches, I never imagined phishing attacks would still be so prevalent in 2025. Yet here we are—phishing remains one of the most dominant attack vectors, amplified by AI.
Of course, phishing campaigns are evolving—not always in the way they’re delivered (email and messaging apps continue to be key channels), but when they happen in the digital journey. Attackers realize that when they steal a credential and use it for authentication, they’ll have to bypass MFA too. Instead of jumping through these multiple hoops, they’ve shifted to post-authentication attacks that target cookies created after the user authenticates into a system.
Consider this example attack scenario: A threat actor runs a command-and-control server waiting for incoming cookies. The unsuspecting victim—who is already connected to their company’s SSO—now connects to Salesforce to perform a task. This is an important moment because the attacker is looking for cookies that are created after the user authenticates in. The victim then receives the phishing email, opens it, clicks on a malicious link and—thanks to a vulnerable configuration in the application—the cookies travel right to the attacker. The victim’s browser directs the user to a legitimate page, and nothing seems out of the ordinary. But behind the scenes, the attacker now has access to the application and can essentially “steal” the session. With cookies in hand, the attacker can go even further by changing the MFA device linked to the compromised account, changing the password, and boom—completely taking over the account.
AI is elevating these tactics to a new level. Voice phishing (vishing) and deepfake videos now allow attackers to convincingly clone voices and features, making even security-conscious organizations susceptible to manipulation.
AI Systems Under Attack
AI systems and large language models (LLMs) face their own threats. Techniques like prompt injection and data poisoning have attackers probing for weaknesses as aggressively as they once targeted network vulnerabilities. It’s the initial stages of cloud adoption all over again, but with significantly higher stakes.
A particularly concerning trend is jailbreaking—where attackers use deceptive inputs to trick AI systems into doing or sharing things they shouldn’t. The latest research by CyberArk Labs shows how attackers could work to systematically bypass LLM security filters and their alignment, and even automate jailbreaks across multiple LLM models. Threat research like this is vital for improving the safety and reliability of AI systems.
Keeping an Eye on the Ball
Enterprise security is a moving target. With AI reshaping both defensive and offensive landscapes, defenders find themselves under immense pressure. However, by keeping a laser focus on securing each identity with appropriate privilege controls and prioritizing prevention, security leaders can stay resilient—and ready for whatever comes next.
Omer Grossman is the global chief information officer at CyberArk. You can check out more content from Omer on CyberArk’s Security Matters | CIO Connections page.