Securing AI agents: privileged machine identities at unprecedented scale

October 8, 2025 Venu Shastri

Earlier in 2025, an AI agent named Claudius made headlines when it insisted it was human, promising to deliver products in “a blue blazer and red tie.” Quirky? Sure. But beneath the strange claim sat a more important truth: today’s AI agents aren’t just chatbots with Pinocchio-like ambitions of becoming real, their untruths betrayed by a growing nose.

They’ve evolved into actors with real credentials, access, and autonomy.

For enterprises, autonomous AI agents can deliver undeniable value: updating customer records, generating reports, moving files between systems, and even provisioning cloud resources, all at machine speed. And companies are adopting them just as rapidly.

According to PwC’s 2025 “AI Agent Survey,” 79% of companies already deploy agentic AI, with two-thirds reporting measurable productivity gains. Three-quarters of executives even said AI agents will reshape the workplace more than the internet.

However, the very features that make AI agents so powerful—speed, autonomy, and deep integration into enterprise systems—also amplify their risks. The 2025 CyberArk Identity Security Landscape found that machine identities outnumber humans by more than 80 to 1, yet 68% of organizations lack identity security controls for their AI systems.

When you add AI agents into the picture, thousands of new digital workers operate with elevated access and act without human oversight. The risks are significant, but you can manage them if you view AI agents for what they really are: privileged machine identities.

The overlooked risks of privileged machine identities

AI agents are commonly given elevated standing access and are specifically designed to act independently. But the gap between adoption and security is already stark:

  • 88% of organizations define “privileged” identities as human only.
  • 42% of machine identities already touch sensitive data.

These statistics paint a troubling picture of overlooked privilege risk in machine identity security. They show us why we must recognize AI agents as a “new” form of privileged identity—human-like in capability but operating at the unprecedented volume, variety, and velocity of machines.

Why AI agents are privileged machine identities

AI agents inherit many of the same risks as other machine identities: excessive permissions, stolen credentials, and secrets leakage—all challenges for security teams.

What’s different is the non-deterministic, human-like behavior of agentic AI. Authentication via OAuth tokens and consent flows works when an AI agent performs a specific task on behalf of a person, but that covers only a sliver of use cases. As autonomous AI agents and multi-agent systems become more prevalent, these more complex deployments will function much like cloud-native workloads, relying on machine identities such as secrets, API keys, and certificates. And to deliver their true value, AI agents require elevated access across SaaS platforms, databases, and cloud environments.
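To make that machine-identity pattern concrete, here’s a minimal sketch of an agent obtaining a short-lived credential via the OAuth 2.0 client credentials grant, the grant typically used when no human is present to consent. The token endpoint, client ID, and scope below are hypothetical placeholders, not any specific provider’s API.

```python
import requests

# Hypothetical identity provider endpoint; substitute your IdP's real
# token URL and a secrets-manager lookup for the client credentials.
TOKEN_URL = "https://idp.example.com/oauth2/token"

def fetch_agent_token(client_id: str, client_secret: str, scope: str) -> str:
    """Obtain a short-lived access token for an autonomous agent via the
    OAuth 2.0 client credentials grant (no human consent flow involved)."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": scope,  # request only the narrowest scope the task needs
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```

A short-lived token like this is preferable to a long-lived static API key: if it leaks, it expires on its own rather than granting standing access.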

And AI agents don’t just read data: they behave dynamically, making decisions in real time to find the best way to execute on assigned goals.

While these levels of autonomy allow AI agents to reason and operate with minimal human oversight, they can also enable cascading failures with relative ease.

Validating AI agent risks with OWASP’s Top 10 for LLMs

The security community is only beginning to codify frameworks for agentic AI security, and the specifics will continue to evolve alongside the technology itself. OWASP’s Top 10 for Large Language Models (LLMs) is one of the first attempts to systematize AI risk models, and we can apply some of them directly to AI agents.

For example:

  • Privilege abuse: An AI agent with excessive permissions approves a financial transfer or exposes sensitive records.
  • Tool misuse: Attackers manipulate agents into misusing legitimate integrations, turning business functions like CRM access or cloud storage into attack vectors.
  • Memory poisoning: Threat actors feed malicious inputs into an agent’s context so that its decisions become skewed over time, leading it to produce flawed outputs or act in ways that undermine security.

These are familiar identity security challenges magnified by AI agent capabilities.
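As a simple illustration of how an identity guardrail blunts the first two risks, here’s a hedged sketch of a deny-by-default tool entitlement check. The agent IDs, tool names, and in-code policy table are hypothetical stand-ins for a real policy store.

```python
import logging

logging.basicConfig(level=logging.INFO)

# Hypothetical per-agent entitlements; in practice this table would live
# in an identity security platform's policy store, not in application code.
ENTITLEMENTS = {
    "report-agent": {"crm.read", "reports.generate"},
    "provisioning-agent": {"cloud.read", "cloud.provision"},
}

class ToolDeniedError(Exception):
    """Raised when an agent requests a tool outside its granted scope."""

def invoke_tool(agent_id: str, tool: str, handler, payload: dict):
    """Deny-by-default gate in front of every tool call an agent makes."""
    if tool not in ENTITLEMENTS.get(agent_id, set()):
        # Unlisted agents and tools get nothing, limiting the blast radius
        # of privilege abuse and tool misuse.
        raise ToolDeniedError(f"{agent_id} is not entitled to {tool}")
    logging.info("agent=%s tool=%s", agent_id, tool)  # audit every call
    return handler(payload)

# The report agent may read the CRM, but a poisoned prompt asking it to
# provision cloud resources would raise ToolDeniedError instead.
invoke_tool("report-agent", "crm.read", lambda p: {"rows": []}, {"q": "all"})
```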

While agentic AI itself is a nascent space, and standardized frameworks will continue to develop, one principle is already clear: identity must sit at the foundation of agentic AI security.

Identity security guardrails for the autonomous AI workforce

How should organizations secure these new workforce entities? The answer is to extend identity-first security practices to AI agents, treating them the same way you would other privileged machine identities, grounded in Zero Trust.

That means:

  • Know your agents: As with human employees, agents must be rigorously discovered, onboarded, and decommissioned. Taking these steps helps prevent unmanaged or orphaned AI agents from operating in the shadows.
  • Monitor behavior dynamically: Real-time policy enforcement, session monitoring, and isolation can help detect anomalies and rogue behaviors as they happen, keeping pace with the speed of agentic AI activity.
  • Control access with precision: Just-in-time (JIT) access, zero standing privileges (ZSP), and scoped entitlements can reduce over-permissioning and limit the potential blast radius (see the sketch after this list).
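As a rough illustration of that last point, the sketch below models JIT access with zero standing privileges: the agent holds no permanent entitlements, and every grant is scoped to one action and expires on its own. The broker, scope names, and TTLs are illustrative assumptions, not a particular product’s API.

```python
import time
from dataclasses import dataclass

@dataclass
class Grant:
    agent_id: str
    scope: str
    expires_at: float

class JITBroker:
    """Toy broker: every entitlement is requested just in time and
    expires automatically, so no agent retains standing privileges."""

    def __init__(self):
        self._grants: list[Grant] = []

    def request(self, agent_id: str, scope: str, ttl_seconds: int = 300) -> Grant:
        grant = Grant(agent_id, scope, time.time() + ttl_seconds)
        self._grants.append(grant)
        return grant

    def is_allowed(self, agent_id: str, scope: str) -> bool:
        now = time.time()
        return any(
            g.agent_id == agent_id and g.scope == scope and g.expires_at > now
            for g in self._grants
        )

broker = JITBroker()
broker.request("report-agent", "db.read", ttl_seconds=60)
assert broker.is_allowed("report-agent", "db.read")       # inside the window
assert not broker.is_allowed("report-agent", "db.write")  # never granted
```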

The goal isn’t to slow AI agents down with cumbersome oversight but to confidently enable autonomy by deploying effective guardrails.

Scaling identity-first security for AI agents

Identity-first guardrails only work if they scale alongside AI agents themselves. That means moving beyond static rules to dynamic, context-aware controls: privileges that flex in real time based on roles and intent.
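One way to picture the difference: a static rule maps a role to a fixed set of actions, while a context-aware control also consults live signals before allowing each call. In the hypothetical sketch below, the signal names and anomaly threshold are assumptions for illustration, not a specific policy model.

```python
# Hypothetical context-aware authorization: the static role mapping is the
# baseline, and live signals can revoke what the role would otherwise allow.
ROLE_ACTIONS = {"finance-agent": {"ledger.read", "reports.generate"}}

def authorize(agent_role: str, action: str, context: dict) -> bool:
    if action not in ROLE_ACTIONS.get(agent_role, set()):
        return False                       # static baseline: role must match
    if context.get("anomaly_score", 0.0) > 0.8:
        return False                       # behavior looks rogue right now
    if context.get("off_hours") and action != "ledger.read":
        return False                       # privileges flex with the context
    return True

print(authorize("finance-agent", "reports.generate",
                {"anomaly_score": 0.2, "off_hours": False}))   # True
print(authorize("finance-agent", "reports.generate",
                {"anomaly_score": 0.95, "off_hours": False}))  # False
```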

For today’s enterprise, it also means securing human, machine, and AI agent identities with similar levels of rigor.

How to safely adopt agentic AI without slowing innovation

AI agents are already on the job for many organizations, acting like human employees and reshaping how enterprises operate. They’re new workforce entities that act as privileged machine identities, operating at enterprise scale and machine speed. They amplify risks that security leaders already understand, while introducing new twists like hallucinations and rogue behavior.

The security frameworks for this space may still be developing, and the details will continue to evolve. But identity security is the foundation that organizations must build on now. Those who extend proven controls to AI agents can better position themselves to embrace newfound levels of autonomy—and productivity—with confidence.

Venu Shastri is the senior director of product marketing for AI solutions and platforms at CyberArk.

Glimpse the future: On Nov. 4, join us for “Securing the Next Frontier of Agentic AI,” where CyberArk will unveil a new approach to safeguarding AI agents without slowing innovation.
