
When ransomware drove record losses, insurers began scrutinizing basic controls like multi-factor authentication (MFA), backups, and endpoint detection. Now, AI-driven automation is introducing a new category of risk—AI agents—and insurers are responding with heightened attention to privilege management.
AI agents are non-human identities that can approve payments, access sensitive data, and execute commands using powerful API keys. Yet 88% of organizations still define “privileged user” as human-only, leaving these high-impact identities significantly undermanaged.
While formal coverage for agentic AI is still in its infancy, insurers and regulators are already questioning how organizations will govern these entities.
Moving beyond ransomware response to operational resilience for AI agents
When ransomware made cyber insurers rethink how they underwrite risk, policies became contingent on proof—MFA, reliable backups, and endpoint detection were must-haves. Now, the conversation is expanding beyond ransomware response to the integrity of everyday operations.
As automation becomes more pervasive, underwriters don’t just want to know whether you can recover from an incident; they want to know how well you can prevent one. AI agents have become a proving ground for that shift. Each agent carries credentials, tokens, or API keys that can open critical systems, often with minimal oversight. If an agent is compromised, those credentials keep working: it can approve payments, alter data, or leak intellectual property, and regulators are already warning that accountability for these actions will rest with the organizations that deploy them.
The lesson here is that insurers are shifting their lens from incident recovery to proactive governance. Privilege management for AI agents is becoming the new test of operational resilience.

Why AI agents and machine identities are driving premium pressure
The rapid proliferation of machine identities is fundamentally changing the risk landscape for cyber insurers. AI is now the fastest creator of new privileged identities, with machine identities outnumbering humans by a ratio of 82:1, and AI agents will push that ratio even higher. Every AI agent represents a bundle of credentials, permissions, and decision logic that can expand or collapse risk depending on how it’s governed. And unlike traditional static applications, AI agents make independent decisions, which makes them both more sensitive and more privileged by design.
Yet, most organizations still treat these entities as tools rather than identities, leaving them outside the standard controls used for people. It only takes one overlooked credential to show how privilege, not intent, defines risk.
Picture this: an AI agent, authorized to process vendor payments, is hijacked through a stolen access token. This scenario isn’t ransomware or a disgruntled insider—just an autonomous process executing exactly what its credentials allow. Attackers can exploit its identity to execute a fraudulent transfer, a data leak, or an unauthorized system change that looks legitimate on every audit trail.
Situations like this underscore why insurers, regulators, and boards are paying closer attention to non-human privilege. The good news is that there is a playbook: privileged access management (PAM), secrets vaulting, and session monitoring can treat AI agents as privileged users, assigning them traceable accounts, enforcing least privilege, and rotating their tokens as frequently as human passwords.
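To ground that playbook, here’s a minimal Python sketch of the vault-and-rotate pattern: the agent never holds a standing key; it checks out a short-lived, scoped token, and rotation revokes the old one. The SecretsVault class and its methods are illustrative stand-ins for whatever PAM or vaulting product is in place, not a real API.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class LeasedCredential:
    agent_id: str
    scope: str          # least-privilege scope, e.g. "payments:execute"
    token: str
    expires_at: float

class SecretsVault:
    """Illustrative stand-in for a PAM or secrets-vaulting service."""

    def __init__(self, ttl_seconds: int = 900):
        self.ttl = ttl_seconds
        self._active: dict[str, LeasedCredential] = {}

    def checkout(self, agent_id: str, scope: str) -> LeasedCredential:
        """Issue a short-lived, scoped token instead of a standing key."""
        cred = LeasedCredential(
            agent_id=agent_id,
            scope=scope,
            token=secrets.token_urlsafe(32),
            expires_at=time.time() + self.ttl,
        )
        self._active[cred.token] = cred
        return cred

    def validate(self, token: str, required_scope: str) -> bool:
        """Reject expired tokens and anything outside the granted scope."""
        cred = self._active.get(token)
        if cred is None or time.time() > cred.expires_at:
            return False
        return cred.scope == required_scope

    def rotate(self, token: str) -> LeasedCredential | None:
        """Revoke a token and issue a fresh one -- the machine analogue
        of a scheduled human password change."""
        cred = self._active.pop(token, None)
        if cred is None:
            return None
        return self.checkout(cred.agent_id, cred.scope)

# A payments agent checks out exactly the scope it needs, nothing more.
vault = SecretsVault(ttl_seconds=900)
lease = vault.checkout("vendor-payments-agent", "payments:execute")
assert vault.validate(lease.token, "payments:execute")
assert not vault.validate(lease.token, "admin:write")  # least privilege holds
lease = vault.rotate(lease.token)  # the old token is now dead
```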
How privilege controls address the AI governance gap
AI-specific insurance products have begun to emerge, such as Lloyd’s offering for chatbot errors and Armilla’s coverage for model-reliability failures, but dedicated frameworks for agentic AI privileges are still scarce. Organizations cannot afford to wait for insurance policy language to catch up with the pace of AI adoption.
Instead, forward-thinking CISOs are focusing on control maturity: classifying every AI and service identity, eliminating standing access, and building continuous verification into automation pipelines. This approach means fewer surprises when insurers and regulators ask how you’re governing these privileges.
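As one illustration of what control maturity can look like in code, the sketch below models a queryable identity inventory: every AI or service identity has an accountable owner and a sensitivity tier, and a simple check flags standing (non-expiring) access. The record fields and tiers here are hypothetical; the point is that the inventory is something you can query, not a spreadsheet.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical inventory record for an AI or service identity.
@dataclass
class MachineIdentity:
    name: str
    owner: str                       # accountable human or team
    sensitivity: str                 # e.g. "low" | "medium" | "high"
    access_expires: datetime | None  # None means standing access

def standing_access_findings(inventory: list[MachineIdentity]) -> list[str]:
    """Flag identities that violate a no-standing-access policy.

    Anything with access_expires=None, or an expired grant that was
    never revoked, becomes a finding for the next review or renewal.
    """
    findings = []
    now = datetime.now(timezone.utc)
    for ident in inventory:
        if ident.access_expires is None:
            findings.append(f"{ident.name}: standing access (owner: {ident.owner})")
        elif ident.access_expires < now:
            findings.append(f"{ident.name}: expired grant never revoked")
    return findings

inventory = [
    MachineIdentity("invoice-summarizer", "ap-team", "medium",
                    datetime.now(timezone.utc) + timedelta(hours=1)),
    MachineIdentity("vendor-payments-agent", "treasury", "high", None),
]
for finding in standing_access_findings(inventory):
    print(finding)  # -> vendor-payments-agent: standing access (owner: treasury)
```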
Key questions insurers are asking about AI agent privileges
As insurers encounter new forms of AI-driven exposure, traditional coverage definitions start to blur. Unauthorized access may still fall under standard cyber policies, but agentic AI missteps, such as hallucinated actions, unintended code execution, or autonomous privilege escalation, now sit in gray areas. These edge cases could fall under cyber, crime, or errors and omissions (E&O) policies, or be excluded entirely by emerging “absolute AI” clauses.
Already, we’re seeing increased scrutiny on machine identities as a whole. AI agents will only accelerate this trend, introducing a new wave of autonomous, privilege-bearing identities that demand tighter oversight and more precise policy language. As AI expands from static models to self-directed systems, the boundary between user behavior and system behavior collapses. This blurring of boundaries forces insurers to reexamine how risk is attributed, modeled, and underwritten.
To manage that ambiguity, underwriters are modernizing their questionnaires to expand beyond surface-level controls toward deeper questions about governance, operations, and monitoring. They now want to know:
- Do you have a current inventory of all AI agents, including their goals, functions, and ownership?
- Do AI agents and other non-human identities have unique identities with least-privilege access enforced?
- How are authentication methods (such as API keys, certificates, tokens, and secrets) vaulted, rotated, and revoked?
- Do you log and monitor all agent actions in real time? Can you trace sessions back to accountable owners?
- What happens if an agent behaves unexpectedly or outside its intended scope—do you have a rapid shutdown or “kill switch” process? (A minimal sketch of such a guardrail follows this list.)
- Do you restrict which sensitive data agents can interact with and monitor for misuse?
- Do you have an incident response plan for AI agent compromise or malfunction, including alerting, containment, and recovery steps?
- What’s your policy for governing shadow AI, the tools your workforce may spin up outside security’s view?
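The kill-switch question in particular maps to a concrete pattern: route every agent action through a revocable gate that enforces a scope allowlist and can halt the agent instantly. The sketch below is a hypothetical illustration of that guardrail, not a prescribed implementation; the action names are made up for the example.

```python
import threading

class AgentKillSwitch:
    """Revocable gate that every agent action must pass through."""

    def __init__(self, allowed_actions: set[str]):
        self.allowed_actions = allowed_actions
        self._disabled = threading.Event()  # set -> agent is halted

    def trip(self, reason: str) -> None:
        """Halt the agent immediately, e.g. from a monitoring alert."""
        print(f"kill switch tripped: {reason}")
        self._disabled.set()

    def guard(self, action: str) -> None:
        """Raise before any action if the agent is halted or out of scope."""
        if self._disabled.is_set():
            raise PermissionError("agent halted by kill switch")
        if action not in self.allowed_actions:
            self.trip(f"out-of-scope action attempted: {action}")
            raise PermissionError(f"action not in scope: {action}")

switch = AgentKillSwitch(allowed_actions={"pay_vendor", "read_invoice"})
switch.guard("read_invoice")            # allowed
try:
    switch.guard("delete_audit_log")    # out of scope: trips the switch
except PermissionError as err:
    print(err)
```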
In short, the conversation with insurers is shifting from “Do you have controls?” to “Can you prove they work?” Organizations that can answer these questions with evidence—session logs, rotation schedules, audit trails—signal that privilege risk is being actively managed, not merely acknowledged.
Demonstrating identity security control maturity
Privilege discipline isn’t a new idea, but AI agents make it urgent. Each automated process carries its own credentials and entitlements, and how they’re governed determines the risk they pose. For CISOs, the goal is to transform static controls into living evidence of risk reduction: controls that can be monitored, rotated, and reported on in real time.
Start with identity parity: extend strong authentication, PAM, and secrets vaulting to every machine identity, not just people. Classify AI agents by sensitivity and remove standing access wherever possible. Continuous auditing—through logs, behavioral analytics, and automated session recording—will turn those controls into data that insurers can trust.
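Continuous auditing is easiest to picture as structured, append-only events: each agent action is recorded with its session, scope, and outcome, and each session traces back to an accountable owner. The event fields below are hypothetical, and a real deployment would stream these to a SIEM or log pipeline rather than hold them in memory.

```python
import json
import time
import uuid

class AgentAuditLog:
    """Hypothetical append-only audit trail for agent sessions."""

    def __init__(self):
        self.events: list[dict] = []

    def start_session(self, agent_id: str, owner: str) -> str:
        """Open a session traceable to an accountable human owner."""
        session_id = str(uuid.uuid4())
        self._append({"type": "session_start", "session": session_id,
                      "agent": agent_id, "owner": owner})
        return session_id

    def record_action(self, session_id: str, action: str,
                      scope: str, allowed: bool) -> None:
        """Record every action, permitted or denied, with its scope."""
        self._append({"type": "action", "session": session_id,
                      "action": action, "scope": scope, "allowed": allowed})

    def _append(self, event: dict) -> None:
        event["ts"] = time.time()
        self.events.append(event)

    def export(self) -> str:
        """Emit JSON lines -- the kind of evidence an underwriter can review."""
        return "\n".join(json.dumps(e) for e in self.events)

log = AgentAuditLog()
sid = log.start_session("vendor-payments-agent", owner="treasury@example.com")
log.record_action(sid, "pay_vendor", scope="payments:execute", allowed=True)
log.record_action(sid, "export_customer_db", scope="payments:execute", allowed=False)
print(log.export())
```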

Equally important is the narrative you bring to renewal discussions.
Underwriters are moving from checklists to telemetry; they want proof that identity risks are visible, measurable, and improving. Building a concise, “insurer-ready” story—complete with metrics, roadmaps, and evidence of tabletop testing—helps translate security investment into tangible risk reduction.
The ability to produce clear, auditable evidence of control maturity is what will ultimately separate organizations that merely claim readiness from those that can prove it.
Yuval Moss, CyberArk’s vice president of solutions for Global Strategic Partners, joined the company at its founding.
🎧 Listen Now: Explore the risks and realities of agentic AI with Yuval Moss, the author of this blog, in the Security Matters podcast episode, “The humanity of AI agents: Managing trust in the age of agentic AI.” Discover how AI agents are reshaping enterprise security, why identity and Zero Trust are critical, and what leaders need to know about autonomous systems. From rogue agents to privilege escalation, gain practical steps to manage AI agent identities and prepare for the next wave of innovation. Available below and on most podcast platforms.