For every enterprise CISO in the world right now, the burning question isn’t about cloud, TPRM, or internal threats. It’s about how to securely and responsibly adopt AI—specifically, agentic AI, the buzziest of today’s AI buzzwords.
There’s no shortage of stats on skyrocketing adoption trends. Consider EY’s recent Technology Pulse Poll, which found that half of tech leaders have at least begun deploying agentic AI within their organizations. Forty-three percent say more than half of their AI budget is allocated for agentic AI. A separate KPMG study found that 68% of business leaders plan to invest between $50 million and $250 million in generative AI (GenAI) technologies in 2025, up from 45% in 2024.
EY also found that more than two-thirds (69%) of leaders say their organization is adopting agentic AI to stay competitive, with 59% citing helping customers and 59% citing internal strategy. And about 50% said most of their companies’ internal AI operations will be fully autonomous within the next two years. The market for agentic AI is already worth $5 billion and is projected to reach $50 billion by 2030, indicating a continued shift toward agentic systems.
What are companies (actually) doing with their AI agents?
Fifty-eight percent of tech leaders say their organizations are ahead of the competition on AI adoption, according to the same EY poll. Yet publicly available use cases and clear evidence of progress are scarce. There’s an obvious incentive for companies to make such bold claims about their AI experiences, but as EY’s authors point out, this progress is likely “more perception than reality.”
We are still very early in the AI lifecycle despite all the hype. And while standing still isn’t an option for any organization, AI adoption must be strategic and thoughtfully paced.
This is especially true for mature enterprises, which must temper vision with practical business realities such as legacy and on-premises systems, regulations, and reporting requirements. Without the startup luxury of moving fast and breaking things, innovation must thrive within specific constraints and conditions while meeting numerous stakeholder expectations.
And it’s not just about meeting requirements. Customer experience, brand reputation, and trust are all on the line.
The agentic AI adoption curve: crawl, walk, run, sprint
Experts predict that AI agents will make at least 15% of day-to-day work decisions by 2028. However, most enterprises remain in the early ‘crawl’ or ‘walk’ phases of adoption today, testing the waters in low-risk domains. Let’s break down what each phase looks like:
1. Crawl
In this initial phase, an agentic co-pilot (formerly known as a bot) is trained on data and used to respond to basic questions. Human guidance is essential, and the agent’s actions are limited to pre-programmed responses.
Example: A basic chatbot that answers FAQs on a company website.
2. Walk
In this phase, the agent shifts into an ‘assistant’ role, able to make certain recommendations or take specific, deterministic actions to empower the human user.
Example: A task-specialized agent that utilizes tools or APIs to execute tasks within a particular domain, such as an AI scheduling assistant that can book appointments based on availability and user preferences.
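To make the ‘walk’ phase concrete, here is a minimal, illustrative Python sketch of a scheduling assistant whose action space is deliberately narrow and deterministic: it can only look up availability and propose one booking for a human to approve. The data, function names, and in-memory “calendar” are hypothetical stand-ins, not any particular product’s API.

```python
# Illustrative 'walk'-phase assistant: narrow, deterministic actions,
# with a human approving the final booking. All names and data are hypothetical.

from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class SlotRequest:
    attendee: str
    duration_minutes: int
    earliest: datetime
    latest: datetime


# Stand-in for a real calendar API: a static list of free slots per attendee.
FREE_SLOTS = {
    "alice@example.com": [
        datetime(2025, 9, 1, 10, 0),
        datetime(2025, 9, 1, 14, 30),
    ],
}


def find_open_slots(request: SlotRequest) -> list[datetime]:
    """Deterministic availability lookup within the requested window."""
    return [
        slot
        for slot in FREE_SLOTS.get(request.attendee, [])
        if request.earliest <= slot
        and slot + timedelta(minutes=request.duration_minutes) <= request.latest
    ]


def propose_booking(request: SlotRequest) -> dict:
    """The agent proposes exactly one action; a human approves or rejects it."""
    slots = find_open_slots(request)
    if not slots:
        return {"action": "none", "reason": "no availability in window"}
    return {
        "action": "book",
        "attendee": request.attendee,
        "start": slots[0].isoformat(),
        "duration_minutes": request.duration_minutes,
        "requires_human_approval": True,  # the 'walk' phase keeps a human in the loop
    }


if __name__ == "__main__":
    request = SlotRequest(
        attendee="alice@example.com",
        duration_minutes=30,
        earliest=datetime(2025, 9, 1, 9, 0),
        latest=datetime(2025, 9, 1, 17, 0),
    )
    print(propose_booking(request))
```

The design choice that defines this phase: the agent’s output is a proposal, not an executed action, so the human user stays in control of anything that actually changes state.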
3. Run
The assistant takes on more of an advisory role in this phase, with the ability to understand and adapt to more nuanced situations.
Example: A customer service agent that understands the context of a conversation and provides relevant information based on user history and current needs. Or a code assistant that suggests completions as developers write code.
4. Sprint
In this advanced phase, AI agents operate with high autonomy and minimal human intervention, making complex decisions and managing intricate processes.
Example: An AI-powered supply chain manager that autonomously adjusts orders and logistics based on real-time demand and market conditions.
Anthropic’s warning on agentic misalignment
AI agents introduce an entirely new layer of complexity in an enterprise IT environment as dynamic machine identities with increasingly human-like autonomy. Rather than passive information-processing tools, AI agents are machine identities that perceive, reason, and act based on defined goals.
Now imagine securing thousands or even millions of these entities: ensuring proper authentication with systems (and other agents), controlling their privileged access to sensitive data, and maintaining strict lifecycle control to avoid rogue agents with lingering permissions across diverse systems and geographies.
A lot can go wrong if they’re not properly controlled and monitored at scale. This isn’t just theoretical; recent research demonstrates the potential for AI agents to act in unexpected and even dangerous ways.
Anthropic’s recent experiment shows just how troubling things can get. The company tested 16 leading AI models in simulated corporate environments, giving them access to company emails and the ability to act autonomously. In at least some cases, all tested models deliberately resorted to ‘malicious insider behaviors’—including blackmail and even actions that could lead to human death—to avoid replacement or achieve their goals.
Without proper guardrails and oversight, AI agents can be manipulated into executing malicious commands, leaking data, escalating privileges, or granting unauthorized access faster than a human ever could. These manipulation risks are a top-of-mind issue for many security leaders.
The CyberArk 2025 Identity Security Landscape finds that manipulation and access concerns are today’s primary roadblocks to AI agent adoption. Yet it also highlights a gap between understanding and action: Though AI is the No. 1 creator of new identities with privileged and sensitive access in 2025, 68% of respondents lack identity security controls for AI.
Both studies highlight some important themes:
- The privileged access of AI agents represents an entirely new threat vector. Machines that behave like humans require both human and machine security controls. Each agent must be uniquely identified, authenticated, and governed just like a human user, but with the added rigor required for machine-scale operations. Built-in security features from your cloud provider won’t cut it. Organizations need a single, unified layer that provides continuous visibility into all AI agent activity to tackle this growing challenge (see the sketch after this list for what per-agent identity controls could look like).
- Secure development is critical. Developers who write the code and build the models behind AI systems must adhere to robust security practices and ensure that training data is clean and representative.
- Secure deployment is too. When an AI system moves from the testing phase to an operational environment where it interacts with users or other systems, that environment must also adhere to strict security measures to protect it from tampering, unauthorized access, and manipulation.
- Humans must stay in the loop. Just think of how long autonomous cars have been around (nearly 40 years!). Nothing can—or should—entirely replace the human element in high-stakes applications and business-critical environments. Human oversight is crucial for detecting anomalies, preventing unintended consequences, and ensuring that AI agents are aligned with ethical principles and business objectives.
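To ground the first theme above, here is a minimal, illustrative sketch of what treating each AI agent as a governed machine identity could look like: a unique identifier tied to an accountable owner, least-privilege scopes, short-lived credentials, and an audit trail for every access decision. The class and function names are hypothetical and do not represent any vendor’s implementation.

```python
# Illustrative-only sketch: each AI agent gets a unique, expiring machine
# identity with scoped permissions, and every access decision is logged.
# Names and structures are hypothetical, not a real product's API.

import secrets
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass
class AgentIdentity:
    agent_id: str            # unique, non-reusable identifier
    owner: str               # accountable human or team
    scopes: set[str]         # least-privilege permissions
    expires_at: datetime     # lifecycle control: no lingering access
    credential: str = field(repr=False, default="")


AUDIT_LOG: list[dict] = []


def register_agent(owner: str, scopes: set[str], ttl_hours: int = 8) -> AgentIdentity:
    """Create a uniquely identified agent with short-lived, scoped credentials."""
    identity = AgentIdentity(
        agent_id=f"agent-{uuid.uuid4()}",
        owner=owner,
        scopes=scopes,
        expires_at=datetime.now(timezone.utc) + timedelta(hours=ttl_hours),
        credential=secrets.token_urlsafe(32),
    )
    AUDIT_LOG.append({"event": "registered", "agent": identity.agent_id, "owner": owner})
    return identity


def authorize(identity: AgentIdentity, requested_scope: str) -> bool:
    """Every agent action is checked against scope and expiry, then logged."""
    allowed = (
        datetime.now(timezone.utc) < identity.expires_at
        and requested_scope in identity.scopes
    )
    AUDIT_LOG.append(
        {"event": "access", "agent": identity.agent_id,
         "scope": requested_scope, "allowed": allowed}
    )
    return allowed


if __name__ == "__main__":
    agent = register_agent(owner="scheduling-team", scopes={"calendar:read"})
    print(authorize(agent, "calendar:read"))   # True: within scope and before expiry
    print(authorize(agent, "payments:write"))  # False: scope was never granted
```

In a real enterprise, this logic would live in an identity security platform rather than in application code, but the principles are the same: unique identity, least-privilege access, enforced expiry, and continuous, auditable visibility into every agent action.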
Why agentic AI demands caution
While the potential of agentic AI systems is captivating, the path to enterprise adoption requires careful navigation. Most organizations are in the early stages, grappling with use cases and mounting security concerns. A thoughtful, phased approach—embedding responsible governance at every step—will build trust and drive business outcomes.
Omer Grossman is the global chief information officer at CyberArk. You can check out more content from Omer on CyberArk’s Security Matters | CIO Connections page.