Inside CyberArk Labs: the evolving risks in AI, browsers and OAuth

January 8, 2026 Andy Thompson


In 2025, we saw attackers get bolder and smarter, using AI to amplify old tricks and invent new ones. The reality is that innovation cuts both ways: the same AI that makes defenders' tools more capable also makes attackers' tools more dangerous. Last year proved that every leap forward in technology brings new risks right alongside the rewards.

At CyberArk Labs, our mission is to uncover hidden vulnerabilities and provide actionable insights that help organizations fortify their defenses. In 2025, we again put that mission into action, analyzing everything from the risks of agentic AI to browser vulnerabilities and the reverse engineering of security tools. Threat actors kept raising the bar, and we responded by turning our discoveries into practical guidance for defenders.

This roundup looks back at six of our most impactful research projects from 2025. These stories show how the threat landscape is evolving and offer a preview of what’s coming next in cybersecurity and identity security.

Let’s dive in, starting with a groundbreaking look at agentic AI and how it’s reshaping the security paradigm.

How agentic AI is reshaping cybersecurity

Agentic AI refers to systems powered by large language models (LLMs) that can autonomously perform tasks, make decisions, and interact with other tools and systems. These technologies are rapidly changing the cybersecurity landscape. They promise to streamline workflows and boost productivity, but they also introduce new risks.

The biggest lesson from our research is simple: never trust an LLM.


These models can be weaponized in ways we are only beginning to understand. Attackers can manipulate agentic AI through both traditional vulnerabilities and new prompt-based exploits, turning powerful automation into a wide-open door for threats. Do not expect your LLM to enforce security controls. Instead, build robust guardrails and test them relentlessly. Agentic AI operates through identities and tokens. If you do not tightly scope privileges, monitor usage, and rotate credentials, an agent can overstep and turn a helpful workflow into unauthorized access. Guardrails start with identity: least privilege, strong token hygiene, and continuous validation.
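To make that concrete, here is a minimal sketch (our own illustration with hypothetical names, not a specific CyberArk or vendor API) of a guardrail enforced in code rather than in the prompt: the agent's identity carries an explicit set of scopes, and any tool call outside those scopes is denied by default.

```python
# Minimal guardrail sketch: the scope check lives in code, outside the LLM,
# so a manipulated prompt cannot talk the agent into skipping it.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str
    allowed_scopes: set[str] = field(default_factory=set)  # least privilege by default

def run_tool(agent: AgentIdentity, tool_name: str, required_scope: str, action):
    """Deny by default: refuse any call whose scope the agent was not granted."""
    if required_scope not in agent.allowed_scopes:
        raise PermissionError(f"{agent.name} lacks scope '{required_scope}' for {tool_name}")
    return action()

# Example: a reporting agent may read tickets but not administer users.
reporter = AgentIdentity("report-bot", {"tickets:read"})
run_tool(reporter, "ticket_reader", "tickets:read", lambda: "ok")      # allowed
# run_tool(reporter, "user_admin", "users:delete", lambda: "boom")     # raises PermissionError
```

Pair checks like this with short-lived, regularly rotated credentials so that even a compromised agent has limited reach.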

Agentic AI is powerful, but it can also create significant opportunities for attackers.

Read the research: “Agents under attack: Threat modeling agentic AI.”

Using AI and LLMs to find real vulnerabilities faster

We set out to solve a problem every security team knows too well: drowning in false positives from static analysis tools. Vulnhalla is our answer, a tool that layers the reasoning power of an LLM on top of CodeQL, a popular static analysis engine. By combining CodeQL's precision with the LLM's ability to sift through noise, Vulnhalla helps teams focus on the vulnerabilities that actually matter.

In just two days and with a budget of less than $80, Vulnhalla uncovered vulnerabilities (CVEs) across open-source projects, including the Linux Kernel, FFmpeg, and Redis. The message is clear: AI and LLMs aren't just for researchers. Anyone using automated analysis tools or security software should recognize that LLMs turn those tools into double-edged swords, amplifying their capabilities and their risks alike.
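To illustrate the general pattern only (a simplified sketch, not Vulnhalla's actual implementation; ask_llm below is a placeholder for whatever model client you use), the idea is to parse CodeQL's SARIF output and let an LLM vote on which findings look like true positives:

```python
# Sketch of LLM-assisted triage over CodeQL SARIF output. This is a pattern
# illustration, not Vulnhalla's code; ask_llm() is a stand-in for a real client.
import json

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def triage_sarif(sarif_path: str) -> list[dict]:
    with open(sarif_path) as f:
        sarif = json.load(f)
    likely_real = []
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            message = result.get("message", {}).get("text", "")
            location = (result.get("locations") or [{}])[0]
            prompt = (
                "You are triaging a static-analysis finding.\n"
                f"Finding: {message}\nLocation: {json.dumps(location)}\n"
                "Answer TRUE_POSITIVE or FALSE_POSITIVE, with one sentence of reasoning."
            )
            if "TRUE_POSITIVE" in ask_llm(prompt):
                likely_real.append(result)
    return likely_real
```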

Finding the bug is half the battle. Durable fixes pair code changes with identity hygiene: reduce standing privileges, right-size service account scopes, and enforce just-in-time (JIT) access elevation where needed.

Read the research: “Vulnhalla: Picking the true vulnerabilities from the CodeQL haystack.”

How tool poisoning exposes new AI attack surfaces

As organizations grant AI more autonomy, they create new attack surfaces that didn't previously exist. Our research focused on the Model Context Protocol (MCP), a framework that enables large language models to interact with external tools and services. MCP allows AI systems to perform complex tasks by granting them access and privileges within an organization's environment.

However, this convenience comes with significant risk. We found that attackers can exploit MCP through tool poisoning, a technique in which malicious actors manipulate tool descriptions, schemas, or outputs to trick AI into executing harmful actions. By poisoning the tools that AI relies on, attackers can bypass safeguards and drive unintended behaviors.

In practice, MCP is identity at work: who the AI is allowed to be, what it can call, and how far those permissions extend. Tool poisoning works best against over-privileged identities, so scope credentials narrowly, isolate tools behind least-privilege policies, and require identity-aware checks before an AI executes sensitive actions. The takeaway: AI is no longer just a tool; it is an attack surface that threat actors can target.
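One way to make such checks concrete (an illustrative sketch of our own, not part of the MCP specification or SDK) is to fingerprint each tool's advertised definition at review time and refuse to expose the tool to the agent if that definition later changes:

```python
# Pin a hash of each approved tool definition so a later (possibly poisoned)
# change to its description or schema is caught before the agent can call it.
import hashlib
import json

def tool_fingerprint(tool_def: dict) -> str:
    canonical = json.dumps(tool_def, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Populated during security review of the MCP server's advertised tools.
APPROVED_TOOLS: dict[str, str] = {
    "read_file": "<hash recorded at review time>",
}

def verify_tool(name: str, tool_def: dict) -> None:
    expected = APPROVED_TOOLS.get(name)
    if expected is None or tool_fingerprint(tool_def) != expected:
        raise RuntimeError(
            f"Tool '{name}' is unapproved or its definition changed; not exposing it to the agent"
        )
```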

Our findings highlight the importance of implementing zero trust principles and constant validation when integrating AI systems into business operations.

Read the research: “Poison everywhere: The risks of MCP vulnerabilities.”

Why OAuth misconfigurations put browser sessions at risk

Browsers are always going to be a prime target for attackers—whether it’s OAuth tokens on the server side or cookie theft on the local side. The days when password managers and multi-factor authentication (MFA) were enough are over. Now, we need stronger endpoint controls to lock down our browsing sessions. OAuth underpins identity on the web, and weak implementations turn identity flows into attack paths—like session hijacking, pre-account takeover, and even MFA bypass.

To help tackle this, we developed oauth-hunter: an open-source tool designed to scan websites for OAuth misconfigurations and make it easier to spot vulnerabilities that attackers could exploit.

Our oauth-hunter tool helps teams find and fix identity-flow weaknesses fast, from lax redirect URI validation to missing PKCE (Proof Key for Code Exchange). The research behind it revealed just how widespread these issues are and why organizations can't afford to take shortcuts on verification.
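As a simplified example of what this kind of scanning looks for (our illustration, not oauth-hunter's actual code), the check below inspects an authorization request URL for a state parameter and a PKCE code_challenge, two controls whose absence commonly enables CSRF and authorization-code interception:

```python
# Flag common OAuth authorization-request weaknesses in a single URL.
from urllib.parse import urlparse, parse_qs

def check_authorization_url(url: str) -> list[str]:
    params = parse_qs(urlparse(url).query)
    findings = []
    if "state" not in params:
        findings.append("missing 'state' parameter (CSRF protection)")
    if "code_challenge" not in params:
        findings.append("missing PKCE 'code_challenge'")
    if params.get("response_type") == ["token"]:
        findings.append("deprecated implicit flow ('response_type=token')")
    return findings

print(check_authorization_url(
    "https://idp.example.com/authorize?client_id=app&response_type=code"
    "&redirect_uri=https://app.example.com/cb"
))
# -> ["missing 'state' parameter (CSRF protection)", "missing PKCE 'code_challenge'"]
```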

Read the research: “How secure is your OAuth? Insights from 100 websites.”

Malware hidden in game cheats: Lessons from the StealC campaign

Gamers looking for an edge are prime targets for cybercriminals, and our research into the StealC malware campaign revealed just how creative attackers can get. StealC is a sophisticated operation where malware is disguised as game cheats and mods, luring users who are willing to disable security controls for a perceived advantage.

Once installed, StealC steals credentials and session tokens, undermining identity and giving attackers durable access that bypasses regular authentication checks. Our investigation uncovered over 250 malware samples and showed that attackers amassed more than $135,000 in stolen assets by exploiting gamers’ desire for shortcuts.

This research serves as a clear warning: if you disable security controls or download untrusted software, you are putting a target on your back and making yourself vulnerable to attack.

Read the research: “Cheaters never win: Targeting gamers with StealC and cryptojacking.”

Microsoft EPM under scrutiny: Lessons from a privilege escalation vulnerability

Even security tools are not immune to identity abuse. Our research into Microsoft's Endpoint Privilege Management (EPM) uncovered a critical vulnerability: a time-of-check/time-of-use (TOCTOU) race condition, in which the state a program verifies can change before the program acts on it. EPM is designed to allow users to perform privileged tasks without having local administrator rights. However, we found that attackers could exploit this race condition to escalate privileges and gain unauthorized access.
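For readers unfamiliar with the bug class, here is a generic illustration (a POSIX file-access example of our own, not the EPM flaw itself): checking a resource and then using it in two separate steps leaves a window in which an attacker can swap the target, for instance via a symlink.

```python
# Generic TOCTOU illustration (POSIX): the racy version checks the path and
# then opens it, leaving a gap; the safer version validates the handle it
# actually opened.
import os

def racy_read(path: str) -> bytes:
    if os.path.islink(path):             # time of check
        raise ValueError("refusing symlink")
    with open(path, "rb") as f:          # time of use: path may have been swapped
        return f.read()

def safer_read(path: str) -> bytes:
    # O_NOFOLLOW makes the open itself fail if the final path component is a
    # symlink, so check and use happen in one step on the same file handle.
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    try:
        return os.read(fd, os.fstat(fd).st_size)
    finally:
        os.close(fd)
```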

Although Microsoft has patched this vulnerability, our findings underscore a broader lesson: even products built to enhance security can become attack vectors if not rigorously tested and maintained. The fix is two-fold: patch fast and pair elevation controls with identity-centric safeguards like JIT access and step-up verification.

Every tool, no matter who built it, requires ongoing scrutiny to ensure it does not introduce new risks.

Read the research: “Defeating Microsoft EPM: A tale of a LPE vulnerability.”

The road ahead for defenders and innovators

As we move through 2026, risks are not slowing down, especially as AI-driven browsers and agents become mainstream. We are already seeing these tools in action, and CyberArk Labs remains focused on turning research into practical defenses.

The common thread is identity. Credentials, tokens, and scoped permissions sit behind almost every attack path we study. Keep identity front and center: reduce standing privilege, scope access tightly, monitor sessions continuously, and require step-up checks when behavior changes.

The bottom line is to stay vigilant, build strong guardrails, and never underestimate how quickly the threat landscape can shift.

Defenders and innovators like you are at the heart of this story. For more research, tools, and strategies on the evolving threat landscape, check out the CyberArk Labs Threat Research Blog.

Andy Thompson is a senior offensive research evangelist at CyberArk Labs.
