Threat Research Blog

  • Unlocking New Jailbreaks with AI Explainability

    TL;DR In this post, we introduce our “Adversarial AI Explainability” research, a term we use to describe the intersection of AI explainability and adversarial attacks on Large Language Models...

  • Agents Under Attack: Threat Modeling Agentic AI

    Introduction The term “Agentic AI” has recently gained significant attention. Agentic systems are set to fulfill the promise of Generative AI—revolutionizing our lives in unprecedented ways. While...

  • Jailbreaking Every LLM With One Simple Click

    In the past two years, large language models (LLMs), especially chatbots, have exploded onto the scene. Everyone and their grandmother are using them these days. Generative AI is pervasive in...

  • Captain MassJacker Sparrow: Uncovering the Malware’s Buried Treasure

    Cryptojacking malware is a type of malware that tries to steal cryptocurrencies from users on infected machines. Curiously, this kind of malware isn’t nearly as famous as ransomware or even...

  • Let’s Be Authentik: You Can’t Always Leak ORMs

    Introduction Identity providers (IdPs) or Identity and Access Management (IAM) solutions are essential for implementing secure and efficient user authentication and authorization in every...

  • How Secure Is Your OAuth? Insights from 100 Websites

    You might not recognize the term “OAuth,” otherwise known as Open Authorization, but chances are you’ve used it without even realizing it. Every time you log into an app or website using Google,...

  • Teach Yourself Kubiscan in 7 Minutes (or Less…)

    While Kubernetes’ Role-based access control (RBAC) authorization model is an essential part of securing Kubernetes, managing it has proven to be a significant challenge — especially when dealing...

  • ByteCodeLLM – Privacy in the LLM Era: Byte Code to Source Code

    TL;DR ByteCodeLLM is a new open-source tool that harnesses the power of Local Large Language Models (LLMs) to decompile Python executables. Furthermore, and importantly, it prioritizes data...

  • White FAANG: Devouring Your Personal Data

    Abstract: Privacy is a core aspect of our lives. We have the fundamental right to control our personal data, physically or virtually. However, as we use products from...

  • Discovering Hidden Vulnerabilities in Portainer with CodeQL

    Recently, we researched a project on Portainer, the go-to open-source tool for managing Kubernetes and Docker environments. With more than 30K stars on GitHub, Portainer gives you a user-friendly...

  • Anatomy of an LLM RCE

    As large language models (LLMs) become more advanced and are granted additional capabilities by developers, security risks increase dramatically. Manipulated LLMs are no longer just a risk of...

  • A Security Analysis of Azure DevOps Job Execution

    In software development, CI/CD practices are now standard, helping to move code quickly and efficiently from development to production. Azure DevOps, previously known as Team Foundation Server...

  • AI Treason: The Enemy Within

    tl;dr: Large language models (LLMs) are highly susceptible to manipulation, and, as such, they must be treated as potential attackers in the system. LLMs have become extremely popular and serve...

  • A Brief History of Game Cheating

    Over the short history of video game cheating, both cheaters and game developers have evolved in many ways; this includes everything from modification of important game variables (like health) by...

  • Double Dipping Cheat Developer Gets Caught Red-Handed

    Following our post “A Brief History of Game Cheating,” it’s safe to say that cheats, no matter how lucrative or premium they might look, always carry a degree of danger. Today’s story revolves...

  • Identity Crisis: The Curious Case of a Delinea Local Privilege Escalation Vulnerability

    During a recent customer engagement, the CyberArk Red Team discovered and exploited an Elevation of Privilege (EoP) vulnerability (CVE-2024-39708) in Delinea Privilege Manager (formerly Thycotic...

  • How to Bypass Golang SSL Verification

    Golang applications that make HTTPS requests have a built-in SSL verification feature enabled by default. In our work, we often encounter Golang applications that make HTTPS requests, and we have...

  • The Current State of Browser Cookies

    What Are Cookies? When you hear “cookies,” you may initially think of the delicious chocolate chip ones. However, web cookies function quite differently than their crumbly-baked counterparts....

  • You Can’t Always Win Racing the (Key)cloak

    Web Race Conditions – Success and Failure – a Keycloak Case Study: In today’s connected world, many organizations’ “keys to the kingdom” are held in identity and access management (IAM) solutions;...

  • Operation Grandma: A Tale of LLM Chatbot Vulnerability

    Who doesn’t like a good bedtime story from Grandma? In today’s landscape, more and more organizations are turning to intelligent chatbots or large language models (LLMs) to boost service quality...
