Imagine a sleek, high-tech sports car racing downhill without brakes. Now, imagine that car is actually the AI driving your business. Powerful yet precariously close to catastrophe. That’s why, as we accelerate AI adoption, including AI agents, we can’t afford to overlook security guardrails.
This fact was front and center during the recent “large-scale cyberattack” on DeepSeek, a strategic open-source AI player from China that’s been disrupting the global AI space.
Known colloquially as the “ChatGPT Killer” for its cutting-edge R1 and V3 large language models (LLMs), DeepSeek is challenging the dominance of AI giants like OpenAI, even overtaking ChatGPT in the App Store on Jan. 27.
As a result, tech stocks otherwise buoyed by AI optimism and promises of lighter-touch White House regulations stumbled. Notable names like NVIDIA, Microsoft and Google, along with several others in tangentially related infrastructure and energy sectors, felt the ripple effects in the market as this smaller, more cost-effective model took the world by storm.
However, shortly after its meteoric rise, DeepSeek’s claim to fame was dramatically interrupted when a distributed denial-of-service (DDoS) attack targeted its API and web chat platform. DeepSeek had to halt new user registrations, and the series of events sent further shockwaves through the tech sector, causing many to question whether proprietary AI juggernauts can maintain their market dominance.
The DeepSeek incident isn’t an isolated glitch in the AI ecosystem, as OpenAI—to name another—has been the target of its own cybersecurity incidents. But this instance does serve as a stark example of what can happen when innovative AI systems lack the safety nets needed to contain cybersecurity crises. And this issue doesn’t just stop at LLMs that write poems. Nor is it just about open-source versus paid models.
No, in a world where AI agents are poised to take on many business tasks, incidents like these show that AI systems are vulnerable to threats that already exist.
The solution? It’s time to have a serious conversation about implementing what I like to call an AI kill switch. In other words, we must protect AI—especially agentic AI—systems through machine identity controls. Machine identity is the ultimate AI agent kill switch.
The Double-Edged Sword of AI
AI has become a force of transformation, delivering breakthroughs in sectors like healthcare, finance, logistics and marketing—and agentic AI is set to make LLMs take action and do work. But all this dramatic progress comes with significant risk. Just as AI speeds up innovation, it also supercharges the capabilities of cybercriminals.
In the case of DeepSeek, several researchers have already identified vulnerabilities in its systems, including jailbreaks that enable malicious outputs like ransomware and even toxin development instructions, according to Forbes. Meanwhile, on Jan. 29, another research team discovered an exposed ClickHouse database leaking sensitive data, including user chat history, log streams, API secrets and operational details. They also found that the exposure allowed complete control of the database, as well as privilege escalation, without any authentication.
These are just a few examples of what NIST categorizes as Adversarial Machine Learning (AML) attacks, which exploit weaknesses in AI models during development, testing and deployment. The OWASP Top 10 for LLMs offers a similar taxonomy, and both frameworks flag tactics like evasion, poisoning, privacy breaches and infrastructure attacks as significant areas of concern for AI security. The OWASP list also highlights the potential for hackers to exploit APIs and gain unauthorized access to models and their capabilities.
These inherent risks are especially worrisome for security teams, a concern echoed in the Biden administration’s final cybersecurity executive order in early January.
We can also consider the figures from our 2024 research:
- 92% of security leaders have concerns about the use of AI-generated code.
- 77% of global security leaders worry about data poisoning, where attackers manipulate training data to skew AI outputs.
- 75% are deeply concerned about AI model theft.
These apprehensions aren’t going away, given the astounding level of attention that DeepSeek has received in the last week-plus alone. As this case showed, effective attacks don’t just disrupt operations; they have potential global ramifications.
That’s where the concept of an AI kill switch comes in.
What Is an AI Kill Switch—and Why Do We Need It?
AI presents significant potential to transform our world positively, but it must be protected. Whether it’s an attacker sneaking in and corrupting or even stealing a model, a cybercriminal impersonating AI to gain unauthorized access or some new form of attack we haven’t even thought of yet, security teams need to be on the front foot. This is why a “kill switch” for AI—based on the unique identity of individual models being trained, deployed and run—is more critical than ever.
Although the term might evoke images of a big red button hidden deep within your IT department, it’s actually much more powerful. It’s about creating mechanisms to pause, contain or disable AI systems—especially AI agents—when they behave unexpectedly or come under attack.
Think of it like the circuit breakers on power grids: protocols designed to prevent total collapse during overloads or missteps, such as an AI system generating harmful outputs or being exploited in real time by attackers.
Or, put another way, it’s like the brakes on that sports car mentioned earlier.
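To make the mechanics concrete, here is a minimal sketch in Python of what such a braking mechanism could look like: every agent action is gated on its identity still being trusted, and flipping the switch revokes that trust. The names here (KillSwitchRegistry, AgentRuntime) are hypothetical illustrations, not any specific product’s API.

```python
# Minimal, illustrative "kill switch" gate for an AI agent.
# KillSwitchRegistry and AgentRuntime are hypothetical names, not a real product API.
from dataclasses import dataclass, field


@dataclass
class KillSwitchRegistry:
    """Tracks which agent identities have had their trust revoked."""
    revoked: set = field(default_factory=set)

    def revoke(self, agent_id: str) -> None:
        # Flipping the switch: this agent's identity is no longer trusted.
        self.revoked.add(agent_id)

    def is_active(self, agent_id: str) -> bool:
        return agent_id not in self.revoked


@dataclass
class AgentRuntime:
    agent_id: str
    registry: KillSwitchRegistry

    def perform_action(self, action: str) -> str:
        # Every action is gated on the agent's identity still being trusted.
        if not self.registry.is_active(self.agent_id):
            raise PermissionError(f"agent {self.agent_id} has been disabled")
        return f"executed: {action}"


registry = KillSwitchRegistry()
agent = AgentRuntime(agent_id="billing-agent-01", registry=registry)
print(agent.perform_action("summarize invoices"))  # allowed while trusted
registry.revoke("billing-agent-01")                # anomaly detected: flip the switch
# agent.perform_action("summarize invoices")       # would now raise PermissionError
```

In a real environment, the “registry” would be your certificate authority or secrets manager, and revoking the machine identity is what actually flips the switch.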
Without a kill switch, incidents like the DeepSeek breach can spiral out of control, especially as AI, including AI agents, becomes so deeply integrated across businesses and workflows.
An effective AI kill switch can:
- Stop ongoing threats: Instantly terminate compromised AI systems to halt adversaries in their tracks.
- Protect sensitive data: Cut off access to anything sensitive before attackers can extract or manipulate it.
- Prevent escalation: Isolate threats to stop them from spreading through broader networks or supply chains.
The Missing Puzzle Piece: Machine Identity Security
How do you develop your “AI kill switch”? The key lies in securing the entire machine-driven ecosystem within which an AI operates. Machine identities—like digital certificates, access tokens and API keys—authenticate and authorize AI functions and their ability to interact with and access data sources. Simply put, LLMs and other AI systems are code, and code needs constant verification to prevent unauthorized access or rogue behavior.
Otherwise, if attackers compromise these identities, AI systems can become puppets under their control, building ransomware, scaling phishing campaigns and sowing general chaos. Machine identity security programs are designed to ensure the trustworthiness of AI systems, even as AIs scale to interact with complex networks and user bases—tasks that can and will be done autonomously via AI agents.
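As a rough illustration of that principle, and assuming a simple signed-token scheme rather than any particular vendor’s PKI or secrets manager, a per-agent machine identity check could look something like this:

```python
# Illustrative sketch: each AI agent gets a unique, short-lived, signed credential.
# The key handling, token format and helper names are assumptions for the example only.
import hashlib
import hmac
import time

SIGNING_KEY = b"replace-with-a-managed-secret"  # in practice, issued and rotated by a PKI/secrets manager
TOKEN_TTL_SECONDS = 300                         # short-lived: limits the blast radius of theft


def issue_identity(agent_id: str) -> str:
    """Issue a unique, expiring credential bound to one agent identity."""
    expires = str(int(time.time()) + TOKEN_TTL_SECONDS)
    payload = f"{agent_id}|{expires}"
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"


def verify_identity(token: str, expected_agent_id: str) -> bool:
    """Verify the token belongs to this agent, is unexpired and is untampered."""
    try:
        agent_id, expires, sig = token.split("|")
    except ValueError:
        return False
    payload = f"{agent_id}|{expires}"
    expected_sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(sig, expected_sig)
        and agent_id == expected_agent_id       # no credential sharing between agents
        and int(expires) > time.time()          # expired tokens are rejected
    )


token = issue_identity("support-agent-07")
assert verify_identity(token, "support-agent-07")
assert not verify_identity(token, "billing-agent-01")  # another agent cannot reuse it
```

The point isn’t the specific token format; it’s that every agent carries its own short-lived, verifiable credential, so a stolen or shared identity can be rejected and rotated quickly.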
Without clear governance or oversight, companies are left flying blind, and attackers can take advantage of this confusion, exploiting everything from data poisoning to backdoor vulnerabilities and beyond—a trend bubbling up faster than many realize.
Building an AI-Safe Future
To ensure we can safely utilize AI systems—including agentic AI—we must strike a balance between functionality and resilience. For example, NIST notes specific tradeoffs among privacy, accuracy and robustness.
Further recommendations:
1. Granular control over machine identities: Ensure that all machine identities tied to AI systems, such as certificates, are verifiable, up-to-date and continually monitored to prevent unauthorized access. And most importantly, each AI agent in operation must have a unique identity—agents cannot share credentials or impersonate humans.
2. Real-time monitoring: Integrate tools capable of identifying deviations or unusual behavior in AI models. Look for solutions that can provide actionable insights to help teams stay ahead of attackers.
3. Automatic rollback mechanisms: Give your organization the ability to revert AI systems to their last known safe states in the wake of suspicious activity, such as reverting a compromised chatbot to a pre-attack configuration (see the sketch after this list).
4. Cross-functional response protocols: Build multi-disciplinary response teams capable of “flipping” these kill switches and analyzing threats, ensuring seamless intervention during crises.
5. AI to secure AI: As cybersecurity threats increase and evolve, teams can’t track everything on their own. Incorporating AI can help automate processes, analyze anomalous behavior in real time and enrich threat detection and response.
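To illustrate the rollback idea from recommendation three, here is a minimal sketch that assumes a simple in-memory configuration snapshot; real deployments would version model weights, prompts and policies in whatever platform they already use, and the class name here is purely hypothetical.

```python
# Illustrative sketch of automatic rollback to a last known safe state.
# ModelDeployment and the config fields are placeholders, not a specific platform's API.
import copy


class ModelDeployment:
    def __init__(self, config: dict):
        self.config = config
        self._last_safe_config = copy.deepcopy(config)  # initial snapshot

    def mark_safe(self) -> None:
        """Snapshot the current state after it passes health and integrity checks."""
        self._last_safe_config = copy.deepcopy(self.config)

    def apply_change(self, **updates) -> None:
        self.config.update(updates)

    def rollback(self) -> None:
        """Revert to the pre-attack configuration when suspicious activity is detected."""
        self.config = copy.deepcopy(self._last_safe_config)


chatbot = ModelDeployment({"model": "support-chatbot-v3", "system_prompt": "vetted prompt"})
chatbot.mark_safe()
chatbot.apply_change(system_prompt="injected malicious prompt")  # simulated compromise
chatbot.rollback()                                               # restore the known-good state
assert chatbot.config["system_prompt"] == "vetted prompt"
```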
The DeepSeek Attack Is a Resounding Alarm for Both the Cybersecurity and AI Communities
Companies, governments and researchers must move beyond reactionary measures and collaborate to build proactive frameworks that adapt as fast as AI-powered adversaries do.
It all starts with machine identity security—the foundation for building trust and resilience in an AI-driven world. And it’s a foundation that becomes even more important as we factor in agentic AI.
The economic, technological and societal stakes are high. But as the saying goes, “An ounce of prevention is worth a pound of cure.”
And for AI, especially AI agents, the brakes need to be as powerful as the engine under the hood.
Kevin Bocek is senior vice president of innovation at Venafi, a CyberArk company.