2024 has proved historic for technology and cybersecurity, and we’re still some distance from the finish line. We’ve witnessed everything from advancements in artificial intelligence (AI) and large language models (LLMs) to brain-computer interfaces (BCIs) and humanoid robots. Alongside these innovations, new attack vectors like AI model jailbreaking and prompt hacking have emerged. We also experienced the single largest IT outage the world has ever seen.
These recent events underscore a crucial reality: the future is unpredictable. As long as no (accurate) crystal ball exists to guide security practitioners, we can only speculate on what’s to come based on past experiences. But one thing is certain: 2025 will bring its own set of challenges and opportunities.
With that in mind, here are my predictions for cybersecurity in 2025.
1. The convergence of the physical and digital identity realms will become undeniable in 2025.
By the end of 2025, the distinction between physical and digital identities will have vanished entirely, marking a pivotal shift in how we perceive our identities. Historically, physical and digital identities coexisted separately—physical in tangible contexts and digital in online profiles. This new paradigm demands unified security solutions integrating physical and digital protections into a cohesive framework.
Modern life intertwines IoT devices, smart environments, social media and biometric systems, creating a larger, interconnected identity. A breach in one realm, such as a hacked smart device or compromised online account, immediately impacts the other. Whether it’s a stolen credit card in the physical world or cryptocurrency theft online, the consequences ripple across both. This interlinking highlights the vulnerabilities of fragmented security approaches and underscores the need for unified architectures that protect individuals holistically.
Emerging technologies will drive this transformation. Proven concepts like Zero Trust architecture, decentralized identifiers (DIDs), and AI-driven threat detection will converge in platforms that seamlessly secure hybrid identities. These solutions will eliminate the artificial divide between physical and digital safeguards, treating identity as a singular whole. By integrating protections at all access points, interactions and data exchanges, these systems will enhance resilience against sophisticated cyber and physical threats.
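To ground the DID piece of that stack, here’s a minimal sketch of what a decentralized identifier can look like in practice, following the W3C DID Core data model. The identifier, key value and service endpoint below are invented for illustration; real DID documents are resolved and verified through a DID method, not hard-coded.

```python
# A minimal W3C-style DID document, expressed as a Python dict.
# The identifier, key material and endpoint are illustrative only.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:alice123",  # hypothetical DID
    "verificationMethod": [{
        "id": "did:example:alice123#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:alice123",
        "publicKeyMultibase": "z6Mk...",  # placeholder public key
    }],
    "authentication": ["did:example:alice123#key-1"],
    "service": [{
        # A single verifier endpoint could anchor both digital logins and
        # physical access (e.g., a smart lock), which is the "unified
        # identity" idea described above.
        "id": "did:example:alice123#access",
        "type": "CredentialVerifier",
        "serviceEndpoint": "https://verifier.example.com",
    }],
}

def holds_key(doc: dict, key_id: str) -> bool:
    """Check whether a DID document authorizes a given key for authentication."""
    return key_id in doc.get("authentication", [])

print(holds_key(did_document, "did:example:alice123#key-1"))  # True
```

The point of the sketch: one portable, cryptographically verifiable identity record can back both online and in-person verification, instead of separate credential silos for each realm.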
Unifying physical and digital identity security will shift our perspective on protection. We will view protection as preserving a person’s integrity as a whole, not just as a collection of physical and digital components. This transition will reduce risks and build trust in a complex, interconnected world.
The convergence of identities is already underway. In this highly interdependent future, safeguarding our integrated selves will require forward-looking solutions and a shift in individual perspectives to ensure personal and societal resilience against evolving threats.
2. In 2025, companies will adopt private AI models for better control and value while managing the identities spawned by generative AI.
Organizations want the benefits of AI, but they don’t want to give up their data. For good reason, too. Leaks of proprietary information, AI hallucinations and the assimilation of every prompt into vendors’ training data have driven companies to implement smaller, controllable counterparts to big-tech AI. That’s why, in 2025, I predict that organizations will reclaim control over their data by adopting private AI models.
The rush to integrate AI into everything has produced AI-enabled products that often lack real-world benefits (I’m looking at you, AI-enabled pillows and toilets). Sometimes, a cheeseburger is just a cheeseburger; the trend has essentially jumped the shark. In response, companies appear to be shifting toward smaller, controllable AI models to mitigate risks like data leaks and AI hallucinations.
The AI industry is currently dominated by major players like OpenAI’s ChatGPT, Microsoft Copilot, Google Gemini and Amazon Q. However, the lack of standardization or default cross-model tools forces organizations to choose a specific AI horse to bet on and hope it evolves favorably.
Recent reports suggest that OpenAI’s GPT-5, initially slated for a 2024 release, is delayed; Sam Altman has confirmed that no release is expected in 2025, reportedly due to ongoing performance issues. This raises investor concerns and signals a potential slowdown in AI development. Consequently, I predict that 2025 will see accelerated interest in and adoption of private, in-house AI language models, or “PrivateGPTs,” which promise better data control, security and operational efficiency.
The launch of platforms like OpenAI’s GPT Store will likely accelerate this trend, enabling companies to build and deploy customized AI models. This shift mirrors the growing “cloud repatriation” movement, in which businesses bring critical cloud-based processes back in-house for better control, data security, cost-efficiency and regulatory compliance.
By adopting PrivateGPTs, organizations can implement hybrid AI architectures that limit interactions with external, public models—thus reducing the risk of data exposure. This approach ensures that sensitive information remains protected while still taking advantage of AI’s power and flexibility. In 2025, organizations will demand AI for its value, not just for the sake of having AI, leading to more practical and impactful AI deployments.
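As a rough sketch of what such a hybrid architecture could look like, the snippet below routes prompts that match naive sensitivity rules to an in-house model and lets everything else reach a public one. Both endpoint URLs and the regex rules are hypothetical; a production gateway would rely on a real data-loss-prevention classifier and policy engine rather than a handful of patterns.

```python
import re

# Hypothetical endpoints; names are illustrative, not any product's real API.
PRIVATE_MODEL_URL = "https://privategpt.internal.example.com/v1/chat"
PUBLIC_MODEL_URL = "https://api.public-llm.example.com/v1/chat"

# Naive sensitivity rules, purely for illustration. The routing principle
# is what matters: sensitive prompts never leave the corporate boundary.
SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",                              # SSN-like number
    r"(?i)\b(confidential|proprietary|internal[- ]only)\b",  # labeled content
]

def is_sensitive(prompt: str) -> bool:
    """Flag prompts that appear to contain protected information."""
    return any(re.search(p, prompt) for p in SENSITIVE_PATTERNS)

def route(prompt: str) -> str:
    """Send sensitive prompts to the in-house model; the rest may go public."""
    return PRIVATE_MODEL_URL if is_sensitive(prompt) else PUBLIC_MODEL_URL

print(route("Summarize this INTERNAL-ONLY roadmap"))  # private endpoint
print(route("Write a haiku about autumn"))            # public endpoint
```

The design choice here is the one the prediction hinges on: the public model is an opt-in convenience at the edge, while the private model is the default for anything that touches company data.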
Additionally, we need to consider the identities spawned by AI models and chatbots, including the massive number of machine identities created by generative AI. These new identities will require robust management and security measures to prevent misuse and ensure data integrity.
Simply put, in 2025, we’ll see a move away from novelty-driven AI integration toward sustainable, practical AI deployment, with PrivateGPTs playing a crucial role in this transformation.
3. Autonomous AI-driven cyberattacks could emerge in 2025, enabling individuals to conduct advanced persistent threats (APTs) independently, on par with nation-state-sponsored groups.
I really hope I’m the most wrong I’ve ever been about anything (ever) with this prediction, but unfortunately, the writing appears to be on the wall. Until now, people have largely understood APTs as orchestrations by nation-states or groups of malicious actors collaborating to achieve a shared objective. These threats typically rely on coordinated teams and sophisticated infrastructure for extended, strategic cyber operations. But what if we introduced LLMs into this equation as advisory tools? With access to such advanced AI technology, could a single individual potentially operate as an APT on par with a nation-state-sponsored group?
By programmatically leveraging LLMs, a lone attacker could potentially automate many, if not all, tasks typically requiring an entire hacker collective. The necessity for collaboration could diminish, allowing an individual to conduct large-scale, intricate cyber operations independently. Moreover, nation-state attackers could potentially develop standalone AI/LLM systems designed from the ground up to function as fully autonomous cyberweapons. This scenario is no longer theoretical—it’s a possibility that modern technology is beginning to enable.
Recent advancements suggest that the rise of offensive AI in cyberwarfare is increasingly probable, especially among nation-states with the resources to develop it. In November, Google announced that Big Sleep, the company’s AI agent for vulnerability discovery, had autonomously found a zero-day vulnerability in SQLite. This breakthrough demonstrates how AI-driven systems can identify complex vulnerabilities faster and more precisely than human analysts, underscoring the potential for AI to conduct sophisticated offensive cyber operations beyond current human capabilities.
Given Google’s demonstrated capabilities, adversarial nation-states are likely already advancing similar systems to gain strategic advantage. With cyberwarfare emerging as the primary tool of modern conflict, we may see these weaponized AI tools deployed in live attacks as soon as 2025.
As countries debate regulatory measures for AI and LLMs, reaching a global consensus remains difficult in today’s polarized geopolitical landscape. Even if some nations commit to restrictions, others may pursue development unchecked. Without an international framework to prevent misuse, 2025 could mark the beginning of an era where AI-driven cyberattacks become more frequent and sophisticated, pushing the boundaries of modern warfare.
The increasing use of offensive AI poses significant security threats, such as generating malicious content, launching sophisticated cyberattacks and manipulating sensitive data. To combat this, more advanced AI systems are needed to detect, counteract and prevent such misuse in real time. These systems can bolster cybersecurity by identifying threats faster than traditional methods and neutralizing attacks before they escalate.
Strengthening AI defenses is essential to safeguarding critical infrastructure, sensitive information and public trust in digital systems.
Preparing for 2025’s Cybersecurity Challenges
Reflecting on the past year, it’s clear that no one could have accurately predicted the extent of the advancements we’ve witnessed. From AI performing on par with top students in the International Mathematics Olympiad to the weaponization of LLMs, the progress has been astounding. Looking ahead, the future remains uncertain, and any forecast is partly conjecture. Even so, we must continue to navigate this path: 2025 will undoubtedly bring new zero-day exploits and attacks that we can’t even name yet.
This uncertainty highlights the need for proactive security measures. While we can’t predict exactly what new protections will be needed, we know that addressing key vulnerabilities will be critical. To combat the evolving threats, we must reinforce our security posture with strategies such as zero standing privileges (ZSP)—ensuring access is granted only when necessary—alongside strong identity security foundations, adaptive multi-factor authentication (MFA), robust privileged access management (PAM), single sign-on (SSO) and improved endpoint management.
These measures will help us stay ahead of emerging threats and better defend against the unknowns that lie ahead.
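To make one of those strategies concrete, here is a minimal sketch of zero standing privileges as just-in-time, time-boxed elevation. The function names and the 15-minute TTL are illustrative assumptions, and the approval and MFA steps a real PAM product would enforce before granting elevation are omitted.

```python
import time
from dataclasses import dataclass

@dataclass
class Grant:
    user: str
    role: str
    expires_at: float  # epoch seconds

# With zero standing privileges, no one holds admin rights at rest;
# access is granted on demand and expires automatically.
_active_grants: list[Grant] = []

def request_elevation(user: str, role: str, ttl_seconds: int = 900) -> Grant:
    """Issue a short-lived grant (15 minutes by default). A real system
    would require approval and MFA checks before this point (omitted)."""
    grant = Grant(user, role, time.time() + ttl_seconds)
    _active_grants.append(grant)
    return grant

def is_authorized(user: str, role: str) -> bool:
    """Authorize only against unexpired grants, pruning stale ones."""
    now = time.time()
    _active_grants[:] = [g for g in _active_grants if g.expires_at > now]
    return any(g.user == user and g.role == role for g in _active_grants)

print(is_authorized("alice", "db-admin"))  # False: nothing standing
request_elevation("alice", "db-admin")
print(is_authorized("alice", "db-admin"))  # True, until the TTL lapses
```

However a given product implements it, the property worth testing for is the same: before a request and after expiry, the privileged role simply does not exist for the user.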
Len Noe is CyberArk’s resident Technical Evangelist, White Hat Hacker and Transhuman. His first book, “Human Hacked: My Life and Lessons as the World’s First Augmented Ethical Hacker,” was published in 2024.