AI, ChatGPT and Identity Security’s Critical Human Element

April 6, 2023 Andy Thompson


In 1999, a far-fetched movie about a dystopia run by intelligent machines captured our imaginations (and to this day, remains my favorite film). Twenty-four years later, the line between fact and fiction has all but vanished and the blockbuster hits much differently. Are we entering the Matrix? Are we already in it? Can anyone be sure?

While robot overlords haven’t materialized (yet), modern life is inseparable from artificial intelligence (AI) and machine learning (ML). Advanced technology works behind the scenes when we search Google, unlock our phones with our faces, shop for “recommended items” online or avoid traffic jams with our trusty travel apps. AI/ML’s role in personal and professional life has expanded rapidly in recent years, but it wasn’t until ChatGPT arrived in November 2022 that we reached a tipping point.

The New York Times’ Thomas L. Friedman describes the AI chatbot’s impact as “Promethean,” comparing this moment in history to when Dorothy enters the magical Land of Oz and experiences color for the first time in “The Wizard of Oz.” He writes that ChatGPT is “such a departure and advance on what existed before that you can’t just change one thing, you have to change everything.” For better and for worse.

In the Fifth Domain of Cyberspace, AI/ML Benefits Both Sides

My own AI “ah-ha moment” happened at DEF CON 24 back in 2016, as I watched autonomous cyber reasoning systems (CRSs) go head-to-head, finding hidden vulnerabilities in code and deploying patches to fix them without any human assistance. It was clear that AI/ML would fundamentally change the way organizations approach cybersecurity. Since then, we’ve experienced game-changing innovations that enable us to analyze massive quantities of data and accelerate response times.

Most important, AI/ML-fueled scalability, speed and continuous self-learning are a boon to resource-strained cybersecurity teams. With an estimated 3.4 million cybersecurity jobs vacant worldwide, many security leaders welcome new opportunities to bridge gaps and amplify efforts. For instance, many are turning to AI-powered tools to simplify cumbersome authentication processes. Adaptive multi-factor authentication (MFA) and single sign-on (SSO) methods use behavioral analytics to verify identities based on levels of access, privilege and risk – without slowing users down. And as hybrid and multi-cloud environments grow increasingly complex, teams are using AI to automatically manage permissions for the thousands (or even millions) of identities across their cloud estates.
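
Under the hood, adaptive controls like these typically reduce to a risk-scoring decision: weigh the behavioral signals around a login, then choose between seamless access, step-up authentication or denial. Here is a minimal Python sketch of that pattern – the signal names, weights and thresholds are all hypothetical, not any vendor’s actual logic:

```python
from dataclasses import dataclass


@dataclass
class LoginContext:
    """Signals an adaptive MFA engine might evaluate (illustrative only)."""
    known_device: bool
    location_matches_history: bool
    requests_privileged_access: bool
    recent_failed_attempts: int


def risk_score(ctx: LoginContext) -> int:
    """Combine behavioral signals into a single score; weights are hypothetical."""
    score = 0
    if not ctx.known_device:
        score += 30
    if not ctx.location_matches_history:
        score += 30
    if ctx.requests_privileged_access:
        score += 25
    score += min(ctx.recent_failed_attempts, 3) * 5  # cap the penalty
    return score


def auth_decision(ctx: LoginContext) -> str:
    """Map risk to an action: seamless SSO, step-up MFA or denial."""
    score = risk_score(ctx)
    if score < 30:
        return "allow"        # low risk: single sign-on proceeds unprompted
    if score < 70:
        return "step_up_mfa"  # medium risk: require an additional factor
    return "deny"             # high risk: block and alert the security team


print(auth_decision(LoginContext(True, True, False, 0)))   # -> allow
print(auth_decision(LoginContext(False, False, True, 2)))  # -> deny
```

In production, these signals would come from device fingerprinting, geolocation history and session telemetry rather than hard-coded booleans, and ML models would tune the weights continuously instead of leaving them fixed.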

ChatGPT is another valuable tool in defenders’ toolboxes. According to reporting from The Wall Street Journal, security teams have charged ChatGPT with creating easy-to-understand communications materials that resonate with business stakeholders and help build program support. Others use it to create policy templates that humans can customize. But most early ChatGPT cybersecurity use cases focus on task automation, from log file analysis and threat trend mapping to vulnerability detection and secure coding support for developers.
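
As a concrete illustration of the log-analysis use case, the sketch below asks the model to flag suspicious patterns in a short SSH log excerpt. It assumes the pre-1.0 `openai` Python package that was current when this article was published, an `OPENAI_API_KEY` environment variable and an illustrative prompt – a starting point for experimentation, not a production workflow:

```python
# Minimal sketch: asking ChatGPT to summarize suspicious activity in a log
# excerpt. Assumes the pre-1.0 `openai` package (ChatCompletion interface)
# and an OPENAI_API_KEY in the environment; prompt and model are illustrative.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

log_excerpt = """\
Mar 30 02:14:11 sshd[4121]: Failed password for root from 203.0.113.7 port 52144
Mar 30 02:14:13 sshd[4121]: Failed password for root from 203.0.113.7 port 52146
Mar 30 02:14:15 sshd[4121]: Accepted password for root from 203.0.113.7 port 52149
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "You are a security analyst assistant. Flag suspicious "
                       "patterns in log excerpts and explain them plainly.",
        },
        {"role": "user", "content": log_excerpt},
    ],
)
print(response.choices[0].message.content)
```

A human analyst still has to verify whatever the model flags – in this excerpt, the failed-then-successful root logins from the same address are the kind of brute-force pattern you would expect it to surface.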

While AI continues to evolve, it has limitations: it cannot bring the cognitive reasoning, nuance and critical first-hand experience that human subject matter experts can. For instance, a University of California, Los Angeles neuroscientist recently asked GPT-4, the latest model behind ChatGPT, “What is the third word of this sentence?” The bot’s answer was “third.” Another example: SC Magazine featured a study of 53,000 email users in more than 100 countries, revealing that phishing emails crafted by professional red teamers drove a 4.2% click rate, while ChatGPT-generated campaigns lagged at just 2.9%.

In a recent ABC News interview, Sam Altman, CEO of OpenAI (the company that created ChatGPT), urged people to view the chatbot as a supplementary tool rather than a replacement for human experts, saying that “humanity has proven that it can adapt wonderfully to major technological shifts.”


Unfortunately, threat actors are also adapting and harnessing AI/ML for many of the same reasons cybersecurity teams are.

Threat researchers have already exposed numerous ways ChatGPT could be used for nefarious purposes. Our own CyberArk Labs team demonstrated how easily ChatGPT can be used to create polymorphic malware – sophisticated malware that continuously mutates to evade security protections and complicate mitigation. CyberArk researchers found ways to circumvent built-in content filters (checks designed to prevent abuse and malicious activity) by experimenting with creative prompts. They coaxed ChatGPT into generating (and continuously mutating) code for injection, as well as creating the file-searching and encryption modules needed to spread ransomware and other malicious payloads. They also discovered that by using ChatGPT’s API with a specific prompt, they could bypass the content filters completely.

Fellow researchers at Check Point Research analyzed several underground communities and found threat actors using ChatGPT to create infostealer malware, design a multi-layered encryption tool (without any prior experience, according to one actor’s description) and launch an automated dark web marketplace for illicit goods.

In the same interview, Altman acknowledged the risks that fast-morphing AI/ML technology brings. “I’m particularly worried that these models could be used for large-scale disinformation,” he said. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.”

IT decision-makers share Altman’s concerns. According to a 2023 BlackBerry global research study, 51% believe a successful cyberattack will be credited to ChatGPT within the year. Most concerning to respondents is the chatbot’s ability to help threat actors craft more believable and legitimate-sounding phishing emails (53%). This highlights the need for robust endpoint security that encompasses everything from strong endpoint privilege management to regular cybersecurity awareness training that helps end users spot common phishing and social engineering tricks. Respondents also worry that less-experienced attackers could use AI to improve their knowledge and skills (49%) and that AI could spread disinformation (49%).

AI apprehension continues to mount. In late March, an open letter featuring more than 1,100 prominent signatories called for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4” until regulators can catch up. Just two days after the letter was published, Italy temporarily banned ChatGPT and is now investigating potential violations of both the EU’s General Data Protection Regulation and the Italian Data Protection Code. Lawmakers in many other countries are sounding the alarm about emerging security and privacy issues. According to NPR, the Center for AI and Digital Policy filed a complaint with the U.S. Federal Trade Commission in late March describing GPT-4 as having the ability to “undertake mass surveillance at scale.”

Identity Security’s Critical Human Element

As public debate and regulatory scrutiny around AI/ML intensify, enterprise cybersecurity teams should stay vigilant without losing sight of the bigger picture: cyberattacks are inevitable – no matter how, where or why they originate. But damage is not.

Organizations can protect what matters most by securing all identities throughout the cycle of accessing any resource across any infrastructure. Doing so requires a holistic approach that unifies visionary technology and human expertise. The right Identity Security platform must protect critical data and systems against myriad threats to confidentiality, integrity and availability. The right Identity Security partner must be a trusted advisor, elevating security teams and strategies in ways technology cannot. Vision, experience, divergent thinking, technical acumen, empathy, high-touch support, ethical rigor, strong relationships, proven results – humanity in cybersecurity matters.

As AI/ML capabilities rapidly expand, our cybersecurity community must keep testing and pushing the limits of AI, sharing information and advocating for important guardrails. To echo Friedman’s words, only by working together can we “define how we get the best and cushion the worst of AI.”

Andy Thompson is CyberArk Labs’ Research Evangelist
