April 17, 2024

EP 50 – Adversarial AI’s Advance

In the 50th episode of the Trust Issues podcast, host David Puner interviews Justin Hutchens, an innovation principal at Trace3 and co-host of the Cyber Cognition podcast (along with CyberArk’s resident Technical Evangelist, White Hat Hacker and Transhuman Len Noe). They discuss the emergence and potential misuse of generative AI, especially natural language processing, for social engineering and adversarial hacking. Hutchens shares his insights on how AI can learn, reason – and even infer human emotions – and how it can be used to manipulate people into disclosing information or performing actions that compromise their security. They also talk about the role of identity in threat monitoring and detection, the challenges and opportunities AI presents to organizations defending against evolving threats, and how we can harness its power for the greater good. Tune in to learn more about the fascinating and ever-changing landscape of adversarial AI and identity security.

[00:00:00] David Puner: You’re listening to the Trust Issues podcast. I’m David Puner, a Senior Editorial Manager at CyberArk, the global leader in identity security.

[00:00:25] David Puner: Hello and welcome to Trust Issues. Today’s conversation marks our 50th episode, and when you look at the significant developments in our industry over the last couple of years since we launched this podcast, it’d be hard to argue that the emergence of generative AI isn’t at the top. Okay, maybe it’d be easier if you had a chatbot argue for you, but I digress.

[00:00:48] David Puner: So, it’s only fitting that for our 50th episode, we take a spin into the ever-rapidly evolving world of artificial intelligence. AI has come a long way since its inception, from being the stuff of science fiction to becoming an almost omnipresent reality. Seemingly overnight, AI is evolving at an unprecedented pace.

[00:01:10] David Puner: With its ability to learn, reason, and even infer human emotions, AI is revolutionizing how we live and work. It’s also, of course, revolutionizing attacker innovation, and the misuse of AI can have far-reaching consequences. That brings us to today’s guest, Justin Hutchens, “Hutch,” who’s an innovation principal at Trace3.

[00:01:35] David Puner: He’s also the co-host of the “Cyber Cognition” podcast, alongside CyberArk’s own Len Noe, who’s been a recurring transhuman guest on this podcast. And if you check out those episodes, you’ll find out why we have him on a couple of times a year. He’s really interesting and entertaining. Justin Hutchens has been researching the potential misuse of AI and natural language processing since well before most of us were aware it was – or would become – such a big deal.

[00:02:04] David Puner: Fast forward to now, and he’s got a lot to say about AI in social engineering and adversarial AI in general, along with the role of identity in threat monitoring and detection. We also discuss the challenges and opportunities AI presents to organizations defending against evolving threats, and how we can harness its power for the greater good.

[00:02:29] David Puner: Here’s my conversation with Justin Hutchens.

[00:02:35] David Puner: Justin Hutchens, innovation principal at Trace3. Welcome to Trust Issues.

[00:02:45] Justin Hutchens: Thank you. Good to be here.

[00:02:47] David Puner: Thanks for coming on. We were introduced to you by Len Noe, who’s CyberArk’s resident technical evangelist, white hat hacker and, of course, last but not least, transhuman. You probably already know this, but he’s also your co-host of the podcast, “Cyber Cognition.”

[00:03:00] David Puner: To start things off, how did you meet Len, and what compelled you two to become a podcast tag team? The Starsky and Hutch of, uh, podcast tag teams, if you will.

[00:03:10] Justin Hutchens: I’ve known Len for a little while – I think probably three to four years at this point. I originally met him when I was on a different podcast, “Ready, Set, Secure.”

[00:03:22] Justin Hutchens: And we actually had him and Andy Thompson, both from CyberArk, on the podcast. They were promoting an “Attack & Defend” event that they were doing. And during the process of that conversation, we discovered some of Len’s secret abilities – the fact that he has all of those various implants that allow him to interact with technology in ways that nobody else can.

[00:03:45] David Puner: Right.

[00:03:46] Justin Hutchens: And immediately, both myself and my co-host at the time thought that was fascinating. So, we reached out to see if we could bring him on for another episode to talk about transhumanism and biohacking and human augmentation. And from there just kind of kept the dialogue going with Len.

[00:04:02] Justin Hutchens: I’ve stayed in touch with him over the years as far as different conferences and stuff like that. And, ultimately, when I was building out my podcast, a lot of the focus was industry-leading innovation, kind of that bleeding edge of technology. And it’s hard to imagine somebody that’s more bleeding edge on technology than somebody that’s actually put that technology in their own body.

[00:04:24] Justin Hutchens: So, I reached out to Len after he was on a single episode that I had and asked if he’d be interested in joining as a co-host permanently. And he jumped on and the rest was history.

[00:04:35] David Puner: So, listeners, of course, can check out CyberArk Trust Issues back catalog to hear all about Len and transhumanism. And they can also tune into your “Cyber Cognition” podcast. What’s its focus and how’s it going?

[00:04:48] Justin Hutchens: So, the focus, as I mentioned, is really that futurist perspective of…where is technology going? So, a lot of the episodes, what we do is we look at trends that we’re seeing across the world of technology. We also look at everything from kind of bioengineering and artificial intelligence, even the transhumanism perspective, and try to get an understanding of what the future looks like. So, if I had to sum it up in one word, futurism.

[00:05:18] David Puner: Interesting. As for your day job as an innovation principal at the technology consultancy Trace3 – what do you do in that role? How did you get into the cybersecurity field? And what is an innovation principal?

[00:05:33] Justin Hutchens: So, I started my career in cybersecurity, and that was, at this point, close to 20 years ago. I got started in the U.S. Air Force, and, of course, at the time there was much more of a market and a need for cybersecurity in the public sector than there was in the private sector. But, as I got to the point where it was time to transition out, I moved into private consulting and spent a large number of years in the area of red teaming and penetration testing.

[00:06:03] Justin Hutchens: And bleeding edge technology has always fascinated me. I was always interested in kind of building and developing my own tools. As soon as I got the opportunity, I started playing around with different forms of machine learning and artificial intelligence. So, having that passion for developing – for building – I ultimately ended up making a transition from the red teaming and penetration testing to research and development, which kind of continued to feed into that interest of machine learning.

[00:06:33] Justin Hutchens: And then, finally, in my latest role as an innovation principal, we have partnerships with a large number of venture capital firms. We also have a research arm that looks at how different enterprise technology is being funded across the board, understanding where that private investor money is going, and we use that in order to attempt to understand what are going to be the emerging trends in the enterprise technology space.

[00:07:01] Justin Hutchens: And then we also have an advisory arm where – based on our understanding from those venture capital relationships and from the research that we do – we connect with our customers, oftentimes to expose them to new capabilities for problems they have but didn’t know a solution existed for.

[00:07:16] David Puner: AI is a big focus for you among presumably other things when it comes to innovation, but you’ve been interested in AI for a very long time. In fact, way back in 2014, you launched an independent research project on the potential misuse of AI and natural language processing for exploitation and manipulation. What got you interested in AI at that time? And what would have surprised you then about today’s AI and gen AI reality?

[00:07:45] Justin Hutchens: What originally got me interested was social artificial intelligence – kind of what we refer to today as chatbots. Of course, at the time when I started playing with them, they were nowhere near as advanced – arguably comical, uh, how easy it was to tell that you were talking to a machine and not a human. But I already had this idea that you’ve got two different approaches to cyberattacks – and this is probably oversimplifying it – but, in general, you have the technical approach, where you’ve got the hacker that breaks into things, and then you’ve got the social engineer, the person that manipulates the people that have access to those systems in order to get in.

[00:08:25] David Puner: Mm hmm.

[00:08:25] Justin Hutchens: And it’s always been my perspective – and I think it’s largely generally agreed – that the social engineering approach in most cases is far more effective. The fact that it doesn’t matter how secure a system is. If you can manipulate the people that have access to that system, there generally is a way that you can get into that environment.

[00:08:45] Justin Hutchens: And so there was always that perspective that social engineering is this profound capability that if you can socially manipulate someone, you can gain access to systems or information that you otherwise shouldn’t have access to. But the big difference between social exploitation and technical exploitation is scalability.

[00:09:04] Justin Hutchens: So, with technical exploitation, I can scale it up all day long – I can send hundreds of thousands of packets to hundreds of different servers in a matter of seconds with an automated script. With social engineering, the opposite is true. You have to move slowly. You have to take time in order to establish rapport with someone, build trust in order to be able to exploit that trust and achieve your given objective.

[00:09:29] Justin Hutchens: And so, I had this idea very early on of… what if it could be possible to scale social engineering? What if you could take those social interactions and fully automate that process – not only the social interactions themselves, but also the integration of whatever objective you’re trying to achieve within the context of those interactions?

[00:09:50] Justin Hutchens: And so, I created a system pretty early on that used primitive rule-based solutions to attempt to interact with people and harvest the answers to the recovery questions that might yield access to their accounts. Of course, at the time, multi-factor authentication was much less common.

[00:10:10] Justin Hutchens: And so, if you could answer someone’s mother’s maiden name or the street they grew up on or their first pet, it’s very possible that you could use that in order to get access to the accounts that they had. I think even early on, I started to see that even though it was only mildly effective – maybe generally less than five percent of the time this was successful because of kind of how primitive the technology was – if you’re talking about a fully scaled automated system, then five percent of the time, if it’s interacting with thousands of people, is still potentially hundreds of compromised accounts.

[00:10:47] Justin Hutchens: And so, there was already that kind of early indication of – this could be a huge problem if we ever get to a point where we can fully automate social intelligence. And, of course, now here we are in a world where we have achieved that. I would say the one thing that would have surprised me the most – if me back then, the person who created that system, could have known what would come within the next 10 years – is just how fast we got here.

[00:11:14] Justin Hutchens: I expected that we were going to get to a point where we could fully automate that social intelligence, but I never expected that it would be this soon.

[00:11:22] David Puner: The general release of ChatGPT and other large language models like that – all of a sudden, it becomes essentially a new artificial intelligence world, as it were. And the sophistication of social engineering from that point on – how much has it evolved? How sophisticated is it now?

[00:11:44] Justin Hutchens: I think it’s extremely sophisticated. And I think there are multiple factors that play into this. So, we’ve got the inherent way that these large language models are built. They use something called the transformer architecture, and a core component of that transformer architecture is something called the attention mechanism.

[00:12:03] Justin Hutchens: And what that attention mechanism does is it evaluates the significance of every input token to every other input token, and it executes that evaluation in parallel. And what that means in terms of language is, of course, each word is effectively a token that’s passed through the system. So, you’ve got these systems that are essentially, by design, scrutinizing over every single word and the relevance of every single word to every other word.

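To make that concrete, here’s a minimal sketch of the scaled dot-product attention computation at the heart of the transformer – every token scored against every other token, in parallel – written in plain Python with NumPy as an illustration, not taken from any production model:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Score every token against every other token, in parallel.

    Q, K, V: (seq_len, d) arrays of query/key/value vectors,
    one row per input token.
    """
    d = Q.shape[-1]
    # scores[i, j] = how relevant token j is to token i.
    scores = Q @ K.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    # Each output row is a relevance-weighted blend of all tokens.
    return weights @ V

# Toy example: 4 tokens with 8-dimensional embeddings (self-attention).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```
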
[00:12:32] Justin Hutchens: Subtle things that you may easily miss and overlook as a human in conversation? These systems are already very capable of picking up on them. There are also several other things that we’ve introduced into the way that we design these systems that make them even more capable in terms of social interaction. So, we’re consistently using something called reinforcement learning from human feedback, where we convert the model into an agent, and then we have it interact with people, and we essentially reinforce or provide rewards for the best responses – the ones that resonate with us the most, the ones that connect with us most effectively – and then we penalize the ones that don’t do that. And, of course, that’s a double-edged sword, because it makes the systems more usable, but it also makes them much more capable of connecting with us, creating the illusion of something like emotional resonance, where they are able to establish that rapport in ways that used to be only possible with humans.

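As a rough sketch of the reward step in that loop – this is the standard pairwise (Bradley–Terry) preference loss commonly used to train RLHF reward models; the names are illustrative, not from any specific system:

```python
import numpy as np

def pairwise_preference_loss(r_chosen, r_rejected):
    """Bradley-Terry loss used to train an RLHF reward model.

    r_chosen / r_rejected: scalar scores the reward model assigned
    to the human-preferred and the human-rejected response.
    The loss shrinks as the model learns to score the response
    humans preferred above the one they rejected.
    """
    return -np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected))))

# A response humans preferred should score higher:
print(pairwise_preference_loss(2.0, -1.0))  # small loss: ranking is right
print(pairwise_preference_loss(-1.0, 2.0))  # large loss: ranking is wrong
```
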
[00:13:29] David Puner: Really interesting stuff, and kind of mind-blowing. I was watching one of your presentations, and you mentioned something about these bots having a sense of their own identity at this point. And I found that to be a really kind of mind-blowing notion. And obviously I’m partial to talking about identity on this podcast, but what exactly does that mean? Like, the bot knows who it is? What does that mean? And what is the significance then to security?

[00:14:01] Justin Hutchens: So, there’s been an interesting evolution here if you look back at earlier models. I was actually doing some work around weaponization of GPT-3, which was the predecessor of ChatGPT. And one of the things that you would consistently do if you’re trying to weaponize one of these systems is have it follow some kind of pretext: you want it to pretend to be a specific person, and you want it to pursue a particular objective based on the kind of pretext that you would generally use within a social engineering campaign.

[00:14:34] Justin Hutchens: And with the legacy systems, with GPT-3 and prior to that, in order to achieve that, you essentially had to craft a historical conversation that never took place for the system to pick up from that point forward. For example, if you wanted a system to pretend to be a help desk engineer that was trying to get your password, then you would have to create a conversation within the context that basically provides this history of it reaching out to somebody, suggesting that it needs that information, and then once you actually start relaying that to a potential victim, it just picks up where it left off that conversation.

[00:15:14] Justin Hutchens: It’ll continue to pursue that chain of thought. And so now instead of having to go through all of that – honestly pretty challenging approach of crafting some kind of historical context, you can just provide instructions of, “This is who you should pretend to be. This is what you are trying to achieve.”

[00:15:31] Justin Hutchens: And so, creating this sense of identity in the system of who it should try to emulate is trivial now compared to what it was in the past. And of course, this is, again, that same double-edged sword. This is probably very helpful for people that are developing systems for legitimate purposes. But it also can just as easily be weaponized by threat actors who want to tell the system, pretend that you are a help desk administrator who needs someone’s password. And it will do that just based on the instructions you provide it.

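To illustrate the structural difference being described – fabricated history versus direct instructions – here is a sketch using a deliberately benign help desk persona and a hypothetical generic chat API, rather than any specific vendor’s SDK:

```python
# Old approach (GPT-3-era completion models): fabricate a conversation
# history that never happened and let the model "continue" it.
legacy_prompt = """The following is a chat with an IT help desk agent.
Agent: Hi, this is Sam from IT support. How can I help?
User: My laptop won't connect to the VPN.
Agent:"""

# Modern approach (chat models): just declare the identity and objective
# in a system message; no fake history needed.
modern_messages = [
    {"role": "system",
     "content": "You are Sam, an IT help desk agent. "
                "Walk the user through VPN troubleshooting."},
    {"role": "user", "content": "My laptop won't connect to the VPN."},
]

# Hypothetical client call; the exact API varies by provider.
# reply = chat_client.complete(messages=modern_messages)
```
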
[00:16:05] David Puner: And in instances like you just gave about the help desk social engineering, in the past we’ve known what to look for when it comes from these kind of typical social engineering attacks. Now, whole new playing field, whole new ballgame, when it comes to the level of sophistication.

[00:16:23] David Puner: So, what is it that folks should be on the lookout for in a corporate setting when it comes to social engineering – let alone in their own personal setting – and from an AI social engineering standpoint?

[00:16:37] Justin Hutchens: Gone are the days of the Nigerian prince scams that are laden with grammatical and spelling errors. And so, a lot of those things that we used to look for, those telltale signs, no longer exist. I think, for one, organizations need to reconsider how they’re approaching information assurance and how they are equipping their users with knowledge of what is potentially out there. I think that there are a lot of organizations that are still doing just basic phishing assessments.

[00:17:10] Justin Hutchens: Some organizations don’t even do that – they just have you click through a training exercise once a year and get a nice certificate at the end. But I think we need to start integrating into that training knowledge of what is potentially possible now, and that is moving fast. So, we’re moving away from those grammatical and spelling issues – really, you need to be looking at the context and trying to understand what a particular message is trying to achieve.

[00:17:42] Justin Hutchens: You need to be looking at the origin of that message – and do you trust that origin? People also need to realize that it is now possible to engage in a full back-and-forth conversation with someone where there’s not even a person on the other end – it could very well be a machine that is attempting to achieve a particular objective.

[00:18:00] Justin Hutchens: And of course, not just in text-based communications. We’re now rapidly entering a world where you may have a phone conversation with someone where it sounds like there’s the typical vocal intonation and inflection that you would expect from a human, but there could also possibly not even be a person on the other end – just a system that is configured to try to exploit you.

[00:18:20] Justin Hutchens: We actually just saw this recently in February. There was a multinational company that was headquartered out of Hong Kong, and there was a person that got on a video call with multiple different colleagues and was basically instructed to wire the equivalent of $25 million to a number of different bank accounts.

[00:18:41] Justin Hutchens: And as it turns out, every single other person on that video call was not even a person. It was a deepfake, and the whole thing was orchestrated to convince this person to wire that money out. And sure enough, $25 million went out the door.

[00:19:00] David Puner: Wow.

[00:19:01] Justin Hutchens: We are already starting to see these very complex and well-orchestrated scams that are just leveraging the new and relatively easy-to-use capabilities of emerging artificial intelligence.

[00:19:11] David Puner: That’s pretty incredible, because with the ones that I’ve been aware of – up until this point, at least – you can sort of take a step back and say, okay, I see this is a little off or that’s a little off. And I imagine that the speed of improvement and progress for all of this is enormously fast. So, at this point, can you be fooled by this stuff?

[00:19:34] Justin Hutchens: Absolutely. I think we’re now rapidly getting to a point where almost anybody could be fooled by this stuff. And you talked about kind of how we’re seeing this rapidly evolve. So, if anybody was watching the generative image capabilities, it wasn’t more than maybe a year ago that images would have contorted fingers and you’d look at their hands and there would be a lot of different signs that would suggest that these were generative AI images.

[00:19:59] Justin Hutchens: And now, barely a year later, you look at most generated images out of at least the leading frontier systems, and they’re nearly indistinguishable from reality. And we’re also seeing the same thing with voice. And video is not there yet, but it likely will get there very soon as well. But I think we’re going to continue to see year over year, these capabilities accelerating faster and faster.

[00:20:24] Justin Hutchens: And so, it becomes very difficult to rely on those classic telltale signs where the systems weren’t good enough, because even if they aren’t good enough today, within a year, they may be.

[00:20:37] David Puner: So how can cyber defenders keep up with all of this?

[00:20:40] Justin Hutchens: I think there are a couple of things. We talked about updating security awareness training, and I think that that is going to be essential. I think one of the other things we’re going to see as a result of threat actors adopting AI capabilities is a dramatic surge in the volume and frequency of attacks. And so I think a couple of things need to be done to address that.

[00:21:03] Justin Hutchens: One is looking back at security fundamentals, just your organization’s basic security hygiene. Because the truth is a lot of times when we see these major breaches, oftentimes it’s not because it was some extremely complex nation-state-level zero-day attack. It oftentimes is because of the fact that they overlooked something very simple in terms of basic hardening or security hygiene.

[00:21:26] Justin Hutchens: And so, I think doubling down on that – continuing to make sure that we have all of those fundamentals squared away. And then I think the other thing is that, as organizations, we need to continue to innovate ourselves. If we’re going to continue to see greater frequency and greater volume of attacks, then we’re going to have to have more effective, streamlined ways of cutting through the noise and identifying the things that we genuinely need to be focused on as defenders. And I think leveraging capabilities like the same artificial intelligence capabilities that the threat actors are using – on the defender side – is going to be critical in order to keep up.

[00:22:02] David Puner: You’ve spoken a lot at conferences about adversarial AI. Before we get any further on this question, can you define what adversarial AI is?

[00:22:14] Justin Hutchens: When I speak of adversarial AI, what I’m referring to is the adoption of emerging artificial intelligence capabilities by threat actors in order to achieve a malicious end.

[00:22:25] David Puner: I think that was an important level-set right there, as they say in the biz, so thanks for that. Now that we’ve defined what adversarial AI is, how has AI as a social engineering tool evolved since this rollercoaster ride of sorts started just a little bit more than a year ago? And maybe we’ve already sort of touched upon the social engineering part, but beyond that, when it comes to adversarial AI, how has that all evolved as a practice?

[00:22:54] Justin Hutchens: So, one of the other unique capabilities that we’re seeing, in addition to the social engineering side of it, is that it is also dramatically increasing threat actor capabilities on the technical side. And I think some of this is pretty well-publicized and we already know that these systems can be used to generate code.

[00:23:16] Justin Hutchens: And there are a lot of people that have been able to jailbreak these systems and get them to output effectively ready-to-go ransomware code or other remote-access-type malware. But in addition to that, if you create the right type of interface for these systems, it’s actually possible to leverage the intelligence and knowledge that exists in these systems in order to execute fully autonomous, computer-driven hacking campaigns.

[00:23:43] Justin Hutchens: And what I mean by that – I’ve actually done a couple of proofs of concept myself where essentially you are just relaying communication between the large language model and an underlying malicious process that’s running on a system that you control. And you essentially tell the large language model that you’re trying to hack into a specific IP address or a specific target system, and you ask the language model to provide the commands that will then be executed on the underlying system.

[00:24:13] Justin Hutchens: You create that relay that then executes the provided commands on the system, and then you provide the response from that command back to the large language model. And you effectively just step away as the human and let the large language model execute that hacking campaign. And this is, believe it or not, extremely effective already.

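Architecturally, the relay being described is just a simple agent loop. Here’s a minimal sketch of that pattern – the `ask_llm` helper is a hypothetical stand-in for any chat-model API, and the sketch is only meant to show the structure (e.g., for assessing systems you’re authorized to test):

```python
import subprocess

def ask_llm(conversation):
    """Placeholder for a call to any chat-model API; returns the
    model's next message (here, a shell command to run)."""
    raise NotImplementedError  # hypothetical: wire up a provider here

def autonomous_loop(objective, max_steps=10):
    # Seed the model with the objective; it replies with commands.
    conversation = [
        {"role": "system",
         "content": "Suggest one shell command per turn toward the "
                    "objective. You will receive each command's output."},
        {"role": "user", "content": objective},
    ]
    for _ in range(max_steps):
        command = ask_llm(conversation)
        # Execute the model's suggested command locally (inherently
        # dangerous: the model, not a human, decides what runs)...
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=60)
        # ...and relay the output straight back to the model, so it can
        # decide the next step with no person in the loop.
        conversation.append({"role": "assistant", "content": command})
        conversation.append({"role": "user",
                             "content": result.stdout + result.stderr})
```
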
[00:24:30] Justin Hutchens: And of course, with scaling heading in the direction that it’s going – the fact that we’re continuing to build larger and larger neural network systems – right now it’s already capable of doing kind of script kiddie-level attacks. It’s able to look at a target system. It’s able to perform scans in order to enumerate the attack surface.

[00:24:51] Justin Hutchens: Once it identifies what services are open on that system, it will start probing those services with some lower-complexity vulnerabilities. It’s able to effectively exploit those and gain access to those systems as well. So, right now we’re kind of at a point where it is autonomously able to execute as a script kiddie-level attacker.

[00:25:10] Justin Hutchens: I think as we continue to see these systems scale up, when we see GPT-5 or GPT-6, it’s not unreasonable to think that these systems will become highly powerful hacking machines, provided the right interface to do so. So, there’s really that challenge of not only are we seeing the capabilities emerge on the social engineering side, but also the fact that these systems can also be used for technical hacking as well.

[00:25:37] David Puner: We did an episode not too long ago with Len Noe, episode 44, where we took a dive into script kiddies and how there are folks out there who don’t really have any coding prowess – let alone knowledge – and are now capable of writing programs, writing code and facilitating attacks without any actual background in coding.

[00:26:03] Justin Hutchens: Yeah, and I think this just circles back to the point that we were talking about previously, that the volume and frequency of attacks is going to be increasing tremendously, because, like you said, people with zero experience can now execute these attacks, and actually, with just some basic coding, you can essentially just let the machine do all the attacking at once and not even have a person in the loop.

[00:26:24] David Puner: Let’s move to a happier subject for a second – this one seems a little overwhelming. You recently wrote a book called “The Language of Deception: Weaponizing Next Generation AI.” Okay, maybe it’s not a happier subject, but the book examines exploitation opportunities on the social web.

[00:26:41] David Puner: And in the book, you write that “the era when machines will hack humans is upon us.” How far off do you think that is? What will it look like at first? And how can we prepare for it now? Well, I guess you’ve already kind of answered that. It’s happening now. Is that right?

[00:26:58] Justin Hutchens: It is happening now.

[00:26:59] David Puner: Yeah.

[00:26:59] Justin Hutchens: I guess to give a little bit of context on that quote. So, I start by pointing out that, historically, people have been able to hack machines by speaking the language that the machines speak. And what I mean by that is the underlying protocols that are used for systems to communicate with one another. For example, take a hacker who is trying to break into a web server: much of the language that is spoken between the machines is abstracted away from the regular user.

[00:27:28] Justin Hutchens: They’re just going to see the web browser and the content that’s rendered within. However, the hacker is going to peel back a layer. They’re going to actually get into, again, that machine language, the HTTP, the language that’s used in order to speak to the web server. And then by looking directly at that language and by manipulating it directly, you can often find ways to exploit the system and achieve things that otherwise were not intended to be possible.

[00:27:54] Justin Hutchens: And so now, if you flip the script, the same is true. But rather than us being able to speak machine languages, machines now speak in natural language – in human language, English, or really whatever; we’ve now got large language models in pretty much every other major language as well. And so, very similar to the way that we used to be able to hack machines by speaking the underlying protocols that they use to communicate with each other, systems now have the ability to leverage the language that we use to communicate with one another in order to potentially exploit us – in order to get us to engage in activities or disclose information that we otherwise wouldn’t. And so, to circle back to the actual questions: I think we are already seeing that, and unfortunately, what it is going to look like is not too different from the way we interact with one another, because we’re likely going to be interacting with these systems across digital platforms.

[00:28:55] Justin Hutchens: So, it may come in the form of social media communications. It may come in the form of a phone call, of a video conversation. And what it looks like is going to be very similar to actually interacting with a person.

[00:29:04] Justin Hutchens: In fact, the entire intention is that you won’t be able to distinguish the difference between the two.

[00:29:09] David Puner: Because you’re so steeped in all this, are there any practices in your own life that are, um, potentially extreme as far as protecting yourself from these kinds of threats?

[00:29:26] David Puner: Do you have some practices that are maybe above and beyond the average, ordinary person’s protective behaviors when it comes to keeping yourself defended against these kinds of threats?

[00:29:39] Justin Hutchens: I think there are a couple of things that I do. One is that – actually, this may be more common than I realize – I don’t answer the phone unless I know who’s calling.

[00:29:46] Justin Hutchens: Because of the fact that there are so many scams and other stuff that’s going on over phone calls. I naturally have somewhat of a pessimism bias, I think, from all of my years in red teaming and penetration testing. So, I tend to automatically assume the worst-case scenario anytime I’m interacting with an email or other form of communication.

[00:30:05] Justin Hutchens: And I think that having that frame of mind whenever you’re looking at emails – especially if they come from a source that you’re not familiar with or that you don’t trust – is very helpful. I think from a financial and identity perspective, I am a big advocate of not just using identity monitoring, but also locking down your credit with each of the different credit bureaus, because I think we’re at the point where trying to protect your social security number or your other PII data is unrealistic. I think we have to accept that that information is out there and people are going to be able to get their hands on it.

[00:30:40] Justin Hutchens: And it doesn’t matter who you are. At some point, there has been a company that has been breached that has lost that data. And once it’s out there, it is permanently out there. So, I think rather than trying to take measures to try to protect that data from getting out there, completely locking down your credit where you have to unlock it with additional authentication factors every single time you want to open a line of credit is also a very effective control for minimizing your own personal risk.

[00:31:07] David Puner: You mentioned identity and I’m glad you did, because I wanted to get back to identity. What role does identity play in threat monitoring and detection capabilities?

[00:31:17] Justin Hutchens: The role that it plays is tremendous, because you need to be able to not just see what’s happening in an environment – you also need to be able to tie that information back to persistent identities of either people or systems throughout that environment and track that throughout their entire operations. I do think that we are rapidly entering an era where identity is going to have a lot of challenges. And what I mean by that is, traditionally, when you are doing authentication, you want to do some kind of proof of identity, and the traditional approaches are either something you have, such as a token; something you know, such as a password or recovery question; and then something you are – biometrics.

[00:32:05] Justin Hutchens: And biometrics has traditionally been the Rolls-Royce of authentication – it was the most reliable one. Something you know, like your passwords, can often easily be compromised or intercepted. Something that you have can just be stolen from you. But it has traditionally been really hard to replicate the biometric features of something that you are. And now, with increasingly complex machine learning systems, we are seeing a world where voice authentication may very soon be cracked by systems that are intended to model specific human voices – and the same thing with video: facial recognition capabilities can also potentially be compromised.

[00:32:50] Justin Hutchens: I actually just saw a research article the other day that showed that by just measuring the sound that’s generated by moving your finger across a touch screen, researchers were able to recreate the fingerprints of a potential target. And, so, I think that authentication is going to enter a whole new world of challenges as far as being able to effectively manage that.

[00:33:14] Justin Hutchens: And we’re going to have to start figuring out creative solutions that extend beyond the approaches that we’ve taken in the past.

[00:33:22] David Puner: As far as what enterprise organizations can do about all this – we’ve already touched upon some of that earlier – is there anything else that they should be doing now that they’re maybe not doing, or that you’re seeing organizations lack?

[00:33:35] Justin Hutchens: So, I think there’s a couple new ways that we can innovate with these new emerging technology capabilities. There are a number of different ways that you can use large language models from a defense perspective as well. One of those is improving threat intelligence. So, traditionally, it’s been very challenging to make meaningful use of unstructured intelligence data.

[00:33:56] Justin Hutchens: You’ve got a large number of different companies that are scraping different forums and communications on the deep and dark web, but those come in just that form. They come as conversations. They come as unstructured natural language. And, traditionally, that’s been hard to make sense of from a defensive perspective.

[00:34:14] Justin Hutchens: But now with large language models, we can actually use that data as an input – pass it through a large language model in order to reflect on things like sentiment or motivation – and try to better understand the information that’s in there without a human analyst having to go through every single one of those pieces of data. We can make much better use of some of those threat intelligence capabilities.

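As a rough sketch of that kind of pipeline – the prompt wording and the `ask_llm` helper below are illustrative assumptions, not any particular vendor’s threat intelligence product:

```python
import json

EXTRACTION_PROMPT = """Read this dark-web forum post and return JSON with:
"sentiment", "apparent_motivation", "targets_mentioned", "ttps_mentioned".

Post:
{post}
"""

def ask_llm(prompt):
    """Placeholder for any chat-model API call returning text."""
    raise NotImplementedError  # hypothetical: wire up a provider here

def structure_intel(raw_posts):
    """Turn scraped, unstructured forum chatter into structured records
    that an analyst (or a SIEM) can filter and prioritize."""
    records = []
    for post in raw_posts:
        reply = ask_llm(EXTRACTION_PROMPT.format(post=post))
        try:
            records.append(json.loads(reply))
        except json.JSONDecodeError:
            # Keep unparseable replies for manual review instead of
            # silently dropping intelligence.
            records.append({"raw": post, "error": "unparseable reply"})
    return records
```
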
[00:34:38] Justin Hutchens: There are also a large number of different applications for large language models within security operations as well. And then, what I’m also telling a lot of organizations now is: we are seeing rapid adoption across enterprises of the Microsoft Copilot solution. And while the Microsoft Copilot solution doesn’t introduce any new data security vulnerabilities, it does somewhat exacerbate existing data security vulnerabilities by allowing somebody – anybody within the environment – to just ask questions and suddenly have access to documents related to those questions.

[00:35:11] Justin Hutchens: So, there’s a common phrase that’s used in offensive security, which is, “living off the land.” It’s this idea of rather than introducing malware or other potentially detectable artifacts and bringing those into the environment – because that’s an easy way to get caught as an attacker. A better strategy is to use tools that already exist in that environment in order to exploit it.

[00:35:33] Justin Hutchens: And I think that the new approach we’re going to see from attackers, as far as living off the land, is this: the first thing they’re going to do when they compromise an unprivileged account in a Microsoft Active Directory environment is check whether that user has access to something like Microsoft Copilot. And if they do, they’re going to start interrogating that Copilot solution to understand details about that environment.

[00:35:53] Justin Hutchens: If they’re a ransomware threat actor, they’re going to start asking about, what are the backup procedures? Where do those live? Try to find some of the IT documents. If they’re trying to do wire fraud, they’re going to use that system in order to attempt to understand – what are the financial operations, and how do those work in the company?

[00:36:09] Justin Hutchens: And so I think there’s a tremendous opportunity here for organizations to get a very early indicator of attack that would allow them to detect potential breaches very early on: start actually monitoring the conversations that people are having with Copilot, and then use a large language model to reflect on those conversations to understand whether there are patterns that would suggest potential misuse. That could be a very easy way to find compromised accounts that we previously couldn’t.

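A minimal sketch of that detection idea might look like the following – assuming you can export assistant chat logs as (user, transcript) pairs and call some chat-model API; both the log format and the `ask_llm` helper here are hypothetical:

```python
def ask_llm(prompt):
    """Placeholder for any chat-model API call returning text."""
    raise NotImplementedError  # hypothetical: wire up a provider here

TRIAGE_PROMPT = """You review internal AI-assistant chat logs for signs of
account compromise: reconnaissance of backup procedures, IT runbooks,
financial/wire processes, or credential material. Reply ALERT or OK,
with a one-line reason.

User: {user}
Conversation:
{transcript}
"""

def triage_copilot_logs(logs):
    """logs: iterable of (user, transcript) pairs exported from the
    assistant. Yields only the conversations flagged for an analyst."""
    for user, transcript in logs:
        verdict = ask_llm(TRIAGE_PROMPT.format(user=user,
                                               transcript=transcript))
        if verdict.strip().upper().startswith("ALERT"):
            yield user, verdict
```
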
[00:36:42] David Puner: Wow, that’s really fascinating. I guess that’s probably a great opportunity right there for organizations, as well as a challenge. So, overall, when you’re speaking with clients and you’re out there, hyper-focused, obviously, on generative AI, what do you think the biggest challenges are for organizations as far as being able to either wrap their heads around this evolving threat landscape or actually being able to defend against it?

[00:37:12] Justin Hutchens: One of the biggest challenges is going to be applying the appropriate guardrails around these systems as we continue to deploy them. There are a number of excellent resources out there that can help organizations with that problem. NIST has already released their artificial intelligence risk management framework.

[00:37:31] Justin Hutchens: You’ve also got Google, which has released their SAIF, the Secure AI Framework. And if you want to integrate this into your threat modeling approaches, MITRE has released their ATLAS framework, which is very similar to the MITRE ATT&CK framework but looks, from the attacker’s perspective, at how they would approach attacks related specifically to artificial intelligence – and it breaks that down across the kill chain, from initial access all the way up to the end objective of what they’re trying to achieve.

[00:38:03] Justin Hutchens: And so, I think that tools like these can be effectively used by organizations to make sure that they do have the right guardrails, because the truth is that not all of the risk is coming from the outside, from threat actors adopting artificial intelligence. Because of the sheer complexity of these systems, a lot of risk can come from the inconsistencies and the unreliability of the systems themselves – the fact that they can behave in ways you don’t expect.

[00:38:27] Justin Hutchens: I mean, we’ve already seen many different articles related to some of Google’s AI systems and some of OpenAI’s systems, where they did things that the companies didn’t expect, and there was backlash against the companies because of that. And I think any organization introducing similar capabilities inside of their internal environment needs to start considering what their own guardrails are and how they can minimize the risk to their employees and to their organization.

[00:38:55] David Puner: There’s a lot to think about here, and there’s obviously a lot more we can talk about when it comes to this fast-moving technology. But that’s all the time we’ve got for today. So, Justin Hutchens, thanks so much for coming on to the podcast. Really appreciate it.

[00:39:08] Justin Hutchens: Hey, it was a pleasure. Thanks for having me.

[00:39:20] David Puner: Thanks for listening to Trust Issues. If you like this episode, please check out our back catalog for more conversations with cyber defenders and protectors. And don’t miss new episodes. Make sure you’re following us wherever you get your podcasts. And let’s see – oh yeah – drop us a line if you feel so inclined.

[00:39:38] David Puner: Questions, comments, suggestions – which, come to think of it, are kind of like comments. Our email address is trustissues – all one word – at cyberark.com. See you next time.