July 6, 2023
EP 31 – How Generative AI is Reshaping Cyber Threats
While generative AI offers powerful tools for cyber defenders, it’s also enabled cyber attackers to innovate and up the ante when it comes to threats such as malware, vulnerability exploitation and deep fake phishing. All this and we’re still just in the early days of the technology. In this episode, CyberArk Labs’ Vice President of Cyber Research Lavi Lazarovitz discusses with host David Puner the seismic shift generative AI is starting to bring to the threat landscape – diving deep into offensive AI attack scenarios and the implications for cyber defenders.
There’s been lots of heavy breathing over generative AI, and for good reason. The landscape we were familiar with just a few months ago has seemingly been shaken. Wherever this all may be heading, it seems like there’s no turning back. Pick a landscape or a field or whatever. It’s likely being upended or has the potential to be upended by the AI technology.
[00:00:39.710] – David Puner
One particular space that’s experiencing a seismic shift is, of course, cybersecurity. Generative AI to cybersecurity is akin to something maybe like microwave energy to frozen food. I’m speaking metaphorically here rather than scientifically. I don’t really know. Maybe that lands, maybe it doesn’t. Let’s call it supercharged scalability.
[00:01:02.710] – David Puner
On the cybersecurity front, we’re seeing some of generative AI’s potential for both defenders and attackers. While ChatGPT and other tools can be powerful forces for good, they’ve also unleashed a tsunami of attacker innovation from creating malware to pinpointing and easily exploiting hidden vulnerabilities to cloning voices for deep fake phishing attacks. In fact, CyberArk’s latest research found that 93% of security professionals expect AI-enabled threats to affect their organization in 2023.
[00:01:40.210] – David Puner
It’s true that attackers are constantly innovating, yet compromising identities remains the most effective way to circumvent cyber defenses and access sensitive data and assets, making this yet another compelling reason why organizations need to better protect identities.
[00:01:59.030] – David Puner
As you’ll hear today’s guest explain it, generative AI is changing the threat landscape, and it’s learning and improving fast. Our guest today is Lavi Lazarovitz who’s vice president of Cyber Research at CyberArk Labs. That could be an AI-enhanced way of saying he heads up our cyber research team.
[00:02:18.190] – David Puner
The team itself is comprised of an elite group of white hat hackers, intelligence experts, and cybersecurity practitioners. I talk with Lavi about generative AI and various offensive AI attack scenarios and how it’s impacting the overall threat landscape and enabling attacker innovation and what it all means for cyber defenders and protectors.
[00:02:40.390] – David Puner
Let’s get to it before generative AI releases an alternate version of this episode. Did that one land? Here’s my conversation with Lavi Lazarovitz.
[00:02:53.190] – David Puner
Welcome back to Trust Issues, Lavi. I recently heard this question on another podcast and thought it was interestingly phrased, if a bit awkward. Where does this podcast find you today?
[00:03:05.930] – Lavi Lazarovitz
That’s a good one. I’m currently at the CyberArk headquarters in Israel, on the fifth floor, where we have CyberArk Labs. It’s been a long day. We discussed a few of our recent research projects with the team and did a few retros about the security research we’ve done and what we learned from it. I had lunch very late because of these meetings. This is where I’m at now.
[00:03:36.360] – David Puner
Well, as we record this podcast, we’re somewhat fresh off our own CyberArk IMPACT 23 conference in Boston. One of the highlights, without a doubt, was your keynote on offensive AI, which is what we’re here to talk about today. Surprise. I don’t know if you knew that or not.
[00:03:53.360] – David Puner
Before we dive into that though, we last spoke with you here about a year ago; last July, in fact, in episode 7, when we didn’t know what we were doing, and we talked about cyber attack cycle deconstruction. As a refresher, you head up our CyberArk Labs team in Israel. What is CyberArk Labs, and what’s your role?
[00:04:18.810] – Lavi Lazarovitz
CyberArk Labs is a unit within CyberArk, and our main mission is to represent the attacker for CyberArk. We represent the attacker for our product teams when we are building new lines of security defenses and features. We represent the attacker perspective when it comes to thought leadership: trying to foresee how threats will evolve and how they will materialize in the near future, and taking that to think about the next type of defense, things we can do within our CyberArk products to protect organizations better, to protect identities better.
[00:05:04.100] – Lavi Lazarovitz
This is our mission statement, our whole focus on the research side. I actually have a few roles in that as head of the research team. The first role is to direct the research into the right places, whether that’s new technologies or new authentication protocols, and to make sure that we are on the right track in terms of research. My second role is a bit of what we are doing here: storytelling. Bridging the gap between the most technical research that we’re doing and how that connects to what we do every day. Trying to tell the story not only within CyberArk but externally, with the security community. Storytelling has a significant part in my world.
[00:05:51.100] – Lavi Lazarovitz
The third one is objectives: trying to manage and make sure that we are hitting the impact. That’s what we call it within CyberArk, the impact that we are creating based on our research.
[00:06:03.190] – David Puner
It’s a very cool job. You play the attacker; your team essentially plays the attacker. I’ve also heard you say the unofficial mission is to break things. What does that mean in this context?
[00:06:14.070] – Lavi Lazarovitz
I think that one of the things we try to do when a new emerging technology comes along, like biometric authentication a few years ago, or containers also a few years back, is to map the threat, the attack surface. You do that just by trying to break it: escape the container, bypass the authentication, break the protocol. This is how we learn a lot about the technology and the related attack surfaces.
[00:06:44.610] – David Puner
Very cool. Needless to say, it’s like a 20-hour-a-week job and very relaxing.
[00:06:55.830] – Lavi Lazarovitz
It’s obviously completely the opposite. To be completely honest, in many cases it’s very frustrating, because you’re going down rabbit holes. You know that you have something, but you can’t exploit it, and it takes so much time. The research process isn’t something that you see immediate value from.
[00:07:17.360] – David Puner
Well, you guys do really interesting work. As an aside, which I highly doubt will make the cut here, but who knows, it could be gold: you mentioned rabbit holes, going down rabbit holes. In the last couple of days, for the first time, I actually really thought about what a rabbit hole is, because right outside my window over there, I have a very dumb, fat rabbit that refuses to go away.
[00:07:45.490] – David Puner
I checked out its hole and I was really amazed at how small this thing is, how fat this bunny is, and how it actually gets into the hole. It’s going to be really bummed out. We planted grass about five, six weeks ago. As soon as I open up the barrier, which is going to be tomorrow, my dog is probably going to tell that bunny to find another home. Big aside.
[00:08:15.550] – Lavi Lazarovitz
Good luck with that rabbit hole then.
[00:08:17.550] – David Puner
Thank you very much. It is the dumbest, fattest thing I’ve ever seen.
[00:08:23.250] – David Puner
Now, moving back to the main course, AI. It probably goes without saying, but there’s pretty much non-stop talk about AI these days. We’ve talked about it quite a bit on this podcast, too. We first talked about it in an episode earlier this year when we had a guy from your team on the podcast, Eran Shimony, who successfully got ChatGPT to create polymorphic malware. Something I’ve been curious about is why there’s so much buzz about AI now. Generative AI has been around for several years. What’s changed?
[00:08:56.580] – Lavi Lazarovitz
The short answer to that is simply scale. You’re absolutely right to say that those models have been around for quite a while, but what happened just recently is the number of parameters those models can digest to create a very accurate and comprehensive answer to our prompt. This is what happened.
[00:09:23.090] – Lavi Lazarovitz
Just to give you a sense, and I also discussed this at my recent IMPACT keynote: GPT-2 was based on 1.5 billion parameters. GPT-3, which came out 2 years later, was based on 175 billion parameters. Those numbers are just remarkable. This very steep curve in volume, in the number of parameters, made the change, and the quality came with it. This is what we see today.
[00:10:01.960] – Lavi Lazarovitz
I should also mention that it’s not only about the numbers here. There’s a lot behind those models and numbers: a lot of tricks and new, brilliant, innovative ideas about how to process this data, how to tag it, the algorithms behind it. A lot of work is hidden behind these numbers. But the bottom line here is scale.
[00:10:28.810] – David Puner
Have you been keeping a watchful eye on AI for the last number of years? Or is this something that’s flown onto your radar in the last six months, like with everybody else collectively?
[00:10:41.240] – Lavi Lazarovitz
Well, I’ll tell you this: I think it was two years ago, we started looking into biometric authentication, and more specifically face recognition. We looked into Windows Hello, how it is implemented, how you could potentially bypass it. We also released research about how an attacker with physical access to the victim’s machine can connect a device and have that device inject images, for example processed images that you got from Facebook, into the authentication process and in this way get access to the machine.
[00:11:21.670] – Lavi Lazarovitz
One of the things that we had to handle back then is the fact that biometric authentication or more specifically Windows Hello requires the infrared image of the victim to get authenticated. It processes the infrared image.
[00:11:38.260] – Lavi Lazarovitz
Now, how can you get access to an infrared image of your victim? One way is to take a picture with an IR camera. The other way is to download an image from Facebook and then use an AI model to transform that image to look like an IR image. We played with it just a little bit and tested it, and it had a lot of issues. We got an IR image, but it wasn’t quite right. That was about two years ago, the last major research we’ve done that had an AI aspect to it.
[00:12:18.560] – David Puner
How has this supercharged scalability of generative AI changed the way that threat actors operate and what are they targeting?
[00:12:28.370] – Lavi Lazarovitz
We understood several things looking into the recent generative AI trend and our analysis. We understand that generative AI nowadays is very accessible and accurate, and it is getting better. Threat actors can use it for bypassing authentication and to create more trusted, more believable phishing campaigns.
[00:12:54.830] – Lavi Lazarovitz
We tend to think that threat actors will use generative AI in the early stages of the matrix that lists the different attack techniques across the attack chain. We think that generative AI will be used there. We have already seen evidence of such activity, but it’s not something that is common yet. We expect it to become very common for threat actors to use generative AI to generate malware and to bypass authentication.
[00:13:28.170] – David Puner
I guess that is a great segue into thinking about these common attack tactics and sequences. At what stage is AI making the biggest impact?
[00:13:41.030] – Lavi Lazarovitz
Based on our analysis, what we have seen already and what we expect to see in the near future is generative AI having an impact on the first stage in the attack matrix, which is reconnaissance. Generative AI, or maybe I can say here more specifically AI and machine learning technology, as it becomes better, will make it easier for red teams and blue teams, threat actors and legitimate actors alike, to find vulnerability patterns. Those models are becoming better at finding patterns, and finding a variety of vulnerability patterns will happen more effectively. We expect that to have an impact on how well we can find vulnerabilities, from both the blue side and the red side.
[00:14:37.530] – Lavi Lazarovitz
Another tactic that we can expect to be influenced is resource development, and malware development in particular. Generative AI has been used to create code, even ChatGPT, which is not a model that was intended to create code, but it can. We’ve seen this model used to create code and even malware. We also experimented within CyberArk Labs to create malware, to create polymorphic malware; we’ll talk about that soon. Malware development is definitely something we should expect to be influenced by generative AI.
[00:15:16.620] – Lavi Lazarovitz
The last attack technique or tactic that I think will see significant influence from generative AI is initial access. Initial access is the stage where threat actors get an initial foothold on an endpoint or a server. One of the most common attack techniques listed under initial access is phishing. And generative AI opens a new world of vishing, for example, the voice version of phishing: sending messages that are voice-based. Generative AI can now be used to deepfake other voices. It opens a world that we can expect to see more of in the wild threat landscape.
[00:16:03.020] – David Puner
At our IMPACT conference in Boston, you had an interesting part of your presentation where you put up an example of a phishing attack in the form of a message from our chairman Udi Mokady. It was still a little bit primitive, but probably not far off from being completely believable. How far away are we from that?
[00:16:29.960] – Lavi Lazarovitz
To be honest, I think that if I got such a message in the right context, it would be very difficult for me to differentiate, to feel there is something off here and that it might not be a genuine voice. Such an attack requires a lot of context and a lot of preparation, because sending a message out of the blue, from an unknown number, from someone who usually doesn’t send you voice messages, immediately raises alarms. The attack that we demonstrated was tailor-made and we had the background information.
[00:17:08.150] – Lavi Lazarovitz
Back at IMPACT, I generated Udi’s voice saying that he needs the recent research report and he will see me soon at IMPACT. There was a lot of trust-creating context that might not be available upfront for all threat actors. I think that the voice aspect, the technical aspect, at least to my maybe not-so-good ear, is sufficient. The context, and the overhead of creating the context for the attack, is not something trivial. But we can definitely expect such campaigns to become more common, to become available to threat actors.
[00:17:57.730] – Lavi Lazarovitz
Because there is a dark web with tons of PII that can be used to create that context. We share a lot of information on social media; it can be used to create that context too. I’m pretty sure that soon we’ll see these elements come together to create mass campaigns, or even targeted campaigns, that rely on that type of phishing attack.
[00:18:25.330] – David Puner
That’s obviously pretty scary, considering those of us who are in this field, and folks who are just generally aware of cyber hygiene, have gotten decent at being able to spot a phishing email, for example. You look for grammatical errors, you check the email address, et cetera. But what if you’re actually getting a phishing attack and it’s believable? The voice sounds right and it’s from your executive chairman, and the executive chairman is asking for the TPS reports immediately, to cite the movie Office Space.
[00:19:04.410] – David Puner
How is somebody going to be able to differentiate between something that’s real and not? How are we going to get to the point where… Are we going to have to second-guess everything that we potentially receive?
[00:19:15.010] – Lavi Lazarovitz
To be honest, it’s going to be tough. Many machine learning and AI experts predict that it will eventually be practically impossible to differentiate AI-generated audio, images, text, and so on from human-created ones. I think the key here will be, like we do today with phishing emails, to look at the indications around it: what number you’re getting the message from; whether the channel used, WhatsApp or Teams or Slack and so on, is the one usually used; the tone; maybe even the phrasing or wording of the message. We have to look at the indications around it.
[00:20:05.410] – Lavi Lazarovitz
I have to say that it seems it won’t be a balanced equation here between generative AI used for malicious activity and AI on the defensive side. It might be the case that the defensive side will need other means to defeat, mitigate, or defend against this type of attack. Underneath all of that, it’s our identities that will need to be protected better. The new lines of defense that organizations build should focus on the identity perspective here.
[00:20:46.180] – Lavi Lazarovitz
Because eventually, looking at the attack matrix and considering all those developments in the threat landscape, those lead to the identities that allow threat actors access to our bank accounts, to our corporate applications or assets, and so on.
[00:21:04.970] – David Puner
Look at that, it all ties back to identity.
[00:21:08.410] – David Puner
Which makes it pretty apropos that we’re discussing this topic right now. As a related question, are there areas where AI can’t really help attackers?
[00:21:21.820] – Lavi Lazarovitz
Those three early attack stages, reconnaissance, malware development, and initial access, are the ones we can expect to see influenced by generative AI. At CyberArk Labs, we also looked at polymorphic malware, which maybe we can talk about a little bit now, and that might have an influence on a later stage: the defense evasion tactic that appears later in the attack matrix.
[00:21:49.410] – Lavi Lazarovitz
Polymorphic malware is, essentially, malware that mutates its code without mutating its functionality. Until recently, malware was considered polymorphic if it had a different type of encryption or used different encryption keys to encrypt the payload, making it look a little different to a security agent and more difficult to identify as that specific malware. Now, generative AI allows threat actors to create malware that actually mutates: the code itself looks different. The implication is for detection and mitigation of that malware. Normally, you identify the malware, you know it’s malicious, you stop it.
[00:22:34.160] – Lavi Lazarovitz
Now, if it changes all the time, that makes it very flexible and allows it to evade detection. This is another stage that we’ve looked into at CyberArk Labs, with the concept of polymorphic malware and how it can actually be implemented.
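To make the classic, encryption-based form of polymorphism Lavi contrasts against concrete, here is an editorial sketch (not from the episode; a harmless string stands in for a malicious payload): two copies of the same payload encrypted with different repeating-XOR keys look entirely different to a signature scanner, yet decrypt to identical content.

```python
# Illustrative sketch: classic polymorphism via per-sample encryption.
# The "payload" here is a benign stand-in string.
import hashlib

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR; applying it twice with the same key decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"do_the_same_thing()"

variant_a = xor_crypt(payload, b"key-one")
variant_b = xor_crypt(payload, b"key-two")

# The on-disk bytes a signature scanner sees differ between variants...
print(hashlib.sha256(variant_a).hexdigest() == hashlib.sha256(variant_b).hexdigest())  # False

# ...but both decrypt to the exact same functionality.
print(xor_crypt(variant_a, b"key-one") == xor_crypt(variant_b, b"key-two"))  # True
```

The AI-generated polymorphism described in the episode goes further: rather than re-encrypting identical bytes, the code itself is rewritten each time, so even the decrypted body differs between samples.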
[00:22:52.010] – David Puner
When thinking about all these things that are going on with AI, what are the risks to organizations, and what can be done to get out in front of that development from a defensive standpoint?
[00:23:03.050] – Lavi Lazarovitz
I think there are a couple of things we can say from the defensive perspective. The first is that the tactics related to escalation of privileges or lateral movement still rely on this brick, which is identities. Threat actors, and we see it in all the recent breach reports and even malware operations, rely on credentials and permissions to get access to other servers or to the asset. Creating a comprehensive line of defense around identities is still critical to mitigating the evolving threat landscape we see out there, which will be influenced more and more by generative AI.
[00:23:53.310] – Lavi Lazarovitz
The other aspect that is super interesting to consider is being malware agnostic; malware-agnostic defenses are critical here. We see that generative AI can create polymorphic malware that is more flexible, changing a lot and adapting to the environment to better evade detection. Being malware agnostic is a critical element considering the evolving threat landscape and the influence of generative AI. I think those two main aspects, Identity Security and the malware-agnostic approach, are critical for an effective line of defense.
[00:24:33.890] – David Puner
What can organizations do now to, I don’t even know if it’s get out in front of this or not because it’s here, but what can organizations do now?
[00:24:42.000] – Lavi Lazarovitz
The first thing to do is consider what Identity Security controls the organization currently has and make sure those are reinforced with preventive controls and monitoring controls. On one hand, you have, for example, conditional access: when someone, whether an employee or a threat actor, attempts to access a sensitive asset, and that is not how the organization usually works, you pop up an MFA challenge or restrict access to a specific set of IPs within the organization. That already puts the organization in a better place in terms of handling the evolving threat landscape.
[00:25:28.180] – Lavi Lazarovitz
The other aspect is monitoring and reacting: being able to identify what humans or machines are doing within the organization is critical to identifying when malicious activity is taking place. To your question, I would start with these two points as a baseline for future developments on the defensive side and for building an effective line of defense, one that is also effective against this new generative AI threat landscape.
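The conditional access idea Lavi describes can be sketched as a simple policy function. Everything here is hypothetical for illustration (network ranges, asset names, and decision values are invented; this is not CyberArk’s implementation): access to a sensitive asset from outside the organization’s usual IP ranges is denied or forced through step-up MFA.

```python
# Hypothetical conditional-access policy sketch.
from ipaddress import ip_address, ip_network

TRUSTED_NETWORKS = [ip_network("10.0.0.0/8"), ip_network("192.168.1.0/24")]
SENSITIVE_ASSETS = {"hr-database", "domain-controller"}

def access_decision(user_ip: str, asset: str) -> str:
    trusted = any(ip_address(user_ip) in net for net in TRUSTED_NETWORKS)
    if asset in SENSITIVE_ASSETS and not trusted:
        return "deny"           # sensitive asset from an unusual network
    if asset in SENSITIVE_ASSETS or not trusted:
        return "require_mfa"    # step-up authentication on anything unusual
    return "allow"

print(access_decision("10.1.2.3", "wiki"))            # allow
print(access_decision("203.0.113.5", "hr-database"))  # deny
```

A real control would key on far richer signals (device posture, time of day, behavioral baselines), but the shape is the same: unusual access to sensitive assets triggers additional friction rather than silent approval.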
[00:26:01.020] – David Puner
There are tools out there where you can paste in a piece of text and get a readout of whether it was generated by ChatGPT, for example. I don’t know how accurate they are, but it’s definitely somewhat eye-opening and interesting. Could we at some point be looking at some sort of tool or solution running in the background on systems that will automatically check for AI-generated attacks, essentially?
[00:26:34.760] – Lavi Lazarovitz
If the input were a certain type of code or activity, a bunch of logs, for example, it could work. I think it could at least work in the near future, where AI will be good, but not good enough to evade AI-based detections.
[00:26:56.310] – Lavi Lazarovitz
What I’m trying to get at is that nowadays generative AI may create straightforward, naive code that is more trivial to detect with other AI tools. Looking forward, and I think we’re talking about two or three years from now, that detection will become less and less effective. This is the generative AI part, the malware aspect.
[00:27:19.170] – Lavi Lazarovitz
If we’re talking about logs, trying to identify malicious activity within logs and letting generative AI classify a set of logs as indicating legitimate or malicious activity, I tend to think that AI will definitely have an impact there and will help defenders identify malicious activity. To be honest, there is a question mark here, at least for me, because I remember the last machine learning hype, about three, four, five years ago, maybe more. There was huge hype around machine learning and how it would be able to identify malicious activity and be used in threat detection technologies. The reality was much more complicated.
[00:28:06.250] – Lavi Lazarovitz
The amount of false positives just grew a lot. The technology has changed so much. We’re seeing it from the identity perspective: there are so many services and applications out there, containers, serverless functions, and so on, that look very different when you’re looking at the code and at the logs. I’m sure that AI and the developments in that area will create innovation around detection of malicious activity. I do have a question mark as to how fast, and whether it will be effective in creating a good signal-to-noise ratio, where you can really rely on it to detect malicious activity.
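The signal-to-noise concern is easy to quantify with a back-of-the-envelope base-rate calculation (all rates below are invented for illustration, not measured figures): even a strong detector produces mostly false alarms when truly malicious events are rare.

```python
# Back-of-the-envelope base-rate calculation; all rates are invented.
events = 1_000_000               # log events scanned
malicious_rate = 0.0001          # 1 in 10,000 events is actually malicious
tpr, fpr = 0.99, 0.01            # detector true/false positive rates

malicious = events * malicious_rate
benign = events - malicious

true_positives = malicious * tpr
false_positives = benign * fpr

precision = true_positives / (true_positives + false_positives)
print(f"{true_positives + false_positives:.0f} alerts, "
      f"precision {precision:.1%}")  # roughly 1 in 100 alerts is real
```

This is the signal-to-noise trap Lavi alludes to: cutting the false-positive rate, not just raising raw accuracy, is what makes a detector usable in practice.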
[00:28:51.110] – David Puner
In your IMPACT keynote, you talked about a study by threat researchers from Tel Aviv University that used generative AI to bypass facial recognition. How did they do that? And what has CyberArk Labs done related to AI and biometric authentication? I think you already hinted at that a little.
[00:29:08.740] – Lavi Lazarovitz
Yeah. The researchers at Tel Aviv University had a brilliant idea; I loved it. They asked themselves whether it is possible to use a generative AI model to create a master face that can be used to authenticate against all the faces out there.
[00:29:33.030] – David Puner
Like one face that can unlock anything?
[00:29:36.650] – Lavi Lazarovitz
That was their question, and they researched it, tried to answer it. They used a type of generative AI model called a GAN, a Generative Adversarial Network. It’s a bit different from what we know from the Midjourney model, for example, which takes text and turns it into an image. The GAN actually creates a vector, a set of characteristics of an image, and then tweaks it just a bit to adjust the image. Then they compared the image against a database of image vectors and learned how close this vector was to other image vectors. They ran this iterative process until eventually they got a set of nine images; not one, but nine images that together matched more than 60% of the images they had in their database.
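The tweak-and-compare loop Lavi describes can be mimicked in toy form. This is an editorial sketch, not the researchers’ method: random vectors stand in for both the GAN’s latent space and the enrolled face templates, and the similarity threshold is arbitrary. The loop hill-climbs one candidate vector so that it "matches" as many enrolled vectors as possible.

```python
# Toy sketch of the master-face search: greedily tweak one candidate
# vector so it matches (cosine similarity >= threshold) as many
# enrolled vectors as possible. Everything here is synthetic.
import math
import random

random.seed(0)
DIM, THRESHOLD = 16, 0.5

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def matches(candidate, database):
    # Number of enrolled vectors this candidate would authenticate against.
    return sum(cosine(candidate, face) >= THRESHOLD for face in database)

database = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(200)]
candidate = [random.gauss(0, 1) for _ in range(DIM)]

initial = best = matches(candidate, database)
for _ in range(2000):
    tweak = [x + random.gauss(0, 0.1) for x in candidate]  # small mutation
    score = matches(tweak, database)
    if score >= best:  # keep any tweak that does at least as well
        candidate, best = tweak, score

print(f"matches went from {initial} to {best} out of 200")
```

The real research constrained the search to a GAN’s latent space so every candidate still looks like a plausible human face; that constraint is what makes the resulting "master faces" usable against real recognition systems.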
[00:30:40.990] – Lavi Lazarovitz
When I read it, I thought this was remarkable, because although this is theoretical research, it means this master key, master face attack vector is viable. You can do it. As I mentioned earlier, the scaling curve is super steep. The models are getting better and better. In the near future, we’ll probably be able to create images that match more than 60%, 70%, 80% of the images out there.
[00:31:09.670] – David Puner
So those nine faces could unlock 60% of anything requiring some sort of facial recognition?
[00:31:17.970] – David Puner
That’s a huge problem.
[00:31:19.270] – Lavi Lazarovitz
Exactly. That’s incredible. At CyberArk Labs, we thought, okay, how can this actually be used by threat actors, what would it look like? We said, okay, we take the set of images we can create using this GAN model, embed it on a device that we connect to the victim’s machine, and use the device to shoot those image vectors at Windows Hello. We were actually able to show, and I demonstrated this at the recent IMPACT event, how we get access to the victim’s machine using it.
[00:31:57.580] – Lavi Lazarovitz
I do expect a development or evolution on both sides. Models become better, and those models will create better images. On the other hand, I know that there’s a lot of work done on improving face recognition and biometric authentication. Those two curves go up, there will be friction.
[00:32:18.410] – David Puner
I should say that if folks want to see this in action and read more about all this, they can check out your post on the CyberArk blog, called Analyzing 3 Offensive AI Attack Scenarios. So you successfully do that; now what? What do you do with that?
[00:32:37.590] – Lavi Lazarovitz
Once you have those images in hand and you can use them to bypass authentication and get access to a victim’s device, it’s game over. Now, we’re talking about a specific attack scenario where the attacker has physical access to the victim’s machine. As soon as we get access, it’s game over.
[00:32:58.520] – Lavi Lazarovitz
One thing that will also become available soon is, for example, RDP remote desktop connections that use Windows Hello and face recognition for authentication. With that enabled, a victim’s machine or a server that allows remote desktop with facial authentication can be accessed by threat actors with this capability of bypassing facial authentication using those master faces. That’s an additional scenario.
[00:33:33.320] – David Puner
Really fascinating stuff. Let’s turn things around now. Can you share some examples of ways AI is helping the good guys combat AI-based threats?
[00:33:44.070] – Lavi Lazarovitz
One thing we’ve started seeing is generative AI used, for example, to draw on the intelligence of many organizations out there to create a tailor-made policy for every organization. Just imagine this: you have your security controls, whether those are CyberArk’s or not, and you get a default policy out of the box. Now you work hours, days, weeks, months to tweak that policy to fit your needs: the applications you run within your organization, how you use those applications, the permissions your users, business users or admins, need to have. That’s a lot of investment.
[00:34:31.180] – Lavi Lazarovitz
Now, generative AI brings the opportunity to tailor the default policy to you, based on many policies out there and the many tweaks and adjustments that other organizations have made to fit their needs. You can use generative AI to just create that policy for you. I think this is just the tip of the iceberg in terms of the opportunities generative AI brings to defenders. Adapting security policies, which I know security teams invest a lot in, might become much easier to do.
[00:35:11.490] – David Puner
Technology is constantly changing and attackers are constantly innovating. How do cybersecurity practices and tools need to evolve to keep pace?
[00:35:21.110] – Lavi Lazarovitz
I touched on this a little earlier when we talked about reconnaissance and how AI, not necessarily generative AI, can be used to identify vulnerability patterns. For security researchers and practitioners, this is also a big opportunity, because those models can be used to identify hundreds and maybe thousands of security issues well before they are deployed in production, raising the bar a lot for threat actors, who also have access to those AI tools.
[00:36:04.430] – Lavi Lazarovitz
One thing I’m sure about, and I know we’re also looking into it at CyberArk Labs, is boosting our tools with AI capabilities that will improve our return on investment in terms of time versus the issues and vulnerabilities we find. This is just one example of what we’re busy with.
[00:36:29.720] – David Puner
Interesting. Thank you for that. To wrap things up here: when we talk to you here again a year from now, and I hope we talk to you sooner than that, but let’s say it’s a year from now, what do you think we’ll be talking about when it comes to generative AI? And if you were to look back to this time last year, what would surprise you most about the popular or public emergence of this technology, or boom, really?
[00:36:56.150] – Lavi Lazarovitz
Yeah. Okay, so I think that a year from now we’ll talk about the explosive innovation that happened during the past year. It will be in the security realm, for threat actors and defenders, and in other realms and domains as well. We see it now, and I’m sure we’ll keep seeing it. As for what would surprise me: I think that a year from now, if we don’t hear about how AI was leveraged in at least one or two security incidents or breaches, that would definitely surprise me. I think it’s coming and it’s coming fast. I’d be surprised if we don’t hear about it.
[00:37:41.730] – David Puner
How is AI changing the threat landscape and driving attacker innovation?
[00:37:46.410] – Lavi Lazarovitz
Generative AI is changing the threat landscape and driving innovation by allowing threat actors to be more effective when it comes to finding vulnerabilities, generating malware, and bypassing authentication, for example by allowing threat actors to create a vishing campaign, the voice version of phishing, to create a sense of trust in their victims.
[00:38:16.100] – Lavi Lazarovitz
For example, at IMPACT Boston, on stage, I demonstrated how Udi Mokady’s voice can be deepfaked, recreated, asking me to share sensitive recent research reports. That creates a sense of trust, and it’s just a small example of how generative AI can be used across new domains and new channels to establish a sense of trust with victims; a new, or renewed, attack vector based on generative AI.
[00:38:50.090] – David Puner
Lavi Lazarovitz, the real Lavi Lazarovitz, thanks so much for coming back on Trust Issues. It’s been really eye-opening and interesting, as always.
[00:39:00.590] – Lavi Lazarovitz
It’s always a pleasure, David. Thank you.
[00:39:12.110] – David Puner
Thanks for listening to Trust Issues. If you like this episode, please check out our back catalog for more conversations with cyber defenders and protectors. Don’t miss new episodes. Make sure you’re following us wherever you get your podcasts.
[00:39:26.540] – David Puner
Let’s see, drop us a line if you feel so inclined: questions, comments, suggestions, which, come to think of it, are kind of like comments. Our email address is trustissues, all one word, @cyberark.com. See you next time.