27 12 月, 2023

EP 42 – Year in Review 2023: Unleashing AI, Securing Identities

In this year-end Trust Issues podcast episode, host David Puner takes listeners on a retrospective jaunt through some of the show’s 2023 highlights. The episode features insightful snippets from various cybersecurity experts and thought leaders, each discussing crucial aspects of the ever-evolving cyber landscape. From discussions on the dynamic nature of threat actors and the need for agile security approaches to insights on identity security challenges in the cloud and the intricacies of safeguarding data, the episode encapsulates a wealth of knowledge shared by industry professionals. With diverse perspectives on generative AI, risk management, cloud security, DevSecOps – and even a personal bear wrestling story – the Trust Issues 2023 canon delivers an engaging compilation for both cybersecurity enthusiasts and industry practitioners.


As the podcast looks back on the year’s diverse lineup of guests, it serves as a valuable resource for anyone seeking to stay informed about the latest cybersecurity trends, strategies and challenges. The episode emphasizes the importance of adapting to the rapidly changing threat landscape, adopting innovative security practices and fostering collaboration to address the multifaceted nature of cyber risks in the modern digital era.


Clips featured in this episode are from the following guests:

Eran Shimony, Principal Security Researcher, CyberArk Labs

Andy Thompson, Offensive Security Research Evangelist, CyberArk Labs

Eric O’Neill, Former FBI Counterintelligence Operative & Current National Security Strategist 

Shay Nahari, VP of Red Team Services, CyberArk

Diana Kelley, CISO, Protect AI 

Len Noe, Technical Evangelist, White Hat Hacker & Biohacker, CyberArk

Theresa Payton, Former White House CIO, Founder & CEO of Fortalice Solutions

Larry Lidz, VP & CISO, Cisco CX Cloud

Matt Cohen, CEO, CyberArk

Charles Chu, GM of Cloud Security, CyberArk

Brad Jones, CISO & VP of Information Security, Seagate Technology

Dusty Anderson, Managing Director, Global Digital Identity, Protiviti

Phillip Wylie, Offensive Security Professional, Evangelist & Ethical Hacker

[00:00:00] David Puner: You’re listening to the Trust Issues podcast. I’m David Puner, a Senior Editorial Manager at CyberArk, the global leader in identity security.

[00:00:25] David Puner: Hello and welcome to this special year-end 2023 edition of Trust Issues. As they say in the business, it’s been a year. We’ve seen things like devastating breaches, game-changing AI trends, and identities, both human and non-human, proliferating across all sorts of environments, from the cloud to seemingly throughout the galaxy and beyond.

[00:00:48] David Puner: Kind of like devious gremlins multiplying behind your back or something like that. And in today’s episode, we’re taking a look back at some of our enlightening conversations from the last 12 [00:01:00] months. We’ve had a lot of great guests this year, from CISOs to former White House CIOs to spy hunters, and we hope you’ll find these clips of interest.

[00:01:10] David Puner: Not just as a look back, but also as a look forward. Taken together, conversations with an identity focus, conversations on the cloud, and conversations in and around AI, all with ties to identity security. They all come back to today’s security challenges that have grown more complex thanks to three driving forces:

[00:01:32] David Puner: New identities, new environments, and new attack methods. And with 84 percent of organizations experiencing an identity-related breach in the past year, the ability to build business resiliency depends upon meeting these three challenges head-on. The security practitioners who have generously given their time to come onto this podcast are all dedicated to protecting their organizations and customers from [00:02:00] identity-based threats.

[00:02:01] David Puner: The following clips are just a sampling from the Trust Issues 2023 canon. And we hope they’ll inspire you to check out our back catalog to listen to the full conversations and explore others. So, let’s get into it. Here’s our 2023 Trust Issues around-the-horn. Thanks for listening.

[00:02:23] David Puner: A lot of attention in 2023 was given to generative AI, and justifiably so. And our podcast was right in on that action. And when we released episode 20, Hacking ChatGPT, on February 1st, we had something big to share. Our guest was Eran Shimony, CyberArk Labs Principal Security Researcher, who, in an effort to stay ahead of the bad guys, and in the spirit of improving security with AI, had ChatGPT create polymorphic malware. And for this, Eran received quite a lot of attention.

[00:02:56] David Puner: Here he is trying to help us understand if we are collectively prepared to deal with generative AI and the implications it may have for cyber threats.

[00:03:06] Eran Shimony: I believe that we should also think about the usage of ChatGPT in security products as well, ChatGPT or other chatbots. And I believe that in the future it will be able to provide us better security, because as attackers or cyber criminals use them, defenders can also use the same tools to provide maybe better solutions. For instance, maybe in the future, I can ask ChatGPT, please go over the entire GitHub repository of code and try to find bugs that I can later fix. Or maybe it can be an automated thing that will try to find bugs and also apply patches to them. So the options are limitless and the future is unknown. It’s very exciting, but unknown and a tad bit scary, in my opinion. [00:04:00]

[00:04:03] David Puner: Shifting from a hypothetical malicious activity by our own researchers for the sake of getting ahead of the bad guys to real-life attacks, which unfortunately were and continue to be an all too certain reality: a couple of months ago, I caught up with our own Andy Thompson, who’s CyberArk Labs’ Offensive Security Research Evangelist, for a discussion focused on deconstructing two recent high-profile breaches and their implications:

[00:04:29] David Puner: One targeting MGM Resorts International, and the other involving Okta’s support unit. Here’s Andy Thompson talking about the foundational component of the MGM attack.

[00:04:41] Andy Thompson: The linchpin was the disabling of the multi-factor. And, I mean, it was simply done via social engineering. I mean, I can come up with an example really quick ? you know, you call the help desk and you say, Hey, Bob, I’m so sorry, I was on a fishing trip and I dropped my phone in the water, I got a new phone, but I need to reset my MFA.

[00:05:02] Andy Thompson: Can you hook me up? I mean, it’s just as simple as that. So once they were able to log in and authenticate, that’s when they got access to the Okta backend. And from there, they were able to bolt on their own IDP, which gave them persistence along with the ability to escalate their privilege. And so ultimately that allowed them to get access to any application that Okta had configured.

[00:05:29] Andy Thompson: Some time went by, and they did some stupid things like installing malware on a particular server, and that was actually picked up. That really kicked off the incident response. And so, at that point, IR decided to shut down the Okta instance, trying to, you know, kick them out. Once they did that, they knew the jig was up, but they left MGM with a little gift that they outsourced to ALPHV, or the BlackCat ransomware group.

[00:05:57] Andy Thompson: And they pushed probably one of the most devastating types of ransomware called intermittent encryption. They pushed this to their ESXi or their virtualization infrastructure. And so they were running ransomware and were just shutting down hypervisor after hypervisor after hypervisor. These are the servers that actually host the virtual machines that MGM was using for all their systems from reservations, booking, even the slot machines.

[00:06:28] Andy Thompson: And so as the ransomware was going through all those hypervisors, the systems just started shutting down one after another, after another, after another. And so that really was what kicked off the chaos.

[00:06:45] David Puner: Speaking of ransomware, it’s of course thriving, and 2023 has been a lucrative year for ransomware actors, to say the least. According to IBM’s 2023 Cost of a Data Breach report, the total average cost of a ransomware attack for victim organizations is 5.1 million dollars. Back in August, I got to talk with Eric O’Neill, the former FBI counterintelligence operative and current national security strategist, to discuss his legendary undercover mission to capture Robert Hanssen, one of the most notorious and damaging spies in U.S. history. As it turns out, O’Neill’s experience with Hanssen, the OG malicious insider, if you will, draws intriguing parallels between spies and cyber criminals, shedding light on identity security’s significant role in thwarting insider espionage and defenders’ continuous push to outpace attacker innovation.

[00:07:44] David Puner: Here’s Eric O’Neill talking about ransomware and its ties to espionage and spycraft.

[00:07:50] Eric O’Neill: Ransomware is not only one of the most damaging cybercrimes today, but it’s also used by spies not just to infiltrate networks, but to breach them, maintain persistence in them, and then take them down. So we worry about ransomware, for example, against critical infrastructure.

[00:08:09] Eric O’Neill: So spies are using it. Cyber criminals are using it. And ransomware uses all of these tools and tactics of espionage that have been around for centuries: deception, fooling people into clicking on links; impersonation, getting people to trust through confidence schemes, so that you trust what you’re looking at.

[00:08:30] Eric O’Neill: And you click on that link or open the attachment and you load the malware yourself. Infiltration: obviously, ransomware doesn’t just want to land by stealing one person’s credentials. They want to spread through an entire network and compromise as many credentials and systems as possible. And then destruction.

[00:08:46] Eric O’Neill: At the end of the day, ransomware destroys. In fact, I wonder why ransomware attackers don’t just infiltrate, spread, and destroy all data after stealing it, and then ransom it back, as [00:09:00] opposed to giving a key. So ransomware is a great example of spycraft at its best. And the criminals, they’ve learned to do everything the spies do.

[00:09:14] David Puner: Today’s attackers employ a business innovation mindset and are constantly upping their game. Just look at the marketplace for ransomware: strains are made, bought and sold, e-commerce style. And in 2023, software supply chains continued to be a hot topic. Cascading supply chains, to be more specific. Over the summer, we caught up with Shay Nahari, who’s the leader of CyberArk’s Red Team, for a discussion about attacker innovation and how he and his team, as offensive defenders, stay on the cutting edge. Here’s Shay talking about cascading supply chains.

[00:09:51] Shay Nahari: We’ve seen supply chain attacks happening for the last 20 years, right? But in the past, we’ve seen it usually reserved for military and defense organizations.
[00:10:03] Shay Nahari: We’ve seen some attacks against critical infrastructure happening from a supply chain. And we’ve seen this also leak into the more financially motivated groups. Some very famous ransomware started from a supply chain attack where the attacker compromised some vendors and from there moved to the consumer level.

[00:10:24] Shay Nahari: Cascading supply chains take it really one step further. So instead of just compromising a single vendor ? let’s say a software vendor ? that allows you to move into other customers or other organizations, in a cascading supply chain, the actors compromise a vendor that provides software to another vendor.

[00:10:44] Shay Nahari: That provides software to another vendor and so on. So it’s kind of further down the chain, and that achieves a couple of things. One, obviously, wider spread, right? If you are earlier in the supply chain, you have the ability to affect more organizations or more targets. By the way, when I say organizations, it could also be individual users; we’ve seen cascading supply chains affecting them too.

[00:11:06] Shay Nahari: So it could be B2B or B2C types of attacks.

[00:11:13] David Puner: That was Shay Nahari talking about threat innovations in episode 35. For another interesting 2023 guest talking about threat innovations, specifically from an offensive AI attack scenario perspective and the implications for cyber defenders, check out episode 31 with Lavi Lazarovitz, who heads up CyberArk Labs’ research team.

[00:11:34] David Puner: And with that, we’re back to artificial intelligence, the AI gold rush specifically, where there’s lots of data feeding the development engine. Data that needs protection so that it doesn’t wind up in the wrong hands. Back in June, I spoke with Diana Kelley, who’s the CISO at Protect AI, a cybersecurity company focused on AI and ML systems.

[00:11:56] David Puner: And we explored the principle of least privilege in AI, the privacy implications of using AI and ML platforms, and the need for proper protection and controls in the development and deployment of AI and ML systems. Here’s Diana Kelley discussing the accuracy and overall understanding of AI at that point in time.

[00:12:17] Diana Kelley: AI hallucinations: it’s an interesting term for two reasons. One is that it sounds a little bit like we’ve got sentient AI. Like it’s capable of knowing what’s real and what’s not real, and it’s just sort of hallucinating. And it’s kind of like, oh, you know, it’s out at Burning Man and it’s having a party and it’s having these wild images in the desert, maybe.

[00:12:38] Diana Kelley: But what a hallucination really is, is an inaccurate response. It’s not the reality. It’s not the truth. And I think that that can be really confusing. It sounds sort of not that bad. Oh, it hallucinated, but really it’s this tool. This algorithm is not giving an accurate response. And if you need accuracy and response to that question, then that can be truly problematic.

[00:13:04] David Puner: Sticking with the AI thread, which is the thread that kept giving in 2023: when I caught up with CyberArk’s resident transhuman Len Noe in June (yes, Len is technically a cyborg, and we encourage you to check out our back episode that focuses on what that’s all about), the focus of our conversation this time was the concept of synthetic identity, which involves gathering your publicly available, unprotected data and then giving it an AI and predictive analytics ride. That’s an overly simple and somewhat vague explanation of it, so here’s Len Noe, technical evangelist, white hat hacker, biohacker and cyborg, thinking like an attacker, which is what he’s paid to do, explaining synthetic identity.

[00:13:50] Len Noe: What would happen if we took a look at all of the different little bits of information that we leave on the internet through standard use? And what if I was able to actually collect all of that information, almost like doxing someone, then correlate that information through some type of AI chat model? This is all public information, things that you put out there on the internet: places you’ve lived, vehicle registrations, school attendance, utility bills. All of that stuff is publicly available. But the problem is that all those little bits and pieces of information are spread out across multiple different databases, so it’s very hard to make correlating points between them. But if you take something like ChatGPT and put all of that information into something whose main intended purpose is to find those types of correlations...

[00:14:48] Len Noe: I was actually able to interact with a digital version of myself that had understanding of, as well as information about, my physical existence.

[00:15:01] David Puner: That was Len Noe talking about synthetic identity. Definitely check out that full episode when you have a chance for more on the subject, as well as Len’s POV on the implications for cybersecurity and his concerns about sharing personal and proprietary information with AI chatbots and platforms. Also, I wrote an article on Len and synthetic identity, which was a lot of fun to do.

[00:15:23] David Puner: You can find that article, titled, Synthetic Identity: When AI and ML Crunch Your Harvested Data, on the CyberArk blog. As it turns out, though, the first time we discussed synthetic identity on this podcast was earlier in the year, when I got the chance to speak with former White House Chief Information Officer and Founder and CEO of Fortalice Solutions, Theresa Payton.

[00:15:48] David Puner: And we had so much to talk about, we released the conversation in two installments. In the following clip, from the second of those two episodes, which was released on March 1st, Theresa takes a stab at a [00:16:00] 2024 cyber prediction, back when most prognosticators were letting the ink dry on their 2023 predictions.

[00:16:08] David Puner: Here’s Theresa Payton talking about synthetic identity fraud and deepfake AI personas and a particular potential threat scenario.

[00:16:18] Theresa Payton: My prediction for 2024 is basically that Franken frauds and deepfake AI personas will actually enter the workforce. Franken fraud is kind of a nickname myself and other people use, but synthetic identity fraud is kind of the main term, if you look it up. So, just to give everybody a quick little primer on what’s happening:

[00:16:46] Theresa Payton: Without deepfakes and without AI, you have synthetic identity fraud now, where individuals who are very sophisticated in how credit granting works, and in applying for things in other people’s names and getting away with it, have a twist on it. And the twist is, they start to apply for different things

[00:17:10] Theresa Payton: in a different name that creates sort of a fraudulent identity, but then they layer it on top of your or my legitimate identity. So now, all of a sudden, you know, there’s different things layered on Theresa’s identity, but it’s David’s stuff. So then, when people do their automated pulls and they see kind of a match, and you’re at sort of the lower end of the spectrum on getting credit for different things.

[00:17:41] Theresa Payton: It gets missed and it gets approved. And I’m simplifying this; there’s a little bit more to it. But now take that synthetic identity and create a deepfake AI persona: create an image of this new identity, create video, create a person. And you can do all that, you could do all that mostly for free today, and add voice to it.

[00:18:03] Theresa Payton: And now the next thing you know, you’ve got somebody who can interview for a job. Many jobs today are remote. And so you may unknowingly hire a deepfake persona because they’ve matched up that deepfake persona with synthetic or Franken fraud. So how do you safeguard against that? Well, for starters, you really do need to understand how to safeguard your executive data and your employee data.

[00:18:33] Theresa Payton: And secondly, if you do remote hiring, a best practice you want to implement now is have an outsourced firm, whatever geography your person is in, have them come into an office, have them present different forms of identification. And it’s not going to cost you that much, but it’s going to be a way to make sure you’re actually hiring the real person you think you’re hiring and not some type of a deepfake Franken fraud individual. And I know people think this sounds too good to be true, but seriously, it can happen.

[00:19:12] David Puner: From Franken fraud and deepfakes to the persistence of threat actors. Back in April, I spoke with Larry Lidz, who’s VP and CISO for Cisco CX Cloud. In his role, Larry thinks a lot about cyber risk, in the context of shifting tolerance levels for risk and how risk influences security decision-making. Here he is talking about how the good guys can learn from the bad guys.

[00:19:39] Larry Lidz: A threat actor tries something; if it doesn’t work, they try something else. And then when they start having success, they grow on that. I think we, as security professionals, have a tendency to still think in big-bang, large project rollouts. And we do this thing where we say, this is going to solve all sorts of problems.

[00:20:01] Larry Lidz: Let’s put together a three-year project. We’re going to roll this out across the company, and we’re going to do A, B, C, D, E and F. And then we get through all of that. And first of all, over three years, the threat actors have changed their targets and their methodologies and so forth. And probably they’ve already figured out how to bypass whatever controls we’re trying to put in place, because they’re nimbler than we are.

[00:20:21] Larry Lidz: So how do we change our mindset to think smaller, faster and then grow on that, incubate the ideas, see what’s working ? and if it’s not working, stop and do something different? I think that mindset and that approach, the way they do things, is something we can learn an awful lot from.

[00:20:41] David Puner: Attacker innovation is, of course, constant. And at CyberArk, our mission is to secure the world against cyber threats by securing identities and empowering organizations to unlock growth, innovation, and progress. If that sounds like a rallying call, you’d be right ? and that takes us to this next clip from my interview with CyberArk CEO Matt Cohen.

[00:21:01] David Puner: It was great to get the chance to sit down with Matt last summer for a discussion that ranged from his take on leadership to his transition into the CEO role in 2023 to his insights on identity security and the current threat landscape. Here’s Matt Cohen discussing the challenges customers face as they continue their transition to the cloud while maintaining hybrid environments, and how it all comes back to identity security.

[00:21:28] Matt Cohen: For organizations that have spent 20 years mapping out how an individual or an identity should be accessing, or what roles and permissions they should have, for an on-prem environment, they now need to figure out: “Wait, what do permissions look like when it’s developers who are accessing cloud environments, when it’s traditionally IT professionals that now have to access lift-and-shift environments?” What are the right access rights? What are the right entitlements? And by the way, what’s the right methodology for how we should enable access? Is it standing access or just-in-time access? Should we have privileges that are always on, or should we have zero standing privileges? This opens up a complexity I could go on about for, as you can tell, minutes and hours. The idea here is: how do we actually figure out how to seamlessly secure access for all those new identities while applying the right level of intelligent privilege controls, orchestrating the lifecycle, and making sure that we can protect not only on-prem, but hybrid and pure cloud environments.

[00:22:39] Matt Cohen: That is, at its core, identity security. That’s what gets me, as you can tell, excited about what CyberArk is off doing.

[00:22:50] David Puner: Continuing along the cloud thread, in the fall, I had a cloud-focused discussion with Charles Chu, who’s CyberArk’s General Manager of Cloud Security. In that [00:23:00] conversation, Charles talked about the complexities of cloud security and emphasized the need for tailored solutions to protect against evolving cyber threats.

[00:23:08] David Puner: He also answered the pivotal question: Why doesn’t cloud security taste like chicken? Here’s Charles Chu.

[00:23:17] David Puner: Let’s shift over to the Matrix for a moment. Earlier this year, you wrote a blog. The intro to the blog began with the movie, the original Matrix, where, well, I’ll let you tell it, but it has to do with everything tasting like chicken.

[00:23:33] David Puner: And that led you to talking about the human tendency to equate new things to things we already know. This is a long way of getting to: how is cloud security different from traditional security, what’s it all have to do with the Matrix and chicken, and why doesn’t cloud security taste like chicken?

[00:23:59] Charles Chu: Well, the reference is to the movie The Matrix, the original Matrix, where a group of them are sitting in that really dingy, grimy kitchen, and they’re eating this oozy, gray slop, and it’s like, what does it taste like? Well, it tastes like chicken, right? Everything tastes like chicken. And look, there’s a human tendency to try to recast every new thing in the context of something that we already know.

[00:24:24] Charles Chu: Lots of friends who’ve grown up in the western world, right? What does tofu taste like? Well, tofu kind of tastes like chicken, right? Whereas if you were going to ask my grandmother, who was born and raised in China, what does tofu taste like? She’d look at you like, tofu tastes like tofu, dude. Tofu doesn’t taste like chicken.

[00:24:41] Charles Chu: Tofu tastes like tofu, right? Because she learned it and interprets it for what it is. So when we think about the cloud, it’s not helpful to always try to find something that you’re comfortable with. It’s more helpful to think of it as a blank slate. Like we just talked about: some 1,400 native services, everything from EC2 auto-scaling compute to BigQuery big data analytics, with machine learning and AI in the middle, that you can rent and be using in literally seconds or minutes, right?

[00:25:26] Charles Chu: There’s nothing like that on-prem, right? And so just because you log in to an Azure console doesn’t mean you’re logging into a static version of Windows that you had 20 years ago, where you can use six well-manicured Windows admin roles to go manage hundreds of Azure services that are getting updates daily, and tomorrow there may be three more, right? So that type of dynamic environment really calls for something different.

[00:26:15] David Puner: Whether you prefer chicken or tofu or neither with your cloud, when thinking about cloud security there are different considerations for securing the access of both human and non-human identities. Back in May, Brad Jones, who’s the CISO and VP of Information Security at Seagate Technology, explained why non-human identities are more difficult to control.

[00:26:38] Brad Jones: The non-human aspect is more difficult to control. For one, if you set up, say, your O365 environment and you don’t explicitly prevent people from adding those services, people can pull in plugins and other services that start acting on their behalf. And if you don’t have tight visibility and controls around that, it can quickly snowball.

[00:27:00] Brad Jones: Some of the breaches we’re hearing about now are not from the end-user identity. It’s more of these machine or non-human identities, where the bearer tokens have been stolen and they’re acting on behalf of that user, and the user granted them that access. Perhaps there was some great little tool that helped them put a GIF image in their emails as they went out, yet it gave them read access to their entire mailbox. And, hey, that was a pretty important person, and now your sales figures or your financials are exposed as a result of it.

[00:27:37] David Puner: The intricate web of identities also shapes the modern development pipeline. And in May, I spoke with Dusty Anderson, who’s a Managing Director of Global Digital Identity at the consulting firm Protiviti, about all things DevSecOps: the secure, modernized state that effectively balances speed, risk and usability.

[00:27:58] David Puner: And our conversation [00:28:00] covered how and why security by design is both an imperative and a shared responsibility. Here’s Dusty Anderson discussing how modern software development and DevOps practices still follow Henry Ford’s early automobile manufacturing principles of speed and efficiency.

[00:28:19] Dusty Anderson: Someone gave me the example the other day of, DevOps really came from kind of the automotive industry.

[00:28:24] Dusty Anderson: And if you think about the way that we build a car, it was just so simple. I’ve lived in this cybersecurity world for so long, you forget about kind of the simplicity of where a lot of these practices started from. And as you’re building that car, yes, it’s the wheel, it’s the axle, it’s the engine, but someone had to test that engine.

[00:28:42] Dusty Anderson: And that was maybe a third party that supplied the engine. So you hope that they’ve gone through their rigorous testing to know that that engine’s going to run and do everything that it was built to do, all the way down to all the spark plugs that are being put into that car. And then us as consumers, just expecting that, hey, this has been certified from (name your favorite car type) and you’re going to put your family in it.

[00:29:06] Dusty Anderson: You’re going to drive off assuming that it’s been securely tested and everything like that. Well, that’s kind of the balance of DevOps and DevSecOps as well. You know, DevOps, they want to build, and they want to build that car and make it through the assembly line as fast as possible. And TechOps is going to say, oh, wait a second, would you put your family in there yet?

[00:29:24] Dusty Anderson: Like, let’s go along the way and make sure that there’s incremental testing, so that if this lug nut is not staying on the wheel, it doesn’t pass Go. It doesn’t necessarily have to do with the engine, but the smallest detail could still cause you to go off the road. And so really thinking through and measuring all those details in that build process is really important.

[00:29:50] David Puner: That was Protiviti’s Dusty Anderson. And this brings us to the final clip of this 2023 look-back episode. And it’s probably only fitting that [00:30:00] we end it with a guest clip involving the guest and that time he wrestled a bear. That guest is Phillip Wylie, who’s an offensive security professional and evangelist and an ethical hacker, and a one-time nightclub bouncer-slash-pro wrestler.

[00:30:16] David Puner: I sat down with Phillip for a talk that ventured into hacking for good, how identity figures into penetration testing, and how working in the cybersecurity industry can feel a lot like bear wrestling. Here’s the one and only Phillip Wylie.

[00:30:32] Phillip Wylie: When I worked in a nightclub, they had an event where they brought in this wrestling bear, and since I was a part-time pro wrestler, they kind of used me to market the event, and so I actually wrestled the bear.

[00:30:43] David Puner: So I’ve seen the picture, and I can’t believe you’re alive to tell the story. Did you have any sort of hesitation about potentially wrestling a bear? And, what was the actual experience like compared to what you were envisioning going into it?

[00:30:56] Phillip Wylie: Yes, it’s like, I really wasn’t apprehensive about it. It was, [00:31:00] the bear was tame; he’s just like a big dog.

[00:31:02] Phillip Wylie: And so when I got in there, I just didn’t realize how difficult it was going to be when I was trying to wrestle this bear. I was kind of fortunate that I was the one that went last, so I was able to learn from the others’ mistakes. They were standing upright, trying to wrestle the bear. They would kind of lock up with the bear, you know, grabbing a hold of the bear, trying to take the bear down, and the bear would grab their legs and take them out from under them.

[00:31:25] Phillip Wylie: So in the picture, you can kind of see that I’m in a type of defensive lineman stance or wrestling stance, with my feet out, so my center of gravity would be more advantageous for me and make it more difficult for the bear to take me down. Because my legs were too far out, with those short limbs he had to reach out to try to grab me, and he just wasn’t able to take me down.

[00:31:46] Phillip Wylie: So it was an interesting experience. I mean, it was amazing. It was like trying to move a parked car. It was just really hard to get that bear to go down. But in hindsight, it’s probably a good thing because at that point, no one had ever taken that bear down. So what happens to the first person that takes that bear down?

[00:32:04] Phillip Wylie: You know, even though it’s tame, it’s not like your domesticated cats and dogs. Those have had years or centuries of domestication, and they don’t freak out like that. But this animal was probably the first domesticated bear in his family line. And so it would probably revert to its animal instincts, and, you know, if I would have taken it down, it might’ve freaked out.

[00:32:25] Phillip Wylie: I could have been injured. So, fortunately, that didn’t happen.

[00:32:30] David Puner: Well, needless to say, you’re the first Trust Issues guest – that we know of – who has previously professionally wrestled and wrestled a bear. Other than it being a fascinating story, I think one of the reasons why I’m asking you about it is because it’s interesting that you’ve made that transition from that kind of a career to where you are now.

[00:32:52] David Puner: Are there any connections that you’ve seen between wrestling and offensive security?

[00:32:57] Phillip Wylie: Yeah, one of the things that carries over from wrestling to [00:33:00] offensive security is the acting piece, because as everyone knows now, wrestling is – you know, I don’t really like to use the word fake, because you’re getting thrown around and you can get injured – but it’s a staged ending. So they know who’s going to win. But one of the things I would say that would help – I didn’t really get to do that enough, because I was on the losing end. When you’re starting out, you’re what they refer to as a job boy or a jobber. You know you’re going to get beat. You’re going in to make the star look good, because in these matches, they’ll have several of these smaller matches and they kind of build up the stars, where they go in and just beat the tar out of whoever they’re wrestling. And that’s it. So you make them look good. And then you’ve got the main events, where you’ve got two of the stars actually wrestling. So if I would have had more experience in the acting piece, it would have probably helped more with social engineering. Because with social engineering, you’re having to act. You’ve got a pretext, you’re trying to emulate some other type of person – it [00:34:00] could be help desk, it could be a manager, it could be a third party that’s doing business with that company. So that acting part would have helped, along with being used to a crowd and stuff like that. Your nerves should be a little more calm in that type of situation.

[00:34:20] David Puner: That guy wrestled a bear! Well, there you have it ? a tour around the Trust Issues 2023 archive. Thanks to all of our great guests. Unfortunately, we couldn’t highlight everyone here and keep this episode moderately lean, but we enormously appreciate every guest who sat down with us this year. We wouldn’t be here without you.

[00:34:40] David Puner: And thank you to you, our listeners. Whether you’re just finding us for the first time or you’ve been with us since episode one, we’re grateful for your listenership and look forward to continuing to serve you great episodes in 2024. Please check out our entire back catalog at cyberark.com/podcasts.

[00:35:00] David Puner: Or, of course, wherever you get your podcasts. And be sure to subscribe to Trust Issues on those platforms so you don’t miss an episode. From all of us here at CyberArk, we wish you a safe and happy new year. See you in January 2024 with new episodes.