March 1, 2023

EP 22 – Deep Fakes, ChatGPT and Disinformation: Theresa Payton on Evolving Digital Threats (Part 2)

Today’s episode is part two of our conversation with former White House CIO, bestselling author and founder and CEO of Fortalice Solutions, Theresa Payton. If you missed part one, you can start here and go back to that episode. Or, you can start there and come back to this one – but you’re already here, so maybe just stick around?
 
In this episode, host David Puner and Payton continue their discussion, diving into the implications of AI and tools like ChatGPT for the cyber threat landscape – and the potential threats posed by deep fakes backed by synthetic identities. Also, could AI tech make it easier for bad actors to spread disinformation on a large scale? 

[00:00:00.000] – David Puner
You’re listening to the Trust Issues podcast. I’m David Puner, a senior editorial manager at CyberArk, the global leader in Identity Security.

[00:00:22.730] – David Puner
Welcome back or welcome for the first time to another episode of Trust Issues. Thanks for checking us out. Today’s episode is part two of my conversation with former White House CIO and current CEO of Fortalice Solutions, Theresa Payton.

[00:00:39.260] – David Puner
If you missed part one, you can start here and go back to that episode, or you can start there and come back to this one. But you’re already here, so maybe just stay? Let’s get right back into it with Theresa Payton.

[00:00:54.360] – David Puner
Now we’ve shifted to talking about machines, and that leads to AI, which is something that I’ve been excited to talk with you about. I’ve done a lot of thinking and reading about ChatGPT, as I’m sure everybody has at this point. I was also reading your most recent book, Manipulated, which came out in 2020.

[00:01:19.480] – Theresa Payton
When all bookstores were closed, yeah, it was a good time to launch a book.

[00:01:23.440] – David Puner
In it, you got into, among other things, AI, deep fakes and synthetic IDs. Now there’s ChatGPT, which seems to be a significantly advanced development in this realm. It seems momentous, very exciting and scary at the same time. What’s your take? Are we as a society prepared to handle the implications it can potentially bring to deep fakes, manipulation campaigns and the threat landscape in general?

[00:01:54.120] – Theresa Payton
We’re not prepared. Propaganda campaigns have been around since there were two human beings walking the Earth. Propaganda isn’t always bad. Sometimes it’s there to help society understand that a way of thinking needs to evolve and move forward.

[00:02:13.100] – Theresa Payton
Propaganda, in and of itself, has a bad connotation today, but it can actually be something positive. Think of a propaganda campaign around getting America’s kids fit and healthy and making good food choices. I think we’d all agree that’s a great campaign. It can be virtuous if it’s done the right way.

[00:02:36.260] – Theresa Payton
The problem with propaganda campaigns now is that, like you mentioned, deep fakes, AI tools like ChatGPT and other tools that are all, on the surface, meant for good things and altruism are always misused and misaligned by people with nefarious intent. We’ve known for years that it’s part of human nature to manipulate and misinform others to get them to see your point of view.

[00:03:06.160] – Theresa Payton
Then basically, we gave them a free stage with social media, and all they had to do was understand how search engine optimization and hashtags worked, and the next thing you know, they’ve got their own amplification marketing campaign.

[00:03:23.350] – Theresa Payton
I mean, in some regards, I think retailers and companies would really marvel at what some of these misinformation experts are able to accomplish with very little money. This has led to the rapid spread of misinformation on all types of topics, everything from cryptocurrency pump-and-dump schemes to health, politics and science, along with deep fake technology.

[00:03:47.820] – Theresa Payton
All of these can be used to create fake news, propaganda and disinformation. Now, with artificial intelligence, we’ve already seen where AI can help botnet masters manage their botnets much more effectively. We’ve already seen social media platforms try to ensure that you’re not dealing with a bot, and having a tool like ChatGPT and other deep fake and AI tools really just puts more power in the hands of the manipulators and the misinformation and disinformation peddlers. It’s making it really challenging.

[00:04:28.020] – Theresa Payton
I think, on the hopeful side, each and every one of us can be a community watching out for each other, spotting and stopping these misinformation and disinformation campaigns, and reporting inauthentic behavior, or somebody pushing misinformation or disinformation that’s dangerous to somebody’s health or well-being, to the social media platforms. I think what’s really encouraging is you’re starting to see people come out of big tech, academia and think tanks saying, you know what?

[00:04:59.200] – Theresa Payton
I think we can create models to detect AI, deep fakes and bots much better than we’re doing today. But we’re back in the situation where this is an arms race. Is it going to be the good guys on the side of authenticity, authorization and access controls, really rooting out what’s human behavior and what’s not? Or are the bad guys going to win this one? It may just be a leapfrogging type of exercise for many years to come.
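To make that arms race concrete, here is a minimal, purely illustrative sketch (not from the episode) of the kind of behavior-based scoring a bot-detection model builds on. The features, thresholds and names are hypothetical; real platform models learn these signals from labeled data rather than hard-coding them, and use far richer inputs.

```python
# Illustrative sketch only: a toy behavior-based bot score. Feature names
# and thresholds are hypothetical stand-ins for what real models learn.
from dataclasses import dataclass
from statistics import pstdev

@dataclass
class AccountActivity:
    post_intervals_sec: list[float]  # gaps between consecutive posts
    account_age_days: int
    duplicate_text_ratio: float      # share of posts repeating identical text

def bot_likelihood(a: AccountActivity) -> float:
    """Return a 0..1 heuristic score; higher means more bot-like."""
    score = 0.0
    # Machine-scheduled posting tends to be unnaturally regular.
    if len(a.post_intervals_sec) > 1 and pstdev(a.post_intervals_sec) < 2.0:
        score += 0.4
    # Very young accounts posting at high volume are suspicious.
    if a.account_age_days < 30 and len(a.post_intervals_sec) > 100:
        score += 0.3
    # Copy-paste amplification campaigns repeat the same text.
    if a.duplicate_text_ratio > 0.5:
        score += 0.3
    return min(score, 1.0)

suspect = AccountActivity(post_intervals_sec=[60.0] * 200,
                          account_age_days=5,
                          duplicate_text_ratio=0.8)
print(bot_likelihood(suspect))  # 1.0 -> flag for human review
```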

[00:05:32.340] – David Puner
The potential for the stakes to get much higher is there. It’s just a matter of what’s going to emerge from this. Obviously, propaganda is huge and has huge ramifications, but what about the potential for attacks to be perpetrated via AI bots or whatever it may be?

[00:05:52.220] – Theresa Payton
That potential is there. I did a prediction for 2023, back in 2021, that AI would start to launch misinformation and disinformation campaigns as well as cyber crimes without human intervention.

[00:06:08.250] – Theresa Payton
Basically, an engineer would set it up, do the machine learning and the behavior-based analytics. Really, a lot of what people call AI is not true AI; it’s more machine learning. But the algorithms would get sophisticated enough that they would actually start launching things on their own.

[00:06:29.240] – Theresa Payton
I hope I’m wrong, but I don’t see anything in place right now that can detect and stop AI from launching, whether it’s a botnet attack, a ransomware attack or some type of misinformation or disinformation campaign against an individual or an organization based on trending hashtags. If you think about it, the engineers who set it up have a lot to gain if these AI algorithms are successful.

[00:06:59.220] – Theresa Payton
If they’re able to get something to go viral and they’ve got ad campaigns hiding behind it, they can get pennies on the click. If they destroy the reputation of a company, that’s something nation-states are always very interested in having access to. Then, of course, there’s the garden variety of cyberattacks; if they can train AI to launch those without human intervention, that would be very popular on the cybercrime platforms where you can do cybercrime as a service.

[00:07:35.900] – David Puner
In the book, you mentioned, “Fraudsters and scammers will leverage cutting edge deep fake AI technology to create clone workers backed up by synthetic IDs.” We obviously are focused on identity security. What are the implications for fake identities, and how does identity security potentially come into play here?

[00:07:58.940] – Theresa Payton
Yeah, I mean, cyber teams, you’ve got to get busy. This is where your human user stories are so important, because you can look for all the different opportunities to say, well, this is a machine-to-machine interaction. How do we do the right level of identity access management? How do we maintain it? How do we do continuous monitoring of that access control and of anomalous behaviors? Then you can look at the human-to-machine and all of those other types of interactions and decide, again, based on your data classification and governance, where you need to be the strictest and where you need to be the strongest.
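As one hedged sketch of what that machine-to-machine monitoring could look like in code: the service identities, baselines and checks below are invented for illustration and are not drawn from any particular identity security product.

```python
# A minimal sketch: flag machine-to-machine access that falls outside a
# service identity's learned baseline. Identities and baselines are invented.
from datetime import datetime

# Baseline learned from history: which resources each service identity
# normally touches, and during which hours (UTC).
SERVICE_BASELINE = {
    "svc-report-gen": {"resources": {"reports-db", "blob-store"}, "hours": range(0, 24)},
    "svc-payroll": {"resources": {"payroll-db"}, "hours": range(8, 18)},
}

def is_anomalous(identity: str, resource: str, when: datetime) -> bool:
    """True if this access should be flagged for review."""
    base = SERVICE_BASELINE.get(identity)
    if base is None:
        return True  # unknown identity: always review
    return resource not in base["resources"] or when.hour not in base["hours"]

# A payroll service account touching the reports DB at 2 a.m. gets flagged.
print(is_anomalous("svc-payroll", "reports-db", datetime(2023, 3, 1, 2, 0)))   # True
print(is_anomalous("svc-report-gen", "blob-store", datetime(2023, 3, 1, 2, 0)))  # False
```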

[00:08:32.900] – Theresa Payton
My prediction for 2024 is basically that Franken-frauds and deep fake AI personas will actually enter the workforce. Franken-fraud is a nickname I and other people use; synthetic identity fraud is the main term if you look it up. Basically, to give everybody a quick primer: even without deep fakes and without AI, you have synthetic identity fraud now, where individuals who are very sophisticated in how credit granting works, and in applying for things in other people’s names and getting away with it, have a twist on it.

[00:09:19.640] – Theresa Payton
The twist is they start to apply for different things in a different name. They’ve created a fraudulent identity, but then they layer it on top of your or my legitimate identity. Now, all of a sudden, there are different things layered on Theresa’s identity, but it’s David’s stuff. Then, when people do their automated pulls and see a match, and you’re at the lower end of the spectrum on getting credit for different things, it gets missed and it gets approved.

[00:09:58.520] – Theresa Payton
I’m simplifying this; there’s a little bit more to it. But take that synthetic identity and now create a deep fake AI persona. Create an image of this new identity, create video, create a persona, and you can do all of that mostly for free today. Add voice to it, and the next thing you know, you’ve got somebody who can interview for a job. Many jobs today are remote, and so you may unknowingly hire a deep fake persona, because they’ve matched that deep fake persona up with a synthetic or Franken-fraud identity.
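To make the layering mechanics concrete, here is a toy, hypothetical illustration of why a loose automated pull can wave a synthetic identity through while a stricter corroborating match flags it. The field names and matching rules are invented; real credit pipelines are far more involved.

```python
# Toy illustration of synthetic identity "layering": a fabricated profile
# rides on a real identifier. All fields and rules here are invented.
real_record = {"ssn": "123-45-6789", "name": "Theresa", "address": "1 Main St"}
synthetic_app = {"ssn": "123-45-6789",   # borrowed legitimate identifier
                 "name": "David",        # fabricated persona
                 "address": "99 Elm Ave"}

def loose_match(app: dict, record: dict) -> bool:
    # Some legacy pipelines treat an identifier hit alone as "same person."
    return app["ssn"] == record["ssn"]

def strict_match(app: dict, record: dict) -> bool:
    # Requiring corroborating fields catches the layered mismatch.
    return all(app[k] == record[k] for k in ("ssn", "name", "address"))

print(loose_match(synthetic_app, real_record))   # True  -> approved; fraud slips through
print(strict_match(synthetic_app, real_record))  # False -> conflict flagged for review
```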

[00:10:37.860] – Theresa Payton
How do you safeguard against that? Well, for starters, you really do need to understand how to safeguard your executive data and your employee data.

[00:10:49.040] – Theresa Payton
Secondly, if you do remote hiring, a best practice you want to implement now is to engage an outsourced firm in whatever geography your candidate is in and have the candidate come into an office. Have them present different forms of identification. It’s not going to cost you that much, but it’s going to be a way to make sure you’re actually hiring the real person you think you’re hiring and not some type of deep fake, Franken-fraud individual. I know people think this sounds too far-fetched to be true, but seriously, it can happen.

[00:11:24.320] – David Puner
Right. If you do hire one of these Franken-fraud identities, is the point of their doing this to get in and get access, or is it because they actually think they can get on a payroll and start collecting a salary somehow?

[00:11:40.440] – Theresa Payton
It’s different motives. In some cases, the motive is insider threat: getting hired onto a certain project, or getting hired to interact with the group that works on certain technology. As themselves, they may not pass muster, but as a synthetic identity, they might. You may say, well, they’re not a computer engineer; they’re in an administrative support role or a research analyst role. If you think about it, today you can’t put everybody through the same level of background check.

[00:12:19.960] – Theresa Payton
You don’t put the CEO and your lowest-level individual through the exact same scrutiny of a background check; it would bog down the process and be very expensive. Right now, because of the level of technology they have access to, they tend to go for the lower-level positions and remote work. Maybe you’re doing transcription, maybe you’re typing up notes, maybe you’re doing spreadsheet analysis, but every work-at-home job that you create could potentially be a target for this. You’re going to need some way of screening and making sure you’ve got legit people with legit backgrounds working for you.
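As a hypothetical sketch of that tiered screening idea, combining the in-person ID verification above with risk-based depth: the role attributes and check names below are invented for illustration only.

```python
# Invented sketch: scale screening depth with role risk, since putting
# everyone through the CEO-level check is too slow and expensive.
def screening_tier(role: dict) -> list[str]:
    checks = ["identity document verification"]  # baseline for everyone
    if role.get("remote"):
        checks.append("in-person ID check via local outsourced firm")
    if role.get("data_access") in ("confidential", "restricted"):
        checks.append("employment and education verification")
    if role.get("privileged_systems"):
        checks.append("full background investigation")
    return checks

print(screening_tier({"remote": True, "data_access": "restricted",
                      "privileged_systems": False}))
```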

[00:13:02.860] – David Puner
It all sounds so crazy, but it’s so real. Shifting over to regulations, and increased regulations: how do you think the various cyber regulations around the world that are either rolling out or have recently rolled out will impact cybersecurity and international cyberattacks overall?

[00:13:22.620] – Theresa Payton
Regulations can be helpful because they’re a flag at the top of the mountain that everybody has to go achieve, and everybody talks in the same vernacular. Sadly, I think a lot of the time it’s payday for consulting groups to say, we do that, and we’ll help you achieve your compliance. There’s some goodness that comes out of regulations because it gets everybody talking the same way. It’s like a checklist everybody can rally around: you have a roadmap, a maturity lifecycle and milestones, and you make progress.

[00:14:02.430] – Theresa Payton
But my challenge with most regulation is that it’s based on preventing terrible events that have already happened, and so by the time it’s written, it’s really obsolete. Cybercriminal tactics have moved on, technology has moved on, how we use technology has moved on, and we’re litigating backwards against stuff that’s already happened. That makes it hard to make progress when you’re always doing a checklist on something from the past. You see, our laws don’t keep up with the technology, and our court system certainly doesn’t keep up with technology for the victims. That’s my challenge.

[00:14:48.740] – Theresa Payton
I often say that sometimes the laws and the regulatory frameworks, in some regards, were the worst thing to happen to true cybersecurity, because you spend so much time making sure you’ve checked off every single box that you haven’t got any time left for creatively thinking through, okay, what else could be a problem?

[00:15:15.260] – Theresa Payton
If you’re tracking some of these recent data breaches, I would bet that if you asked those organizations whether they passed their last audit against the regulatory frameworks, they probably did. It’s like a health checkup: you get your annual physical, get a clean bill of health, and you could still get sick two days later. Our systems are very much in that same situation; an audit is a point in time. We don’t do continuous monitoring of compliance against laws, although in some regards, the tools can help with that.
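A minimal sketch of that point-in-time versus continuous distinction follows. The control names and check bodies are stand-ins, not any real framework’s tests; in practice each check would query live configuration.

```python
# Sketch: one-off audit snapshot vs. scheduled re-checking of controls.
import time

def check_encryption_at_rest() -> bool:
    return True  # stand-in: query your storage configuration here

def check_mfa_enforced() -> bool:
    return True  # stand-in: query your identity provider here

CONTROLS = {
    "encryption-at-rest": check_encryption_at_rest,
    "mfa-enforced": check_mfa_enforced,
}

def run_compliance_checks() -> dict[str, bool]:
    return {name: check() for name, check in CONTROLS.items()}

# Point-in-time audit: one snapshot, like the annual physical.
print(run_compliance_checks())

# Continuous monitoring: re-check on a schedule and alert on drift the day
# it happens, instead of discovering it at next year's audit.
def monitor(interval_sec: int = 3600, cycles: int = 24) -> None:
    for _ in range(cycles):
        failing = [name for name, ok in run_compliance_checks().items() if not ok]
        if failing:
            print("ALERT: controls drifted out of compliance:", failing)
        time.sleep(interval_sec)

# monitor()  # in production this runs as a scheduled job, not inline
```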

[00:15:55.010] – Theresa Payton
On paper, that sounds amazing for everybody to think about doing. But in practice, it’s really hard. Just achieving zero trust architecture is hard. It’s a lot of talking. It’s a lot of changes. As somebody who has had to implement systems and encrypt data, I can tell you: it’s hard.

[00:16:13.110] – David Puner
Yeah.

[00:16:13.760] – Theresa Payton
For all of the people who are listening to me and have to implement the things I’m saying here: I see you, I hear you, and I have walked in your shoes. I know what I’m recommending is not easy. That’s why I always get very pragmatic and say, do your data classification. When you focus on these remedies and these strategies, focus there first.

[00:16:41.180] – Theresa Payton
It’s the same thing with regulation. Take, for example, GDPR. In the United States, you’ve got CCPA out of California and some of the laws that have come out of New York. Canada has a whole layer of privacy considerations that are slightly different from GDPR.

[00:17:00.930] – Theresa Payton
Again, these are all well-meaning and very important because we do need a flag to go look at and say, as a team, we’re all going to make it to that flag, and we all have to participate in making sure the organization gets there. But if all the time is spent getting to the flag and there’s no time left to do the creative work, you’re still going to have a catastrophe on your hands when you have a breach.

[00:17:26.220] – David Puner
When you work with clients as the CEO of Fortalice Solutions, how often are you stepping into these meetings for the first time after a worst-case scenario has happened and they don’t know where to start?

[00:17:39.400] – Theresa Payton
We only do incident response for clients we already work with. It’s a very crowded market space. There are some amazing vendors and colleagues out there for whom incident response is the majority of what they do. For me personally, meeting somebody for the first time in their darkest hour, that’s a really hard time to meet somebody.

[00:18:03.800] – Theresa Payton
Sometimes it’s better to have organizations for which that’s all they do; that’s all they think about, and they move on to the next incident. But for our clients, whom we know and work with, when they hit that darkest hour, yes.

[00:18:19.740] – Theresa Payton
One of the things I’m particularly proud of is that we have not had a person get fired because something bad happened. That’s because a lot of times I spend time with the executive team talking about, what are your worst nightmares? Why don’t we just voice those out loud? Then let’s talk about mitigating strategies to make those worst nightmares not so bad, and then let’s get that roadmap in place. Rarely do I see true, brazen negligence where somebody really should be fired. It’s best-faith efforts: people doing the best they can with the resources they’ve been given.

[00:19:02.120] – Theresa Payton
But one of the things, when these terrible things happen, is really sitting down and saying, we have a playbook for this; where do you want to start with the playbook? That’s why those incident response playbooks are so helpful. They’re not going to be the exact recipe in the middle of the incident, but they are going to guide the conversation. For example: hey, we said in the playbook, when we ran all kinds of scenarios, that we’re never going to pay the ransom. Why are we talking about paying the ransom when we did this playbook and you all said no, no, no? What’s the new scenario that we didn’t think of here? Then, in some cases, they’re like, you’re right, we’re not going to pay the ransom, but the insurance company says they want us to.
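One hedged illustration of what “we decided this before the emergency” can look like when a playbook is captured as data rather than prose: the structure and scenario names here are invented, not any standard format.

```python
# Invented sketch: a response playbook as data, so pre-incident decisions
# like "we don't pay the ransom" are written down before the crisis.
PLAYBOOK = {
    "ransomware": {
        "pay_ransom": False,  # decided calmly, pre-incident, by the exec team
        "first_actions": [
            "isolate affected network segments",
            "engage counsel and the insurer",
            "brief the executive team",
        ],
        "revisit_if": [
            "insurer pushes to discuss payment",
            "scenario was not covered in tabletop exercises",
        ],
    },
}

def guide(incident_type: str) -> None:
    """Walk the team through the pre-agreed decisions for this incident."""
    plan = PLAYBOOK.get(incident_type)
    if plan is None:
        print(f"No playbook for '{incident_type}'; convene the response team.")
        return
    print(f"Pay ransom? {plan['pay_ransom']}")
    for step in plan["first_actions"]:
        print(" ->", step)
    print("Revisit the decision only if:", "; ".join(plan["revisit_if"]))

guide("ransomware")
```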

[00:19:51.760] – Theresa Payton
To me, having that playbook and having had those conversations before the emergency, even if it’s not the exact thing you rehearsed, means the organization will have the muscle memory to not only survive but, on the other side of it, thrive. If you’ve done your homework and thought it through, and through your best-faith efforts you still have an issue, the biggest advice I always give the executive team is: be as transparent as you can, as soon as you can, without tipping your hand to the cyber criminals or other criminals. Transparency, ethics and following the playbook are huge.

[00:20:39.600] – Theresa Payton
I also tell executives that before a bad incident happens is actually the time to let your customers know: we have a process. If we ever experience a data breach and your accounts are of grave concern, these are the ways we’ll contact you, and these are the ways we will not contact you. By letting your customers know that there’s a certain way you will do outreach and a certain way for them to get in contact with you, you can help head off the opportunistic fraudsters when that breach happens.

[00:21:18.460] – Theresa Payton
It also shows your customers you’re thinking about this. I find that with some of the organizations I work with who have been through tremendously difficult times and outages, people gave them grace and space, and they were able to survive that horrible situation and, on the other side, thrive.

[00:21:43.860] – David Puner
Thank you for that. Really interesting. How are CISOs and other corporate protectors faring in general? It seems like they’re in very stressful positions, and the tenure span doesn’t seem all that long in general. Do you consider them to be protectors? What advice do you have for CISOs amid today’s landscape?

[00:22:12.100] – Theresa Payton
We have to be students of our jobs. The job is changing every day because the technology and the human user stories change every day, and that means opportunities for cybercriminals and fraudsters change every day. The role of the CISO continues to evolve, and one of the things I would encourage a CISO to be thinking about is this: you will never have the full span of control that you probably should have to protect all of the digital assets under your organization’s care. You really do need to understand that, and once you do, ask yourself how you can influence the organization so that there’s security by design.

[00:23:06.100] – Theresa Payton
I’ll give you an example. You’re never going to have enough tools, budget and people. Even if you could hire every req you wanted, and everybody said, I’m quitting my job to go work for this one CISO at this one organization, you’re never going to have the span of control you think you should have to make sure everything’s secure. What we really need to be thinking about and influencing is that security by design. I was just at Immaculata University recently, where we have an offensive cyber operations course available to the public and to students, and it dawned on me, because we were talking about engineering and all the different majors, and how some of those majors are now getting a certificate in cybersecurity.

[00:23:53.250] – Theresa Payton
I said, it’s interesting: engineering and architecture have been around for hundreds and hundreds of years. We don’t build a building and then say, well, now that it’s mostly built, why don’t we bring in the people who think about building security and ask them what they think about earthquakes, fires, floods and high winds? We’ve learned the hard way, with bridges collapsing and buildings collapsing, that that’s not the way to build a building. But we are still in the immature stages of how we think about building applications, whether it’s mobile apps, the internet, whatever it is. We are still in those immature stages where we say, hey, now it’s time to get the CISO in here, and even then it’s already too late. Is the CISO going to sit with the developers the whole time? Is the CISO going to sit with the third-party marketing firm doing internet campaigns on social media platforms? How do you make sure you have influence everywhere within the organization? You don’t do it by being a bottleneck and saying no to everything. You do it by figuring out: what is the mission and the value set of our organization?

[00:25:15.640] – Theresa Payton
Based on those human user stories of our employees, our customers and our third-party vendors, where do things need to be secure? How do I influence the people responsible for that human user story? That is where CISOs need to go with their thinking. If you don’t wake up every morning thinking about that, in addition to what you’re doing today, you’re going to miss a unique opportunity to have a long-lasting legacy and impact.

[00:25:49.920] – David Puner
You’re talking about the CISO and the CISO’s team, and it gets us to cybersecurity professionals. Of course, we talk a lot about the cybersecurity talent gap, the shortage. What do you see as the primary challenges when it comes to the cybersecurity talent gap?

[00:26:08.250] – Theresa Payton
What’s interesting is I see it less as a talent gap. We have a creativity gap in taking current applied work experience or school experience and translating that into filling a cybersecurity role with the proper mentorship, coaching and training. We actually have an experience gap, and part of the reason we’ve been developing training for years is that I really want to be a disrupter in removing the barriers to entry into cybersecurity.

[00:26:48.870] – Theresa Payton
For example, having to achieve a degree and/or a certification in cybersecurity today basically eliminates people who don’t have the economic access to afford a two-year degree, a four-year degree or a certification. They don’t have time to step away from the job to study for the certification, pay for the boot camp, pay to test and pay to retest.

[00:27:18.530] – Theresa Payton
The failure rate when somebody takes some of these certifications for the first time is 60%. We as an industry should not be applauding ourselves for that. Why are we eliminating people from the industry? It’s almost like we created some cool kids’ club and we won’t let anybody else in unless they’re as cool as us. We have to open our mindset.

[00:27:42.220] – Theresa Payton
It starts with the job descriptions everyone’s writing. Most job descriptions that I read have the alphabet soup of degrees and certifications; they’re all in search of the same person. I see job descriptions saying a junior role needs five years of experience. Where are they supposed to get that experience? I read these job descriptions, and I’ve got to say, they are soul-crushing, and from a DEI perspective, you are eliminating so many people.

[00:28:14.620] – Theresa Payton
When people ask me why there’s a talent gap, I have to push back and say there’s a creativity gap in hiring managers. We really have to think differently. If you look at the people who worked for me, whether in banking, at the White House, or today at my company, not everybody has a degree. Not everybody started off in cybersecurity. Some people were music majors.

[00:28:45.000] – Theresa Payton
Some people were in the military or law enforcement, but I saw something in them. I saw an insatiable desire to problem-solve. I saw a desire to contribute to the greater good, to fight back against cybercrime, to make sure that under our watch, there wouldn’t be other victims. When I worked in financial services and in the White House, I also had technology deployments to do as well.

[00:29:12.640] – Theresa Payton
I saw a desire in people to take their skills and truly help other people, through those human user stories, using technology. You can train, coach and mentor. Now, that’s not to say you can have a whole team of people who are all inexperienced and say, yay, now we’re going to be a great team. Of course, you’ve got to have different levels of experience. But why are all your job openings written that way? Why do you have such barriers to entry? Why don’t you look for an opportunity to take a chance on people? You’ll find them in the process.

[00:29:50.430] – Theresa Payton
People say to me, how do you have such diversity, equity and inclusion at your small company? I’ll tell you: my team knows I say to them, I will not start interviews until I see a diverse slate of candidates from all different walks of life and all different backgrounds. Now, I do have some clients, especially the government clients, who tell us exactly what they will accept on their billets. But as it relates to the other work, we have more leeway, and so we’ve made a commitment to that.

[00:30:26.230] – David Puner
Really interesting and really inspiring. Theresa Payton, it’s been an honor. Thank you so much for all the time you’ve given us. Where and when can folks, if interested, go and read your new predictions for ’23 and ’24?

[00:30:44.680] – Theresa Payton
Yes, we’re in the process of getting those posted, so stay tuned for more on our Fortalice Solutions blog. I love Trust Issues, but I always say it’s not a trust issue or paranoia if it’s true.

[00:31:00.740] – David Puner
It’s just been really interesting and fun. Thank you for all your time. I really appreciate it.

[00:31:07.300] – Theresa Payton
Well, thank you, and keep up the great work.

[00:31:19.520] – David Puner
Thanks for listening to today’s episode of Trust Issues. We’d love to hear from you. If you have a question, comment (constructive comment, preferably, but it’s up to you) or an episode suggestion, please drop us an email at trust [email protected], and make sure you’re following us wherever you listen to podcasts.