September 11, 2024

EP 61 – Put Your Name on It: Identity Verification and Fighting Fraud

Aaron Painter, CEO of Nametag, joins host David Puner for a conversation that covers several key themes, including the inadequacies of current identity verification methods, the rise of deepfakes and AI-generated fraud, and the importance of preventing identity fraud rather than merely detecting it. Aaron discusses the role of advanced technologies like cryptography, biometrics and AI in improving identity verification. He also highlights the critical issue of social engineering attacks at help desks and the need for trust in digital interactions.

Aaron also touches on the challenges of verifying human identities and the responsibility of platforms to verify their users in order to create safe online communities.

Put your name on it and give it a listen! 

David Puner: [00:00:00] You’re listening to the Trust Issues Podcast. I’m David Puner, a Senior Editorial Manager at CyberArk, the global leader in identity security.

Hello and welcome to another episode of Trust Issues. Today I talk with Aaron Painter, who’s the CEO of Nametag, an identity verification platform. As Aaron tells it, his path into the field began during the pandemic when he helped friends and family who had their identities stolen. This personal experience led him to focus on creating more secure and reliable identity verification methods for enterprises.

In our conversation, he shares insights into the inadequacies of current identity verification methods, the rise of deepfakes and AI-generated fraud, and the importance of preventing identity fraud rather than merely detecting it. He also discusses the role of advanced technologies like cryptography, biometrics, and AI in improving identity verification.

One might call this episode a hearty snack for the ears. Here’s my conversation with Aaron Painter.

Aaron Painter: [00:01:00] Thanks, David. It’s fantastic to be here with you today.

David Puner: Really great to have you. Thanks so much. So, Nametag is an identity verification platform, and your world, like ours, revolves around identity. How did identity become a focus for you and how did it, along with your previous career experiences, ultimately lead you to your current role?

Aaron Painter: [00:02:00] Yeah. Thanks. For me, it was deeply personal. At the start of the pandemic, like many folks, our lives had moved digital, branches were closed, and in-person things were closed. I had a bunch of friends and family members who had their identities stolen. I said, “I’m going to be a good friend. I’m going to be a good son. We’re going to jump on the phone, call these customer support numbers, and figure this stuff out.” Everyone we called would say, “Well, I need to identify you and I need to ask some security questions,” and they asked one or two things. It turns out those questions were often easy to answer, and the reality was that someone had called before us, answered those very questions, and taken over the accounts. That’s what led to all the destruction that I was helping my friends and family members clean up.

David Puner: [00:03:00] Wow. What happened? Did they lose a lot of money? Did they get all their airline points taken away? What happened?

Aaron Painter: It was a pretty big mix—a broad range of things, from financial accounts to insurance fraud to unemployment fraud claims. Different people, different things. It felt like the world was ending, particularly for those few weeks. This just added to it because it was like, “Wow, even the digital infrastructure that runs our lives, we’re somehow not able to trust right now at a time when we really have digital-only relationships.” So, it was scary.

David Puner: [00:04:00] Yeah, I would think so. So, you were just doing this in your free time to help out friends and family. So then, what happens? Was there one of those, as they like to say in the biz, aha moments?

Aaron Painter: Well, we reflected a lot, particularly on financial services. We asked, “How is it that when you open a bank account, they ask you for some sort of proof—know your customer (KYC)—to satisfy financial regulations?” They need to verify that you are who you claim to be when you’re opening the account, and to know your source of funds. So, practically, if you do it remotely, that often means scanning an ID and taking a selfie. People have done this a lot; it’s becoming a more familiar flow. We said, “Why is it that a bank will ask someone to do that, but when you’re trying to transact with the bank, they don’t rely on that information? They don’t ask you to do that. They default to these security questions.” That’s what led us to dig into the underlying technology and where the gaps might be—to understand why companies aren’t relying on that kind of identity verification experience for secure transactions and secure workflows.

David Puner: [00:05:00] Okay. So, what were you doing as a career at this point, and how does that segue into going pro, as it were, in the identity space?

Aaron Painter: Now we focus on protecting people’s accounts and trying to prevent that very call I had, ironically, with so many of my friends and family members when we were trying to recover their accounts. We go one click before that and ask, “Can we prevent this attack from happening?” Crazily enough, it turns out to be the attack of the moment. A lot of this started late last summer in the world of MGM, the casinos in Las Vegas. 60 Minutes even did a piece on them—it kind of got that mainstream for a cybersecurity incident. Right? But the concept was that someone called the employee IT help desk, in that case, pretended to be the employee, answered some questions, and an innocent help desk rep did their best and ended up resetting access for the bad actor. The attacker then went in, deployed ransomware, and eventually took the company down for almost two weeks.

David Puner: [00:06:00] Right. We did a Trust Issues episode on that with Andy Thompson, who’s our in-house guy. He did a pretty good dive on that one as well. A pretty big incident, obviously.

Aaron Painter: No, that is the scenario. That concept of social engineering, as we now call it in modern terms—or, in a traditional way, we might have called it being a con artist. Social engineering at the help desk has become one of these critical issues of trust—a moment when you really need to trust who the rightful account owner is. Unfortunately, that’s become the way that bad actors are taking over people’s accounts today. It’s by far the leading threat vector at the moment.

David Puner: [00:07:00] So, just by way of background, what’s your career trajectory been like? I know you spent a lot of time with Microsoft in China. How does Aaron Painter get to be where he is today?

Aaron Painter: Gosh, that’s a deep question in many ways from a life perspective, but from a work perspective, it’s a little more straightforward. I spent most of my career—actually 14 years of it—at Microsoft. For several years after that, I ran a cloud computing consulting firm based in the UK. So, most of my career was outside the U.S., living and working in different markets, and I helped run international expansion at Microsoft, opening Microsoft into new countries around the world and emerging markets. I ran the Windows franchise in Brazil for a few years, and ultimately, I ran the enterprise business for Microsoft in China for five and a half years, three and a half of those years based in Beijing. It was just this wild period, particularly in China at that time, where anything felt possible. There was this wild appetite for growth—figuring out new things. We saw a huge boom in new consumer technology companies coming into the market. So, for me, it was this spirit where, “Hey, it’s okay to invent. It’s okay to try new things. It’s okay to come up with new solutions to problems.” That definitely stuck with me. More fundamentally, I’d say so much of my career has been working with large enterprises and helping them think about security issues, customer engagement issues, technology, and infrastructure. So, when I encountered this personal life problem, my frame and context were, “Gosh, this is a problem I know enterprises want to solve.” I know the help desk reps didn’t get into this line of work to be identity interrogators. They got into the line of work to help people. There must be a solution where technology can help.

David Puner: [00:08:00] And at the end of the day, of course, all these individual consumer experiences, like the one you mentioned earlier with the identity theft—these consumers are employees at large organizations, or they are, one way or another, having to verify themselves within organizations. So, it often comes back to the individual level, which then gets magnified at a larger scale. So, identity—now we’re back to identity, and we talk about it here all the time. But for the sake of understanding your POV toward the top of this conversation, how do you define identity, and why is it so critical in today’s cyber landscape?

Aaron Painter: I think of identity as proof that you are who you claim to be. I think of it as critical because it’s so much of, almost like the title of your podcast to me is about trust. I really feel that we have a deficit of trust in today’s society. So many of the different experiences we have in life are governed by these different online accounts and the ways that we log in and access them. Whether that’s your work account—you really sit down and log into work each day, right? You might not often show up in person, or even if you show up in the office, you’re then logging in, right? Or the social media accounts you might live with in your life, or the bank accounts, or communities, or gaming accounts—whatever it is you’re into as a person, your interests, your personality, so many flavors of who we are as people all start with this logging-in experience. To me, that’s a fundamental moment when we need to make sure we can trust the interaction we’re having.

David Puner: [00:09:00] So I’m going to ask the rube question here, which is, “Well, I’ve got MFA. Isn’t that enough?”

Aaron Painter: MFA is only as strong as the reset process in place for it. We’ve seen a lot of pressure in recent years—even from the White House executive order a few years ago—and from agencies like the FBI and the Department of Health and Human Services, advising industry after industry, from healthcare to national infrastructure, that multifactor authentication, adding a layer of MFA on top of a password, is a critical next step. It is a fundamental necessity because passwords themselves are no longer something we can rely on alone. The issue with MFA is that in the push to roll it out and put it in more places, we’ve created friction and some user frustration. But there’s also this little-known, or little-realized, change that happened: the average login page, where you would type in your username and password, often had a button that said “Forgot my password.” If you clicked on it, you could reset your password. When we added MFA to these accounts, that button basically went away. The only way to reset access, if you were locked out of MFA because you got a new phone, upgraded your phone, or lost your phone, was to call someone or launch a support ticket with a help desk. So, we took this secure concept of encryption, how we log in, MFA, and the technological benefits of all that, and pushed it back to a human process: you, the one who’s locked out, asking to get back in, or someone else calling to pretend to be you and asking to get back in on your behalf. That’s the concern.
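
To make the gap Aaron describes concrete, here is a minimal sketch in Python of an account-recovery gate that refuses to downgrade to knowledge-based questions when a user is locked out of MFA. Every name in it (VerificationResult, handle_mfa_reset, the injected verify_identity callback) is hypothetical rather than any vendor's API; the point is simply that the recovery path should demand the same assurance as enrollment.

```python
from dataclasses import dataclass
from enum import Enum


class VerificationResult(Enum):
    """Outcome of a high-assurance identity check (e.g., ID document + selfie)."""
    VERIFIED = "verified"
    FAILED = "failed"
    UNAVAILABLE = "unavailable"   # the verification service could not run


@dataclass
class ResetRequest:
    username: str
    claimed_reason: str           # "lost phone", "new device", etc.


def handle_mfa_reset(request: ResetRequest, verify_identity) -> bool:
    """Reset MFA only after a fresh identity verification.

    `verify_identity` stands in for whatever high-assurance check an
    organization uses. Knowledge-based security questions are deliberately
    not an accepted fallback.
    """
    result = verify_identity(request.username)
    if result is VerificationResult.VERIFIED:
        # Re-enroll MFA from scratch; never reuse the old factor.
        print(f"Re-enrolling MFA for {request.username}")
        return True
    # On failure or outage, escalate to a human process with its own controls
    # rather than silently downgrading to security questions.
    print(f"Escalating reset for {request.username}: {result.value}")
    return False


# Example: a stubbed verifier standing in for a document-plus-selfie check.
handle_mfa_reset(ResetRequest("jdoe", "lost phone"),
                 lambda user: VerificationResult.VERIFIED)
```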

David Puner: [00:10:00] Right. And that, of course, factored into the MGM breach you mentioned earlier. So, adaptive MFA is a good way of looking at that to a certain degree. But then, to get back to identity, you mentioned to me in advance of this conversation that identity is often misunderstood. What about it is misunderstood, and why do you think that is?

Aaron Painter: I think it’s a very big category. The way that people use the term—identity, first of all, as a fundamental word—can mean so many things. How we identify ourselves, how we come across, what we wear in public, how we want to be seen, concepts of gender—there are so many elements of the word “identity” and who we are. When I think of it, I think of it as proof that we are who we say we are, particularly in these digital contexts. So often, for most people, that’s an email address, a social media login, or, call it, even a password. It’s so lightweight. Fundamentally, you know, “Put in your phone number, therefore I have identified you. Put in the email address you choose to go by, and therefore I have identified you.” I had a customer support issue the other day; I emailed them for help, and they replied and said, “For security reasons, to identify your account, can you please tell us your email address?” Literally, I had emailed them. It’s pretty clear what my email address is, let alone all the background I’d already given them.

David Puner: [00:12:00] A little bit of a disconnect there.

Aaron Painter: That’s right. So much of that, to me, is the concern. We have gone to this lowest common denominator of low friction, allowing people to identify themselves in these lightweight ways. The issue with that is it’s very difficult to build trust. The classic context for me is a dating site. You’re going to go on a dating site, and you’re often going to log in with a social media login or just an email. If things go really well, you’re going to build trust with another person to the point where you’re going to meet in person. Often, that’s a very high-trust, meaningful moment. You see people, in practical terms, say, “Oh, I know we’re about to meet up in person, but my, whatever, family member suggested, ‘Hey, can you just send me a copy of your ID or send me a text here, just to make sure things are safe.’” And you often can’t actually trust the platforms. Part of it is because the core identity being used on those platforms is incredibly primitive, and it’s not really proof that you know who you’re interacting with.

David Puner: [00:13:00] So then, that leads to not knowing who you’re dealing with—you could be dealing with somebody entirely unreal. Deepfake—how do AI-generated deepfakes play into the overall identity challenge, and how are they being used in social engineering attacks?

Aaron Painter: I think we’re really seeing the rise of deepfakes in the last couple of years, particularly the last several months, coinciding with the rise of generative AI. Part of it is that we have equipped all individuals, including bad actors, with an incredible new superpower tool to impersonate someone else. We’re seeing deepfakes pop up, obviously, in media clips: someone says, “Here’s Elon Musk promoting some new product that you should go buy, or make this investment decision because Elon thinks it’s a good idea.” Or, “Here’s a political figure who might have said something, because here’s a video that sort of proves it.” We’re seeing those appear in a lot of contexts. What we’re also seeing, though, is that concept of “I’m locked out of my account. Please reset my access.” That is a moment of impersonation, and we’re seeing bad actors use these new tools to create deepfakes to overcome the challenge of proving to the help desk agent that they are the person whose account they’re trying to access. We see it in voice, too. Microsoft has been very interested in this in new research. They say that with just three seconds of someone’s voice, extracted from a video, a podcast, or a social media post, you can create a deepfake of that voice. Now, will it be perfect? Will it be high-fidelity audio that some of our most elite podcast producers might think is okay? Maybe not, but it might still be enough to cause damage or to trick a help desk agent into thinking, “Oh, I’m on a bad phone line. That’s why I’m having trouble with my access.” It sounds plausible. Voice, video, and image deepfakes have become just sort of a reality that we’re seeing pop up everywhere in this spirit of impersonation.

David Puner: [00:15:00] It seems that most of the folks I’ve talked to seem to think that they can readily spot a deepfake. Up until recently, I was probably one of those people who felt that way, but they’ve definitely gotten a lot more sophisticated in the last year or so. How much more sophisticated have they gotten, that you’ve seen, and how are you keeping up with their rapid evolution?

Aaron Painter: I think deepfake technology is only going to get better. If you look at the pace of the generative AI models and how quickly they’re developing—release cycles of less than a year for new builds of models that cost enormous sums of money—the technology is only going to improve. So, if you think they’re pretty good today, okay, great. Fast forward a short period of time, and your concerns should be higher. We’ve seen them get extremely good. In fact, we’ve seen people create deepfakes, or even just digitally manipulated images—say, of a government ID, a passport, or a driver’s license—that are almost indistinguishable. Actually, the issue is more often that states themselves, in the U.S. for example, don’t implement their own standards consistently. So, in some cases, the fakes are better than the real thing because they purely adhere to the state’s guidance or template, while the real documents often have inconsistencies. It’s wild to see how good those have gotten. But I think the conversation is often in the wrong place. The conversation is often about deepfake detection—asking, “Is this a deepfake? Can I detect it as a human? Can other AI or detection software detect the deepfake?” I think that’s wrong because it’s an arms race. You will always have good actors creating detection software and bad actors creating deepfakes. You never know who’s going to win, and if I had to bet, I’d probably bet on the bad actors being slightly ahead most of the time. To me, the focus belongs somewhere different.

David Puner: [00:17:00] So then, if you’re saying it’s not necessarily about spotting the deepfakes or being able to spot them automatically, what is the solution?

Aaron Painter: I think it has to be about prevention. We have to think more holistically about how we verify who a person really is and get proof that they are who they say they are. To me, part of that is architectural. If you purely do AI versus AI, again, I think the bad actors are going to win. But you can take other elements of technology. In our world, we think a lot about cryptography—the power of cryptography, which we’ve been building for decades now in computer science, and it’s wildly advanced. If you can take cryptography and biometrics—another really useful field of computer science where the hardware, the software, and the models have all gotten wildly advanced—and pair them with AI, you have a much stronger arsenal to prevent deepfakes from being deployed. We think a lot about how to prevent them from being used, as opposed to merely detecting that someone might be using one.
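
One concrete illustration of the cryptographic half of that argument: many modern identity documents (ICAO 9303 ePassports, for example) carry chip data digitally signed by the issuing authority, so a verifier can check authenticity mathematically instead of eyeballing an image that might be deepfaked. The minimal sketch below uses the Python cryptography package; it generates its own stand-in “issuer” key and fictional document data, where a real verifier would use the authority’s published certificates.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Stand-in for the issuing authority's signing key. In reality this is held
# by the issuing state, and verifiers use its published certificates.
issuer_private_key = ec.generate_private_key(ec.SECP256R1())
issuer_public_key = issuer_private_key.public_key()

# Stand-in for the identity data stored on the document's chip (illustrative).
document_data = b"surname=DOE;given=JANE;doc_no=X1234567;dob=1990-01-01"
signature = issuer_private_key.sign(document_data, ec.ECDSA(hashes.SHA256()))


def document_is_authentic(data: bytes, sig: bytes) -> bool:
    """Accept the document data only if the issuer's signature verifies."""
    try:
        issuer_public_key.verify(sig, data, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False


print(document_is_authentic(document_data, signature))         # True
print(document_is_authentic(document_data + b"X", signature))  # False: tampered
```

The design point is that no amount of generative-AI polish can forge a valid signature without the issuer’s private key, which is why a cryptographic check sidesteps the detection arms race entirely.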

David Puner: [00:18:00] Other than deepfakes, what are some other common types of AI fraud that you’re focused on?

Aaron Painter: I think we’re seeing AI fraud get more advanced in various categories, particularly in things like phishing emails and web pages: recreating a company’s web pages and landing-page environments so that the language and the little graphical details of the landing page or email you receive look authentic. The ability to impersonate a company, graphically and in intent, has just wildly skyrocketed. But typically, those are the entry points to get someone to click and complete some sort of credential or identity verification step. For example, we’ve seen a rise in fake websites that say, “Hey, dear person, we are hiring at our company”—a fake landing page for the company—“please fill out this job application. Wow, you seem great. We’d love to hire you. Let’s get you onboarded. Click here and update your banking information so we can pay you.” Obviously, that’s not what’s going to happen; it’s defaming the company’s name, and it’s in no way valid or accurate. But all of that comes back, for me, to the same element. Anything we see online—I feel like we need to know who posted it. I feel there is a responsibility for platforms to verify their users to create safe communities. Even if the users operate with pseudonyms or aliases, I think the platforms have an obligation to verify the users of those platforms so that we can trust the content we’re seeing online.

David Puner: [00:19:00] What other emerging threats and new attack methods are you and your team particularly concerned about and focused on now?

Aaron Painter: We’re really focused on identity-related attacks on people’s accounts. By many measures today, over 90 percent of cyberthreats are identity-related. So, back to your point—the theme of the podcast that comes up a lot here is this concept of trust. The reality is that it’s simply too hard to verify the human behind the screen, whether that’s on mobile, on a phone call, or logging into a website. We have been slow as an industry in creating technology that allows people to verify that a human is who they say they are—and, frankly, to do it in a high-assurance way that’s low friction and ideally re-verifiable, so you can use it on an ongoing basis. I think we’ve been slow in creating those technologies, and it’s created this gap that bad actors are now exploiting through various channels.
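
One way to read “high assurance, low friction, re-verifiable” is the passkey pattern: verify the person thoroughly once, bind them to a key pair held on their device, and make every later re-verification a cheap signature over a fresh server challenge. Below is a hedged sketch of that enrollment-and-assertion round trip, again with the Python cryptography package; it illustrates the general FIDO2/passkey idea under stated assumptions, not Nametag’s actual implementation.

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: runs once, right after the person passes a high-assurance
# identity verification. The private key never leaves the device.
device_key = ec.generate_private_key(ec.SECP256R1())   # held on the device
enrolled_public_key = device_key.public_key()          # stored by the server

# Re-verification: the server issues a fresh random challenge, and the
# device proves possession of the enrolled key by signing it.
challenge = os.urandom(32)
assertion = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

try:
    enrolled_public_key.verify(assertion, challenge, ec.ECDSA(hashes.SHA256()))
    print("Re-verified: same key that was bound at initial verification")
except InvalidSignature:
    print("Re-verification failed")
```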

David Puner: [00:20:00] Really interesting. CyberArk’s 2024 Identity Security Threat Landscape Report found that an overwhelming majority—93 percent—of organizations have experienced two or more breaches due to identity-related attacks, which is just crazy. These organizations also anticipate that the total number of identities will increase more than 2.4 times in the next 12 months. So, this is not a situation that’s calming down at the moment.

Aaron Painter: I think you’re right. Actually, in some cases, I think we’re in an epidemic of attacks at the help desk—of social engineering attacks and people taking over accounts. And I don’t think we’re at the peak. This is a wild, wild outbreak. Between the public disclosures from companies on what’s happening and the volume we’re seeing at private companies, this is out of control right now. I worry about the safety of our digital infrastructure.

David Puner: [00:21:00] Your focus, it seems—and tell me if I’m wrong—is on human identities, not so much machine identities, which, of course, are a big and exploding part of this ecosphere system—whatever eco thing you want to call it. How do machine identities factor into what you do, or do they not?

Aaron Painter: Yeah, to your point, we really try to focus on human identities. Part of it is that, as an industry, we’re not very good at that yet, and it’s the most critical thing for society to get good at. If we can’t protect humans and their accounts, then sure, it would be great to protect non-human accounts too—but, oh my goodness, we’re going to have an enormous crisis across civilized society.

David Puner: [00:22:00] Okay, tell me if I’m wrong here—do you think—I don’t want to put words in your mouth, but do you feel like human identities are more difficult to protect than machine identities?

Aaron Painter: Yes, because it’s often machines that are trying to impersonate humans, or humans that are using machines to impersonate humans. The technology very much exists to protect human identities, but it is probably a harder technical challenge than the non-human side. Most of the technologies I see in the non-human space focus on recognizing what a machine is, where detection alone is often sufficient. Going to the level of verifiable proof is maybe less critical there than what we really need in the human space.

David Puner: [00:23:00] How does trust factor into all of this? I mean, you’ve already mentioned it here, there, and everywhere, but zero trust, of course, is “never trust, always verify.” Do you think it’s easier or more difficult now than, say, a year ago to diligently practice zero trust?

Aaron Painter: I think it’s all the more critical that we do. But to the first part of your question: it’s very difficult to build a relationship without trust. In fact, in 2017, I wrote a book on the concept of loyalty and how you can create loyalty with employees and with customers in your organization—frankly, by listening. Listening is a way to show respect, a path to building trust and having a successful relationship. So, I’m a very passionate believer in this space, and trust, to me, is foundational to building a relationship. A relationship is what a company wants with its employees, and a healthy relationship is certainly what companies want with their customers. It’s just core to how the world works, and it’s core to how business works. If we can’t trust who we’re interacting with, we will have a difficult time building the types of relationships necessary for success.

David Puner: [00:24:00] Are there any common misconceptions around identity you’re hearing these days, whether it be from customers, prospects, or just in general?

Aaron Painter: I think there’s a lot of confusion around “good enough.” “Well, no, I have the person do this—isn’t that good enough?” What we’re seeing is that it’s not good enough if you’re trying to prevent bad actors. “Good enough” can be things like, “I’m sending someone a text message with a PIN code—is that sufficient?” Well, no. It might be low friction, but somebody can call the mobile telco and ask to switch over the phone number because they “got a new phone,” and it’s just as hard for the telco to know who they’re really speaking with before doing what’s called a SIM swap. Or, “Oh no, I’m doing video calls with my employees when they’re locked out and need to reset access.” There are benefits to that, for sure, but it’s incredibly time-consuming, it’s incredibly expensive, and, as we’ve seen in this world of deepfakes, the video calling platforms that we’re so used to and that we trust today really weren’t designed to prevent deepfakes from being used. As easy as it is to select a different camera or microphone, you can select a piece of software that creates a deepfake for you in real time. So, the concept that “I’m doing good enough—the techniques I have today will work”—what we’re seeing is that those techniques are sadly still quite primitive, and the bad actors are taking advantage of that.

David Puner: [00:25:00] So, when you’re talking to organizations at different stages in this whole game, what are the three most important things organizations should know, regardless of where they are in their awareness of identity and staying on top of it?

Aaron Painter: I think the first is that the MFA process you might have put in place for your employees, or even for your customer-facing accounts, is only as secure as the reset or recovery process behind it. I went out recently looking for a new bank, and I told one of them, “I’m very interested in security, obviously,” and asked, “How do you help protect the accounts?” They said, “Oh, we give you YubiKeys as a bank.” I said, “Oh my gosh, that’s great.” Then I asked, “What happens if I get locked out?” They said, “Oh, don’t worry. We just send you a text message.”

David Puner: [00:26:00] Oh.

Aaron Painter: So, what’s the point of the YubiKey? I can just claim that I lost it, and I’ll get my text message. It’s sort of theatrics. That’s the first point: you’re really only as strong as your reset or recovery process. Two, this is a reality today in every MFA platform out there. You are not alone in trying to think about a solution, but you need to recognize that it’s a problem. It’s simply a gap in the technology from those providers, and there are solutions to fill it, but you need to find a way to help people think about the reset and recovery process in addition to whatever MFA technology you have. Third, and this is usually my suggestion: go do a penetration test of your help desk. Penetration tests are common in the world of security as a way to test the infrastructure you’ve built, but you can also do them in the context of social engineering. You can hire people—we speak with a lot of white-hat hackers who go and challenge IT help desks—or you can probably do it yourself. Call your help desk, pretend to be an employee, pretend to be a customer, claim that you’re locked out, and see how good your team is at dealing with that. If you want to add to it, try using a deepfake in the process and see how capable your team is at responding to and preventing that attack. It will give you a sense of, “Hey, are we vulnerable as an organization or not?” This is something you really can’t ignore. This is the cyberattack vector of the moment. Frankly, as a user and as a person in the industry, we need companies and individuals to take more proactive action here to increase the security of our digital infrastructure across all industries.
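
For teams that take up that suggestion, here is an illustrative Python sketch of how the results of such a help desk social-engineering test might be tracked. The scenario names and fields are hypothetical; the metric that matters is simply how often a pretexted reset succeeds, and on the strength of what check.

```python
from dataclasses import dataclass


@dataclass
class HelpdeskTest:
    scenario: str           # pretext used, e.g. "lost phone", "deepfake voice"
    reset_granted: bool     # did the agent reset access for the tester?
    verification_used: str  # the check the agent actually performed


def summarize(tests: list[HelpdeskTest]) -> None:
    """Report how often a pretexted reset succeeded, and on what check."""
    granted = [t for t in tests if t.reset_granted]
    print(f"{len(granted)}/{len(tests)} pretexted resets succeeded")
    for t in granted:
        print(f"  VULNERABLE: '{t.scenario}' passed with '{t.verification_used}'")


# Hypothetical results from three test calls to an internal help desk.
summarize([
    HelpdeskTest("lost phone, knows employee ID", True, "security questions"),
    HelpdeskTest("deepfake voice of a known employee", False, "manager callback"),
    HelpdeskTest("urgent executive travel pretext", True, "caller ID match"),
])
```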

David Puner: [00:27:00] And, of course, that’s keeping in mind what exists out there right now in the world of attack methods and cyberthreats and all that kind of stuff. Looking ahead—we always try to break out the crystal ball here a little bit, though nobody exactly has one just yet—what new threats or risks do you foresee organizations facing in the next few years, and how can they prepare for those challenges now?

Aaron Painter: Sadly, one of the biggest areas where we’re seeing a significant rise in attacks is healthcare. It corresponds with a lot of government warnings—going back really to last November, pretty aggressively, almost on a monthly basis—from different government regulatory agencies, particularly in the U.S., about targeted attacks on healthcare organizations, particularly through social engineering. I worry that our healthcare data—and actually access to our healthcare infrastructure and the machines and technology that power it—is going to become like the last four digits of people’s Social Security numbers and the answers to those security questions that are now just commonly available to bad actors. Prices are higher on dark web and bad-actor trading forums; people are paying more for healthcare data. I’m worried that the next wave of real data exposure we’re all going to experience is going to be related to healthcare information. That’s one of my worst fears right now.

David Puner: [00:29:00] That is definitely something to be fearful of. Aaron, thank you so much for your time. I know we’re probably coming up almost on lunchtime there in Seattle. If there was just one last word you wanted to have here, what would it be? Go.

Aaron Painter: No, I think we’ve covered a lot of topics. It’s important to educate yourself on deepfakes today—to be aware that they are a real thing and they are causing real impact. You can debate, “Oh, I don’t know how many deepfake attacks I’m going to get on my help desk.” It doesn’t matter. If you get one, it can cause significant account, financial, or reputational damage. It’s something people are using. So, educate yourself on deepfakes. And please, please think about how you’re protecting the accounts in your organization. It’s a fundamental security risk, and there are solutions to reduce that risk, but it takes proactive effort.

David Puner: [00:30:00] Aaron Painter, CEO of Nametag. Thank you so much for coming on to Trust Issues.

Aaron Painter: It was a pleasure. I appreciate it.

David Puner: Thanks for listening to Trust Issues. If you like this episode, please check out our back catalog for more conversations with cyberdefenders and protectors. And don’t miss new episodes—make sure you’re following us wherever you get your podcasts. And let’s see—oh, oh yeah—drop us a line if you feel so inclined. Questions, comments, suggestions—which, come to think of it, are kind of like comments. Our email address is trustissues, all one word, at cyberark.com. See you next time.