Cybersecurity has been my world for years. I’ve worked hard to build my reputation, moving from the hacker underground to a trusted boardroom expert. I take my role seriously because trust and credibility define success in this field. But I recently discovered that even with all my experience, I wasn’t immune to one of today’s fastest-growing threats—deepfake technology.
Famous Faces and Everyday Victims of Deepfakes
What do Taylor Swift, Piers Morgan, countless others and I have in common? We’ve all been victims of deepfake technology. That’s right: I, Len Noe, transhuman, podcaster, author, futurist and white hat, discovered my very first deepfake (at least, the first I know of). The shock and the feeling of being violated are hard to express.
While conducting my regular media scan, I came across a video of myself—or so I thought. I immediately noticed something was off.
The preview showed me talking and moving my head, but that was impossible. And the outfit in the video? I had only worn it once for a bio photo—a still image, not a video.
I wasn’t happy, but it could have been worse. While I have some recognition in the cybersecurity community, I never imagined someone would exploit me this way. I had fallen into the same trap I constantly warn others about: I failed to see how my reputation as a security expert and my public speaking appearances could be harvested by deepfake technology and exploited to promote services and products without my consent.
Deepfake Technology: A Low-Tech Threat with High Impact
One of my favorite scenes in The Matrix is when Neo uploads martial arts fighting styles into his brain. I had no idea that similar capabilities existed until I saw myself speaking fluent French. The problem? I don’t speak French.
Watching a version of myself speak a language I didn’t know in a voice that wasn’t mine was surreal. I won’t pretend I was instantly outraged—I was stunned. I watched it three or four times before the shock wore off. My first real thought? This is so low-tech. Then came the internal dialogue: Nobody will believe this! But what if they don’t know me? What if they do think it’s me?
Since I don’t speak French, I had no idea what was being said. From the images, I assumed it referenced me being a transhuman, but I had to wait for a French-speaking coworker to translate. Those minutes felt like hours.
It turns out that a company selling cybersecurity services had used my likeness without my permission. They didn’t even respect me enough not to put words in my mouth. The cybersecurity community isn’t huge; how did they think this wouldn’t get back to me? Fortunately, while distasteful, the video didn’t contain harmful statements or require damage control. But it got me thinking about people who aren’t so lucky. Deepfakes are used for cyberbullying, sexploitation and industrial espionage, to name just a few malicious uses, and the result is always the same: pain, humiliation and, ultimately, loss.
The Dangers of Deepfakes: Beyond Just a Fake Video
Celebrities and public figures have trademarks, copyrights and rights of publicity to protect their brands. But for the rest of us? We’re on our own. This may open the door to individual brand insurance, because I’m living proof that deepfakes can happen to anyone. You don’t have to be famous or powerful to be a target.
I was lucky. But what if I hadn’t been? The video mentioned my employment at CyberArk. If it looked real, I’d have to explain why I appeared to be violating my employment agreement. Worst case? The deepfake could damage my reputation—and, by association, CyberArk’s.
This fake version of me could have said anything: derogatory remarks about vendors, governments or individuals. What if it had gone beyond marketing? What if it had made false claims about security controls? Someone might believe it. Trust in experts takes years to build and seconds to destroy.
Employers must now recognize that their public-facing teams carry brand-new identity vulnerabilities. Just as phishing campaigns strengthened security awareness around email, organizations should prepare a deepfake response plan so there is an established process for mitigating these attacks moving forward.
How to Protect Yourself from Deepfake Attacks
After the initial shock, the next step is damage control. If you’re the victim of a deepfake, here’s what you can do:
1. Remember, this isn’t your fault. You are the victim; you didn’t ask for this to happen. It’s not going away, and your options are to deal with it or let it deal with you.
2. Determine the legal implications. Some deepfakes are unethical but not illegal. In many places, taking someone’s photo in public without consent isn’t against the law, and plenty of deepfake manipulation falls into a similar gray area. And while posting an image or video publicly doesn’t actually place it in the public domain or void its copyright, in practice those files can be scraped and manipulated by anyone, and enforcing your rights is difficult.
3. Know when to involve authorities. Laws pertaining to deepfake crimes vary between countries. My advice is to research local regulations. From a U.S. perspective, deepfakes cross into criminal activity if they involve:
- Fraud or identity theft (e.g., financial scams, fake job interviews)
- Defamation or harassment (e.g., cyberstalking, blackmail, doxxing)
- Non-consensual content (e.g., revenge porn, deepfake porn)
- Threats or incitement (e.g., impersonating officials, spreading false emergency alerts)
- Political or election manipulation (e.g., deceiving voters, impersonating candidates)
- Impersonating law enforcement or government officials
Beyond the instances specified above, poor judgment isn’t criminal, regardless of the emotions attached. Bad actors have weaponized social media, and we are supplying the ammunition. The best way to avoid being victimized is to stop giving the public access to your photos, videos and audio. Start by making sure social media posts are visible only to your contact list, not the general public. Next, review your friends list and remove anyone who isn’t a close contact. You know this, but I’ll say it anyway: not everyone on social media is your “friend.”
Unfortunately, I don’t see this as a viable solution.
4. Report it, even if it’s not criminal. Contact the hosting site and follow its content removal process. In my experience, this process is neither fast nor reliable, but it’s one of the few options a deepfake victim has.
Take Control of Your Narrative
In addressing my situation, I had to follow many of the paths I described above. First, I reported the post to the provider through the usual channels. Because CyberArk was also mentioned, I informed our legal team, and they reported the post and drafted legal paperwork to be served. This was the extent of what I could do.
Sometimes, the best defense is telling your story—just like I’m doing now. Addressing the deepfake publicly can shut down questions before they start.
Once reported, your best option may be legal action. A cease-and-desist letter could help, but realistically, deepfake takedowns remain a gray area.
Moving Forward: Staying Vigilant in a Deepfake World
This was a wake-up call. I hope my experience helps others prepare for a world where deepfakes target everyone, not just the rich and famous.
This is our new reality. One day, you might watch your deepfake say something shocking. Like me, you may not see it coming. I undervalued my worth, as many do. I forgot we all have value to someone—even if we’re not famous.
By staying security conscious, I found the deepfake quickly and took action. I didn’t know how to react at first, but now I do. The future is uncertain, but we can manage the risks if we remain vigilant and prioritize identity security.
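For anyone who wants to make that vigilance routine, even a simple media scan can be automated. Below is a minimal, illustrative Python sketch, not a definitive tool, that polls a Google Alerts RSS feed for mentions of your name. The feed URL is a placeholder (create an alert at google.com/alerts and choose “Deliver to: RSS feed” to get your own), and the sketch assumes the third-party feedparser library is installed.

```python
# Illustrative sketch only: poll a Google Alerts RSS feed for new
# mentions of your name. FEED_URL is a placeholder; generate your own
# by creating an alert and choosing "Deliver to: RSS feed".
import feedparser  # third-party library: pip install feedparser

FEED_URL = "https://www.google.com/alerts/feeds/YOUR_FEED_ID"  # placeholder

def scan_mentions(feed_url: str) -> list[tuple[str, str]]:
    """Return (title, link) pairs for every entry in the alert feed."""
    feed = feedparser.parse(feed_url)
    return [(entry.title, entry.link) for entry in feed.entries]

if __name__ == "__main__":
    for title, link in scan_mentions(FEED_URL):
        print(f"- {title}\n  {link}")
```

Run on a schedule (cron, a CI job or similar), a script like this surfaces new mentions quickly, and catching an unexpected video early is exactly what gives you time to act.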
Len Noe is a technical evangelist at CyberArk. His first book, “Human Hacked: My Life and Lessons as the World’s First Augmented Ethical Hacker,” was published in 2024.