As a former black hat hacker, social engineering and phishing are not new concepts to me. I used these techniques in my previous life, so I know how effective they are. Having spent years immersed in the intricacies of social engineering, I’m always looking for new twists on this age-old technique. Staying current these days means getting into the minds of attackers rather than applying the craft myself, so you can find me at places like the Social Engineering Village at the DEF CON Hacking Conference in Las Vegas.
According to the CyberArk 2024 Threat Landscape Report, nine out of 10 organizations have suffered a successful identity-related breach due to a phishing or vishing attack. This staggering figure indicates that social engineering is not going away and will likely worsen. The attack vector is particularly dangerous because it targets the human element; often, no software controls are in place, leaving safety to the under-duress choices of the person under attack.
Today, as a security evangelist, I don’t get the luxury of discovering new techniques by reading about them on blogs; I’m the one people expect to write those blogs. So I was shocked when a non-technical friend recently shared a terrifying experience. Imagine waking up in the middle of the night to the voice of a loved one claiming they were kidnapped and needed money for their safe release. That’s the scenario my friend faced.
A Personal Encounter with Modern Fraud
My friend was woken up at 4 a.m. by a call—on the other end of the line, her niece was crying and screaming in terror, claiming she had been kidnapped. A male voice then made threats of sexual assault and human trafficking, warning that these things would happen if the call was disconnected.
Though frozen with fear, my friend had the presence of mind to cover the microphone with her hand and have her husband call her sister to check on her niece. When they confirmed the niece was safe in bed, my friend hung up on the caller.
As a former black hat, I recognized the fraud. The attacker had engineered a situation in which verifying the claim was difficult, making the attack hard to counter while it was underway.
The Mechanics Behind Deepfake Fraud
One of the newest deepfake frauds combines past successes with modern innovation. Bad actors scour social media for videos containing a potential victim’s voice. With just three seconds of audio, a deepfake rendering application can generate a convincing imitation of that person’s voice. The call is usually made late at night or early in the morning, with the loved one claiming to need money for various dire reasons, such as being kidnapped or needing bail.
The evolution of this attack vector has been rapid due to the commercial availability of deepfake apps. Deepfake fraud is a global problem, with each country trying to find effective controls. In the U.S., the Federal Communications Commission (FCC) has released a consumer guide on this issue.
Deepfake Technologies: Old Tricks, New Tools
Deepfake technology isn’t new but has mostly targeted corporations. In 2019, the CEO of a UK-based energy firm fell victim to a deepfake audio fraud, transferring €220,000 to a fraudulent account. Attackers used AI to mimic the chief executive’s voice, making the scam nearly indistinguishable from reality.
A similar incident in Hong Kong involved a bank worker who authorized a significant financial transfer after a deepfake video call convincingly impersonated an executive. These cases show how bad actors weaponize AI-driven tools to deceive individuals and organizations.
Uncovering the Truth: How to Identify Fraud
As I mentioned, there are often no software controls for phishing or vishing, and security lies with the individual. Living the fundamentals of identity security as a life practice can provide protection beyond the digital realm.
Bad actors will continue exploiting fear and emotions to deceive and extort victims. These tactics create urgency and pressure, targeting the victim’s sense of responsibility or attachment to loved ones. This strategy preys on the instinct to act quickly in emergencies, leaving little time to assess the situation critically.
In such moments, when the voice on the phone sounds like a loved one, it is the worst possible time to have to decide whether the call is fraudulent. Hesitating in an actual emergency could have severe consequences, yet responding without verification also carries significant risks. Pre-established verification methods, such as a shared password or passphrase, can provide clarity and protection in these high-stakes moments. Striking a balance between immediate action and caution is crucial, especially when two out of three people can’t distinguish an authentic voice from a cloned one.
Practice Zero Trust—never trust, always verify.
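For readers who think in code, the habit translates almost directly. The minimal sketch below is purely illustrative, assuming a hypothetical family passphrase agreed on in person; it is not a CyberArk tool or a real-world control, just a model of the verify-before-trusting step, with a constant-time comparison standing in for the discipline of demanding proof instead of trusting a familiar-sounding voice.

```python
import hmac

# Illustrative only: a hypothetical, pre-agreed family passphrase.
# In practice it is agreed on in person and never shared over the
# channel being verified.
AGREED_PASSPHRASE = "blue heron at midnight"  # example value


def caller_is_verified(spoken_passphrase: str) -> bool:
    """Return True only if the caller can repeat the pre-agreed passphrase.

    hmac.compare_digest performs a constant-time comparison; the point of
    the model is that a familiar-sounding voice proves nothing by itself.
    The caller must demonstrate knowledge only the real person would have.
    """
    return hmac.compare_digest(
        spoken_passphrase.strip().lower().encode(),
        AGREED_PASSPHRASE.encode(),
    )


if __name__ == "__main__":
    print(caller_is_verified("Blue Heron at Midnight"))  # True: verified
    print(caller_is_verified("please, it's really me"))  # False: not verified
```

The same logic applies off-screen: the passphrase is the verification, and the emotional pressure of the call is exactly the signal to ask for it.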
Adopting a Security Mindset
Security is a state of mind, not a state of being. Every advancement in communications has fallen victim to abuse, and this trend shows no signs of stopping. Living an identity security-conscious life goes beyond the digital; it’s all-encompassing. The overlap of digital security principles with the physical world expands the protections we already know and trust.
Len Noe is CyberArk’s resident Technical Evangelist, White Hat Hacker and Transhuman. His first book, “Human Hacked: My Life and Lessons as the World’s First Augmented Ethical Hacker,” was published in 2024.