Many years ago, the philosopher Phaedrus said, “Things are not always what they seem; the first appearance deceives many; the intelligence of a few perceives what has been carefully hidden.”
He couldn’t possibly have imagined today’s world, yet his warning aptly describes deepfakes, one of the greatest threats of modern times.
As AI advances, digital disinformation is blurring the lines between fact and fiction. We witnessed this numerous times throughout 2024, particularly during critical election cycles around the world.
Threat actors are embracing deepfake tactics in droves as AI-based tools become cheaper and more accessible. This recently prompted the FBI to issue a warning that criminals are “exploiting GenAI to commit fraud on a larger scale, which increases the believability of their schemes.” These deepfake attacks are also becoming more brazen. For instance, this September, threat actors held a videoconference with a U.S. senator by posing as a Ukrainian official. Fortunately, the deepfake was quickly outed before damage was done, but the incident underscores how convincing and potentially damaging such operations can be.
Deepfakes are increasingly infiltrating companies and putting them at risk. As we enter 2025, heightened vigilance is critical.
Three Enterprise Consequences of Deepfakes
For businesses, deepfakes pose significant risks in the form of financial loss, diminished trust and corporate espionage.
1. Financial Loss
Last year, the team at CyberArk Labs used a clip of my voice from a podcast to create a pretty convincing AI-generated deepfake to demonstrate a voice phishing, or vishing, spoof. It took them less than five minutes to do this. Fast-forward to today, and it’s even easier for threat actors to create highly convincing audio, video and image content. Free or inexpensive and readily available GenAI tools can mimic voice patterns and facial expressions with astounding accuracy, making it increasingly difficult to separate truth from fiction.
Using such techniques, threat actors can disguise themselves as high-profile company executives to dupe employees into fraudulent activity, such as transferring funds. In one recent high-profile case, cyberattackers made off with $25 million after a synthetic video call deceived an employee. This is an extreme example, but these incidents happen more often than you’d think. Fifty-three percent of businesses in the U.S. and U.K. have been targets of deepfake-powered financial scams in the last year, according to a study by accounts payable software firm Medius. Forty-three percent of these organizations have fallen victim to these attacks.
Deepfake attacks can impact corporate financials in other ways. For instance, last year, a deepfake image of smoke rising from a corporate building sparked a panic that rattled markets and set off a sell-off.
2. Diminished Trust
Some deepfake attacks go deeper than financial fraud. For example, a manufactured speech or media interview impersonating a corporate executive could contain inflammatory remarks or false information, damaging hard-earned reputations and undermining trust.
In May, the CEO of the world’s largest advertising group was the target of an elaborate deepfake scam involving an AI voice clone. The threat actors reportedly set up a WhatsApp account impersonating the CEO using a publicly available photo, then used it to arrange video meetings with other company executives. Fortunately, the scam was detected quickly, but had it not been, the fraudsters could have easily spread disinformation and sparked distrust throughout the global organization.
In another example, a Baltimore-area school principal’s career was nearly destroyed when an AI-manipulated audio clip emerged in which he appeared to make derogatory comments. One version of the clip accumulated almost two million views within hours of being published, the BBC reported. He began receiving abusive messages and even death threats before it was determined that the recording was completely fake, created by a disgruntled employee. As this story shows, disinformation spreads rapidly. In fact, an MIT study found that falsehoods spread “farther, faster, deeper and more broadly than the truth.” False news reports are 70% more likely to be retweeted than accurate news stories, and they reach the first 1,500 people six times faster.
3. Corporate Espionage
The HR department is especially vulnerable to deepfake attacks because it has access to extensive personal and corporate data. HR also offers a potential doorway—quite literally—into the organization for threat actors seeking sensitive information and company secrets.
This summer, reports of a mind-boggling deepfake emerged. The security awareness training company KnowBe4 conducted four separate video interviews with a job applicant who matched the photo submitted with their application. Background checks and other pre-hiring steps were completed without incident, but the applicant had used a stolen, AI-enhanced photo. After onboarding, the new hire deliberately installed malware on their company-issued device, which the organization quickly detected.
When the company shared details of the suspicious activity with authorities, it was confirmed that the new hire was a North Korean operative. Fortunately, the company didn’t suffer any damage or loss from the incident and, in an admirable move, turned the experience into a powerful learning opportunity for everyone.
How to Deepen Deepfake Defense
Since there’s no foolproof way to catch a deepfake, organizations must take a multi-faceted defense approach. I encourage you to check out these proactive deepfake prevention strategies in addition to the following best practices.
- Think critically and stay alert. Any employee could be a target for an opportunistic attacker, whether that attacker is using deepfakes or other methods. Employee awareness training helps individuals recognize social engineering tactics, including deepfakes and the telltale signs of manipulation. It’s imperative that everyone in the company stays alert, thinks critically and is skeptical of any unexpected or overly urgent requests. When it comes to any questionable content, the motto must be “never trust, always verify.”
- Monitor continuously and guard sensitive access. The potential consequences can be catastrophic if aspects of an employee’s digital identity are stolen or faked. Continuously monitoring for suspicious identity-related threats and guarding sensitive access are critical steps for mitigating overall risk exposure.
- Verify content. Digital forensic tools can analyze videos and identify signs of manipulation, helping to validate their authenticity. Cross-referencing questionable content against trusted sources can also help confirm it is genuine.
- Stay transparent. The KnowBe4 incident highlighted the importance of proactive information-sharing. The more we share, the more we’ll know and the better prepared we’ll be to combat disinformation campaigns that come our way.
Many things are no longer what they seem. Deepfakes represent a formidable threat to the enterprise that will continue to grow in the year ahead. It’s up to all of us to look deeper and, in the relentless pursuit of truth, perceive what has been carefully hidden.
Omer Grossman is the global chief information officer at CyberArk. You can check out more content from Omer on CyberArk’s Security Matters | CIO Connections page.