March 28, 2025

EP 4 – AI-Powered Fraud: Redefining the Identity Threat Landscape

Imagine receiving an urgent email from your bank that looks perfectly legitimate. It warns you of a suspicious transaction and prompts you to verify your identity. You hesitate but click, and suddenly, your credentials are compromised. This scenario, crafted by AI-powered fraud-as-a-service, is happening now.

In this episode of the Security Matters podcast, host David Puner is joined by Blair Cohen, Founder and President of AuthenticID, to discuss the evolving identity threat landscape. They explore the rise of synthetic fraud, the role of biometric authentication and how AI-driven security is reshaping the fight against cybercrime. Blair shares insights on the challenges of detecting deepfakes, the advancements in biometric authentication and the impact of generative AI on security measures.

Tune in to learn how security leaders can stay ahead in this rapidly changing environment and what organizations can do to prepare for the next generation of cyberthreats.

David Puner: You are listening to the Security Matters podcast. I’m David Puner, a senior editorial manager at CyberArk, the global leader in identity security.
Imagine this — you get an urgent email from your bank. It looks like the real deal. Perfect grammar, flawless branding, seemingly legit sender address, no blatant red flags. It warns of a suspicious transaction and asks you to verify your identity immediately. You hesitate, but the email is convincing. You click. At that moment, a breach begins.

Your credentials are compromised. Your data, exposed. But here’s the twist — this wasn’t crafted by a human. It was built by AI-powered fraud-as-a-service capable of generating perfect phishing campaigns, realistic deepfake voices, and even synthetic identities.

This scenario isn’t some distant future. It’s happening now. And as cybercriminals refine their techniques, even seasoned experts struggle to separate real from fake.

So — how do security leaders stay ahead?

Today, we’re joined by Blair Cohen, founder and president of AuthenticID, to break down the evolving identity threat landscape, why synthetic fraud is skyrocketing, and how biometric authentication and AI-driven security are reshaping the fight against cybercrime.
Let’s dive in.

David Puner: Blair Cohen, founder and president of AuthenticID — welcome to Security Matters. Thanks so much for coming onto the podcast.

Blair Cohen: Thank you so much for having me here today, David. Glad to be here.

David Puner: Really excited to have you. Really excited to talk to you today.
To start things off — when did you first start thinking about identity, and how did you wind up founding AuthenticID? What inspired you to focus on identity verification and fraud prevention?

Blair Cohen: It’s actually a pretty interesting story here. I’m a serial entrepreneur. This is my third or fourth company that I’ve created as an adult. The previous one was a background checking company — a background screening company.
When you do a background check, part of the process is performing identity verification. Historically, identity verifications were done by asserting your Social Security number. I’d take your SSN, run it through one of the credit bureaus, and see if your name was associated with that SSN. If it was, then your identity was considered verified.

David Puner: Mm-hmm.

Blair Cohen: But in the early 2000s, I started seeing something strange. When we’d run an SSN, not just one name would come back. Sometimes we’d see 20, 30, even 50 different names associated with it — and it kept getting worse.
By 2006, this was a pretty regular experience. That told me something was very wrong with the way we were proving identity. It just didn’t make sense anymore. So many people had access to your Social — it wasn’t a private identifier anymore.
That was the impetus for creating AuthenticID. I realized in the mid-2000s that we needed a much better way to prove identity — one that couldn’t be based on data. It can’t be based on what you know — like your car’s make and model or your pet’s name. We needed something that was much harder for fraudsters to obtain.

Blair Cohen: What’s the gold standard for proving identity across the globe? Government-issued IDs. Anywhere in the world — if you need to open a bank account, you show a government-issued ID.
What else can’t fraudsters really steal? Your face. People take pretty good care of their identity documents, and your face is uniquely yours.
So instead of using data, we now use what you have — your government-issued ID — and what you are — your biometric identity. That’s what led to creating AuthenticID.

David Puner: You mentioned at the beginning of that answer that you’d founded a number of companies as an adult. What companies did you found as a pre-adult?

Blair Cohen: Oh, just small businesses — snow shoveling and lawn care and things like that. Yeah, just small businesses.

David Puner: Life skills.

Blair Cohen: Life skills, yes.

David Puner: Good ethic.
What do you see as the most critical identity security concerns facing organizations today, and how has the evolving threat landscape over the last year or so reshaped or solidified these threats?

Blair Cohen: Gosh, to nail down just the single biggest problem that exists today would be hard. We’re moving very quickly to biometric authentication. And there are plenty of flaws out there. There are bad algorithms, exploitable vulnerabilities, and ways to inject images into workflows — all of which exist in many organizations’ workflows today.
In most cases, people are manually reviewing transaction or account-opening data — information that’s been asserted. With GenAI and the quality of audio and visual signals now possible, humans simply can’t detect this stuff.
You need to deploy very sophisticated liveness detection and biometric matching algorithms to protect yourself from being defeated.

David Puner: I should point out that we’re, of course, talking about human identity here — a big piece of the puzzle — but we don’t want to confuse it with machine identity.
You mentioned generative AI. How has GenAI transformed authentication challenges and authentication itself?

Blair Cohen: It’s enabled the creation of sophisticated identity documents and entirely fake but realistic faces. You can generate a completely new person — someone who never existed — just by asking the AI to do it.
Today, you can synthesize a voice from as little as three seconds of audio.
So GenAI has changed the game. Humans can no longer be relied upon to look at an image or listen to a voice and say whether it’s authentic. That’s over.
You now need tools — sophisticated detection capabilities and algorithms. Manual review just doesn’t cut it anymore.

David Puner: Have you ever been fooled by a deepfake?

Blair Cohen: Yeah. It’s sad to say — I’m considered one of the world’s experts at this, and I just took a test yesterday. They showed me 10 images of deepfake faces. I got five out of 10.

David Puner: Wow.

Blair Cohen: It’s kind of what I expected. In a real-world environment, people are only given a short period of time to examine an image. And deciding whether that image is authentic, deepfake, synthetic, or something else is really hard.
Let’s say you’ve got 10 seconds. That’s not enough time to catch the subtle GenAI-created details.
The deformities in ears or eyes that we used to spot? Those are gone now. And the tools are accessible. You no longer have to be a tech expert with Photoshop skills. You just chat with the model, tell it what you want, and it does it — often better than a human — in seconds.

David Puner: You mentioned “synthetic.” So, what is synthetic identity, and what are the authentication difficulties associated with it — beyond what we’ve already discussed?

Blair Cohen: Synthetics are a tremendous problem — and they’re not new. Synthetic identities have been around for a long time.
It’s kind of hard to understand why they even exist, but it really has a lot to do with the Fair Credit Reporting Act — the FCRA. If someone submits a “new” identity, the credit bureaus often create a brand-new file.
So if you slightly modify your name — say, David L. Puner instead of David M. Puner — and change one digit in your Social Security number, the bureaus may create an entirely new persona.
They’ll stitch together pieces of your identity and someone else’s, creating a whole new synthetic file.

Blair Cohen: Generally, the synthetic won’t get credit immediately. But the fraudster will build it up over time. They might start with a prepaid card, slowly build credit, and then — all at once — perform a “bust out.”
That’s when they apply for four or five credit cards, get approved, max them all out, and disappear.
And since the person never existed in the first place, there’s no one to go after. That’s the problem. It’s a fake person — a synthetic — and they’re gone.

David Puner: Is this something that threat actors can do at scale? Or is it more of a one-off tactic?

Blair Cohen: No — it’s absolutely scalable. They can maintain and grow synthetic identities over time.
There are websites out there — I won’t name them here — where you can generate an entirely new identity with one click. Name, address, geolocation, email, phone — everything.
Click a button, get a new identity. Click again — another one.
So yes, they can absolutely do it at scale, and they do.

David Puner: Back to this big piece of the puzzle — what are the latest advancements in biometric authentication, and how do they impact security?

Blair Cohen: From a biometric standpoint, all the top algorithms perform quite well today.
There used to be criticism that biometrics didn’t work equally across demographics — skin tone, gender, age — but that’s no longer true.
With facial recognition, the top 20 algorithms now perform equally well across all demographics.

Blair Cohen: The bigger challenge today is spoofing. Accuracy isn’t the issue — stopping spoofing is.
There’s plenty of video of me out there. It wouldn’t be hard for someone to grab a video, still image, or even create a mask of my face.
If they pair that with a fake ID and attempt to authenticate, the match might succeed.

Blair Cohen: That’s where liveness detection becomes critical. We need to detect whether we’re being spoofed — whether it’s a real person in front of the camera, not a static image or pre-recorded video.
We use “presentation attack detection” for that.
Level 1 and Level 2 certifications exist, and Level 2 is robust enough to stop most spoof attempts.

Blair Cohen: But fraudsters are smart. They’ve figured out that we rely on device sensors — like the ones in your phone or computer — to catch those presentation attacks.
So now, they bypass the camera altogether by installing virtual cameras. These virtual cameras feed synthetic images directly into the system, bypassing the sensor data we rely on.

Blair Cohen: When they inject these signals through a virtual camera, we don’t receive the same cues that a real camera would give us. It circumvents the presentation attack detection altogether — and that’s hard to defend against.
It’s a growing problem.

Blair Cohen: Everyone’s moving toward frictionless experiences. No one wants to download capture software. They want it to work via a web browser.
But the problem is — on the web, anyone can open developer tools and see exactly what’s happening. That makes it easier for fraudsters to inject fake content at the right moment and bypass security entirely.
So ironically, in the pursuit of frictionless experiences, we’ve opened up some serious vulnerabilities.

David Puner: A lot to consider, obviously — and that’s why we need the help of GenAI.
So how is GenAI — or AI in general — being used to enhance authentication processes and improve security?

Blair Cohen: AI is great at spotting anomalies that humans would miss. Think about a large enterprise — they might have thousands, even hundreds of thousands of transactions happening at once.
It’s impossible for a person — or even a team — to sift through all that data in real time.

Blair Cohen: But AI can.
It works quickly and accurately, and when something unusual happens — like an anomaly in a transaction pattern — it can raise a flag instantly.
So in that sense, AI is extremely valuable in protecting against fraud and authentication attacks.
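The anomaly-flagging idea Blair describes can be illustrated with a minimal statistical sketch. This is not AuthenticID’s method — it is a generic z-score check, where transactions whose amounts deviate far from the recent mean get flagged for review; real systems use far richer features and models.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of transactions whose amount deviates more than
    `threshold` standard deviations from the mean (a basic z-score check).

    Illustrative only: production anomaly detection would use many more
    signals (velocity, device, geolocation) and a trained model."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:  # all amounts identical: nothing stands out
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# A $5,000 charge amid small purchases stands out immediately:
print(flag_anomalies([20, 25, 22, 21, 24, 5000]))  # → [5]
```

The point of the example is the scale argument from the conversation: a loop like this runs over hundreds of thousands of transactions in well under a second, which is exactly the sifting no human review team can do in real time.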

David Puner: Security practitioners are using AI — but threat actors are, too.
If we’re looking at this as an AI vs. AI kind of scenario, how is AI enhancing cybersecurity measures while also enabling attackers?

Blair Cohen: It’s a double-edged sword.
The challenge for us — the “good guys” — is that our hands are tied in ways the attackers’ hands aren’t.
We’ve got to follow strict privacy and compliance rules. We can’t share certain data or collaborate freely between organizations.

Blair Cohen: But attackers can.
They’re not bound by any regulations. When one discovers an exploit, they can instantly share it across the entire fraud community and launch coordinated attacks at scale.
We don’t have that kind of latitude. So, they often stay one step ahead.

David Puner: So then — how do the good guys stay a step ahead of the bad guys?

Blair Cohen: I’m not sure that we do, David.
Like I said, our hands are tied. We can’t collaborate the same way. We can’t share exploits and threat data across enterprises the way bad actors can.
Until that changes — until our hands become untied — I think they’ll always be a step ahead.

David Puner: So that’s where it comes down to defense and best practices.

Blair Cohen: Correct.

David Puner: Looking across various sectors — like finance, government, and telecom — how do authentication challenges differ from one sector to the next?

Blair Cohen: It doesn’t just vary by sector — it varies by use case within each sector. But let’s take a high-level look.

Blair Cohen: In government, the goal is to serve all citizens — including those with accessibility issues. So they tend to offer more flexibility in their authentication workflows.
For example, a government agency might allow you to upload an image of your ID or a selfie. That’s not a best practice — but it’s done to be inclusive.

Blair Cohen: In contrast, financial services are far stricter. They won’t let you upload an image — they make you go through a camera-based, real-time workflow.
They’re all about minimizing risk and ensuring that strong defenses are in place up front.

Blair Cohen: Telecom is different again. Their focus is speed and volume — they want to get as many people through the flow and issue as many phones as possible.
So the priorities — and the vulnerabilities — vary widely by sector.

Blair Cohen: And even within sectors, the use case matters.
You’re going to treat a customer walking into a physical store differently than one interacting with you online.
In person, there’s some assurance that you’re dealing with a real human — the employee can see you. But digitally? You want to see the face, the voice, the identity document.
You need more signals to trust what’s coming through the screen.

David Puner: Sort of like when you’re buying a microphone to be on a podcast — a two-and-a-half-hour retail experience.

Blair Cohen: It was awful. They couldn’t get their computers working once I got there.

David Puner: Well, we thank you again for enduring the real-life retail experience for the sake of Security Matters. Now, when it comes to regulatory compliance and data privacy, what are the major compliance issues that you’re helping customers navigate?

Blair Cohen: There are so many new regulations out there, David. It’s kind of hard to say where to start — but I’ll give you one example.

Blair Cohen: In California, you’ve got CCPA — the California Consumer Privacy Act. And under that, a California citizen has the right to be forgotten.
So, for example, if you’re an AT&T customer, you can call them up and say, “I want you to delete all the data you have on me.”
And AT&T legally has to comply.

Blair Cohen: But how do they know who’s calling?
How do they verify that it’s really you making that request — and not someone trying to delete someone else’s data?

Blair Cohen: That’s where our technology comes in.
We allow people to image their government-issued ID and capture a selfie. We then authenticate both — making sure the document is real and that the person matches the ID.
That way, AT&T knows it’s the real Blair Cohen making the request — and not a fraudster.

David Puner: Thanks for those examples — really interesting. So then, what promising biometric authentication innovations might we see emerge in the next few years?

Blair Cohen: Oh goodness — I think we’re going to see the liveness detection problem finally solved.

Blair Cohen: Right now, spoofing attacks — like holding up a photo or video of someone else’s face — are largely solved with Level 2 presentation attack detection.
But injection attacks — where someone bypasses the camera entirely — that’s still a big challenge.

Blair Cohen: I think we’re going to have to move toward more rigorous capture processes.
We may even need to stop using browsers altogether and make people download apps again — which causes friction — but it might be necessary.
Because in browser-based flows, attackers can open dev tools and inject fake content pretty easily.

Blair Cohen: As spoofing continues — and fraudsters keep finding ways around biometric and liveness detection — we’re going to see a combination of biometric modalities.
Today, using just your face might be enough. But going forward, I think we’ll start to see a pairing of face and voice authentication.

Blair Cohen: You’ll be asked to say a random, dynamic phrase.
Then, the system will use lip-reading to confirm your mouth movements match the phrase, while also checking that the voiceprint matches what’s on file — and that the face matches the one stored.
This combo — random phrase, face match, and voice match — will be very hard for fraudsters to defeat.
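The dynamic-phrase challenge plus multi-modality decision Blair predicts can be sketched as follows. The word list, scores, and thresholds are invented for illustration; real systems would derive the lip-sync and voiceprint scores from dedicated models.

```python
import secrets

# Hypothetical challenge vocabulary; real systems would draw from a
# much larger, phonetically diverse pool.
WORDS = ["harbor", "violet", "summit", "ledger", "canyon", "marble"]

def make_challenge(n_words: int = 3) -> str:
    """Generate a fresh random phrase so a pre-recorded deepfake cannot
    know it in advance (a dynamic challenge-response)."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def multimodal_accept(face_score: float, voice_score: float,
                      lip_sync_score: float,
                      face_thr: float = 0.90, voice_thr: float = 0.85,
                      lip_thr: float = 0.80) -> bool:
    """All three modalities must clear their thresholds. Defeating the
    combination requires forging the face, the voiceprint, and the lip
    motion for an unpredictable phrase, all at once and in real time."""
    return (face_score >= face_thr
            and voice_score >= voice_thr
            and lip_sync_score >= lip_thr)
```

Because the phrase is generated per session with a cryptographic RNG, an attacker cannot pre-render a matching deepfake; they would have to synthesize face, voice, and lip motion live, which is the bar Blair expects this combination to set.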

David Puner: And “liveness detection” — that’s just what it sounds like? Making sure someone’s actually real and not spoofing?

Blair Cohen: Exactly. You nailed it 100%.

David Puner: As we move toward the end of the conversation here, we always like to ask: what can organizations do about all of this?
How can they prepare for the next generation of cyber threats and authentication risks?

Blair Cohen: Humans are your biggest vulnerability.
In digital channels, most enterprises already have plenty of tools deployed — tools that can detect anomalies, tools that can detect AI-generated content, and so on.
But when it comes to humans? We’re the weak point.

Blair Cohen: If you walk into a brick-and-mortar store with a fake ID created using fraud-as-a-service, it’ll look perfect. And most of the time, the human checking it won’t know the difference.
Same thing online — if someone is manually reviewing documents or faces, they have almost no ability to tell real from fake anymore.

Blair Cohen: So it’s time. We’ve got to get rid of all human processes when it comes to identity verification.
We need to deploy tools — automated systems that can analyze, detect anomalies, and flag GenAI content in real time.

David Puner: It probably goes without asking, but you mentioned fraud as a service. Fraud as a service is presumably thriving these days?

Blair Cohen: It has gone crazy.
I first discovered this about a year and a half ago. Back then, you could pay a set amount — say $200 a month — and generate an unlimited number of attacks.
But the quality of those attacks was low. You’d see grammar mistakes, logo issues, bad formatting — things like that.

Blair Cohen: But now? It’s completely different.
Today’s phishing campaigns are flawless — perfect grammar, perfect branding, even multilingual translations that are spot-on.
Everything looks legitimate.

Blair Cohen: And the price has gone up. That same fraud-as-a-service platform that used to cost $200 now goes for $2,000 a month.
But the attacks are high-quality and unlimited.
It’s like the “platinum tier” of cybercrime.

David Puner: Unlimited attacks — that’s the platinum tier.

Blair Cohen: Yep.

David Puner: Unbelievable. Although, very believable — especially considering everything we talk about on this program.

David Puner: So now more than ever, it goes back to zero trust — never trust, always verify.
How do you practice that in your day-to-day life — professionally or personally?

Blair Cohen: Just like you said: I never trust, and I always verify — no matter how convincing something may look.

Blair Cohen: At AuthenticID, we house very sensitive data from some of the world’s largest enterprises and governments.
So we take security extremely seriously.

Blair Cohen: We go through continuous security training from a company called KnowBe4.
Every single week, everyone at the company completes a new certification or training module.
We stay on top of the latest exploits and always stay vigilant.

David Puner: Does it ever surprise you — among your clients or day-to-day contacts — how inconsistent cyber hygiene can be?

Blair Cohen: I think people know what they should do — but implementing it across a massive enterprise is really tough.

Blair Cohen: I’d love to see a world where we get rid of usernames and passwords entirely.
To me, all authentication should be biometric — that’s the safest path.
But when you’ve got 70,000 employees, switching them all over to a new system? That’s not easy.

Blair Cohen: Still, if everyone moved to biometric authentication, the world would be a much safer place.

David Puner: Blair Cohen, founder and president of AuthenticID — thank you so much for coming onto the podcast.
Really appreciate it. It’s been great speaking with you.

Blair Cohen: It’s been my pleasure, and I appreciate your time today. Thank you very much, David.

David Puner: Alright — there you have it. Thanks for listening to Security Matters.
If you like this episode, please follow us wherever you do your podcast thing so you can catch new episodes as they drop.
And if you feel so inclined, leave us a review — we’d appreciate it, and so will the algorithmic winds.

David Puner: Got questions or comments?
Are you a cybersecurity professional with an idea for an episode? Drop us a line at: [email protected].

David Puner: We hope to see you next time.