Deepfake-enabled phishing is an accelerating cyberthreat, blending human psychology with AI precision. It raises critical questions about the effectiveness of security awareness training.
Does awareness training actually reduce deepfake risks?
The short answer is yes.
Our data indicates a strong link between user awareness and reduced vulnerability to deepfake-enabled phishing attacks. But to understand why, we need to go beyond click-rates and look deeper.
Let’s start by challenging a common misconception: that training is only effective if no one ever clicks on the phishing link. That’s not how humans work, and it’s not how attackers operate either. A 2.5% click rate isn’t zero, but in the world of phishing defence, it reflects strong user judgment under pressure.
The truth is, click rate is a crude metric. It only tells you who engaged, not what happened next.
Deepfake phishing attacks are designed to manipulate action, not just attention. Fraudsters use cloned voices and synthetic video to build trust, influence emotions, and create urgency. The real goal? Get someone to do something:
Approve a wire transfer.
Share sensitive files.
Reset credentials.
Our simulations focus on action rate, not just click rates, because that’s what real attackers care about too.
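To make the distinction concrete, here is a minimal sketch of how the two metrics might be computed from simulation results. The record structure, field names, and sample data are illustrative assumptions for this newsletter, not Breacher.ai’s actual reporting schema:

```python
# Minimal sketch: click rate vs. action rate from phishing-simulation results.
# The fields and sample data below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SimulationResult:
    user: str
    clicked: bool  # engaged with the lure (opened the link, took the call)
    acted: bool    # took the harmful action (approved transfer, reset credentials)

results = [
    SimulationResult("alice", clicked=True,  acted=False),  # engaged, then stopped
    SimulationResult("bob",   clicked=True,  acted=True),   # engaged and complied
    SimulationResult("carol", clicked=False, acted=False),  # ignored the lure
]

click_rate = sum(r.clicked for r in results) / len(results)
action_rate = sum(r.acted for r in results) / len(results)

print(f"Click rate:  {click_rate:.1%}")   # who engaged
print(f"Action rate: {action_rate:.1%}")  # who did what the attacker wanted
```

The gap between the two numbers is the point: a user who clicks but then refuses to act is a partial success story that click rate alone would count as a failure.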
💡 To put it plainly: trained users are far less likely to fall for a deepfake, and far less likely to act on one.
At first glance, an action rate of 16.45% may not sound catastrophic. But context is everything:
👉 For an SMB with 200 employees, that could mean 30+ people taking harmful action in a real attack (16.45% of 200 ≈ 33). It only takes one to trigger a fraudulent payment, leak sensitive data, or grant access to your systems. The result? Financial loss, reputational damage, and regulatory scrutiny, all from a single, preventable moment.
👉 For an enterprise, the scale multiplies, and so does the risk. A deepfake doesn’t just exploit one user; it opens the door to lateral movement. That 16.45% could include someone in finance, HR, IT, or the C-suite. From there, attackers can impersonate, escalate, and entrench.
In both cases, the cost of inaction is disproportionately higher than the cost of prevention. Training does not eliminate risk, but it transforms the odds in your favour.
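As a quick sanity check on the numbers above, the expected blast radius scales linearly with headcount. A hypothetical back-of-the-envelope calculation (the headcounts are illustrative, not from our data):

```python
# Back-of-the-envelope: expected number of users taking harmful action
# at a given action rate. Headcounts are illustrative examples.

ACTION_RATE = 0.1645  # the 16.45% action rate cited above

for headcount in (200, 2_000, 20_000):
    expected_actors = headcount * ACTION_RATE
    print(f"{headcount:>6} employees -> ~{expected_actors:.0f} likely to take harmful action")

# 200 employees -> ~33, matching the "30+ people" figure for an SMB
```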
Generic training won’t cut it anymore. Companies need training that reflects how deepfake-enabled attacks really unfold.
We begin by leveling the room, giving everyone across departments a shared understanding of how synthetic deception works. Then, we simulate it. Live voice clones. Realistic deepfake video. Scenarios based on your organisation’s structure, roles, and risk surface.
Where we find gaps, we follow up with concise, targeted video modules delivered straight to those who need them.
This is not tick-box training. It’s experiential, function-specific, and aligned to the way modern attackers think.
Deepfakes are experiential by nature. If you’ve never seen one, you won’t know how convincing they are, or what it feels like to face one in a real moment of urgency.
Our training uses immersive case studies and live demos tailored to your teams: HR screening candidates, finance authorising payments, IT helpdesk verifying identities. We show people what it looks like, and what to do.
We run simulations quarterly, not weekly. Why? Because more does not mean better. In fact, frequent, generic phishing tests often lead to user fatigue. People click, roll their eyes, and move on.
Instead, we run fewer, harder, more realistic simulations: deepfakes that mirror attacker tactics (voice, video, emotional nuance), built to pressure-test your processes, policies, and controls.
💡 Example: the recent Scattered Spider attacks, which targeted IT helpdesks with simple password reset requests. We simulate the same approach, layered with voice cloning and synthetic identity. Our benchmark is simple: if we can break in, attackers can too.
After each simulation, we don’t blast everyone with the same video. We deliver relevant training only to those who need it. The content is short, focused, and tied directly to the behavior we observed.
That means your team spends less time watching, and more time learning what matters.
Let’s be honest:
Is training 100% effective across every organisation? No.
Is it a silver bullet? Absolutely not.
Does it measurably reduce risk? Yes.
We recently saw a security leader express concern because their user base showed a 2.5% click rate. Honestly? That’s a win. In this threat landscape, that number reflects awareness, caution, and decision-making under pressure.
Phishing, and now deepfake-enabled phishing, has been with us for decades. We haven’t “solved” it. No single vendor, individual, or AI model ever will.
It will require a layered defence: people, process, and technology working together.
And people only become part of that defence when they’ve seen what deception looks like, and know how to respond.
Jason Thatcher is the founder of Breacher.ai and has a background spanning red teaming, adversary simulation, and security awareness. Breacher.ai takes a human-first approach to deepfake defense by building awareness through simulations and practical training, instead of fear tactics.
Special thanks to Aarti Samani, a Breacher.ai partner who collaborated with Jason Thatcher on this newsletter. Samani is an AI deepfake fraud prevention expert and the CEO of Shreem Growth Partners. Find her on LinkedIn.
Like What You Read? 👀
Breacher.ai is all about helping organizations prepare for Gen AI threats like deepfakes. If you’re curious about deepfakes, we’d love to connect. Check us out at Breacher.ai or connect with us on LinkedIn.