💬 Cybercrime Has a New Voice—And It’s Yours

New AI-powered attacks mimic, adapt, and deceive faster than you can hang up.

The next cyber threat will look a little different than what we’re used to. It will adapt, impersonate, and act on its own. In fact, we’re facing a new era where AI agents can think, react, and deceive in real time—all while using deepfakes as their voice.

Don’t believe me? Let’s take a closer look!

Real-Time AI

My team conducts authorized red team engagements for our clients, and the thing that concerns us most is the rapid rise of agentic AI: AI that listens and responds mid-attack, changing tone, phrasing, or strategy in real time. This is a threat we’re not quite prepared for. Think of an AI agent as a very smart chatbot that can be programmed to engage users and trick them.

These are autonomous AI attackers and bots.

They don’t just follow scripts. They pivot strategies and manage interactions like a human attacker would, except faster, and they tailor their persuasion to whatever prompt they’ve been given.

These aren’t your old-school deepfakes. They are interactive digital personas that trick users into trusting them. While we haven’t seen attacks like this in the wild yet, our testing indicates that they are possible today.

Using deepfake audio in combination with agentic AI, we’ve measured how organizations respond and what percentage of a user base is unable to detect them.

✅ The good news: Based on our testing, the majority of a user base can detect deepfake audio.

🛑 Cause for concern: Still, as agentic AI matures in parallel with deepfakes, the combination remains a rapidly growing threat.

How This Will Manifest

Today, it will most likely be through audio: phone calls, VoIP, or social media messaging that relies on voice. Initially, it will be spam and robocalls. Think of those annoying extended car warranty calls.

I believe scams will be next, such as those SMS texts you get about unpaid tolls. The next step may be a callback phone number staffed by an interactive agent that collects your payment details.

Combining deepfakes with agentic AI opens a new level of danger: impersonating trusted voices to social engineer employees.

The Unique Threats of Deepfake

What’s concerning about this technology is that the conversations can be recorded and transcribed in near real time.

There are also very few guardrails to stop these types of scams today. In our testing, we pitted our own teams against an agentic AI bot with all safety guardrails removed. The bot persistently attempted to get a user to install an application, even after being challenged repeatedly.

There’s no kill switch for these conversations, leaving a user to outright refuse or end the call as the best options. Teaching people how to stop real-time AI is a losing battle. But, the red flags still apply no matter how advanced the technology is. 

What You Can Do Now

Train for judgment, not just awareness.

Focus on behavioral red flags; no matter how convincing the voice is, the behavior still gives the scam away if you look closely. Here are some obvious examples:

  • Is there a sense of urgency or pressure to act? 

  • Does the conversation involve money or sensitive data?

  • Is this type of ask irregular or unexpected? 
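To make the checklist concrete, the three questions above can be sketched as a toy screening function. This is purely illustrative: the keyword lists, the `expected_request` flag, and the threshold are assumptions invented for this sketch, not a detection method from our testing, and real attacks won’t be caught by keyword matching alone.

```python
# Toy sketch of the three red-flag questions as a transcript screen.
# Keyword lists and threshold are illustrative assumptions only.

URGENCY = {"immediately", "right now", "urgent", "act fast"}
SENSITIVE = {"password", "wire", "payment", "gift card", "invoice"}

def red_flag_score(transcript: str, expected_request: bool = True) -> int:
    """Count how many of the three behavioral red flags a message trips."""
    text = transcript.lower()
    score = 0
    if any(k in text for k in URGENCY):    # urgency or pressure to act?
        score += 1
    if any(k in text for k in SENSITIVE):  # money or sensitive data?
        score += 1
    if not expected_request:               # irregular or unexpected ask?
        score += 1
    return score

def should_escalate(transcript: str, expected_request: bool = True) -> bool:
    """Two or more flags: pause and verify identity out-of-band."""
    return red_flag_score(transcript, expected_request) >= 2
```

The point of the sketch is the judgment it encodes, not the code: when two or more flags fire, stop and verify through another channel before acting.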

Additionally, I recommend the following:

  • Simulate modern threats. Replicate live AI-driven deception, not just static phishing emails.

  • Add friction at critical moments. Validate identity through multiple channels before acting on requests.

  • Educate continuously. Train your team on behavioral anomalies and trust traps.

A Final Warning

Agentic AI threatens not just our infrastructure, but our perception, judgment, and response time. The real danger is less about technical failure and more about lost trust.


Are you ready for that?

About the Author

Jason Thatcher is the founder of Breacher.ai and has a background spanning red teaming, adversary simulation, and security awareness. Breacher.ai takes a human-first approach to deepfake defense by building awareness through simulations and practical training instead of fear tactics. Feel free to connect with him on LinkedIn!

Like What You Read? 👀

Breacher.ai is all about helping organizations prepare for generative AI threats like deepfakes. If you’re curious about deepfakes, we’d love to connect. Check us out at Breacher.ai or connect with us on LinkedIn.
