A terrifying scam is spreading fast: criminals use AI to clone a loved one’s voice and call families with a fake emergency, like a kidnapping or accident, then pressure them to wire money immediately.
In one recently reported case, a mother received a call that appeared to come from her daughter’s number and sounded exactly like her daughter; she lost thousands of dollars before realizing it was a scam.
This is not rare anymore. McAfee’s research found that 1 in 10 people surveyed said they had received a message from an AI voice clone, and 77% of those victims said they lost money.
Meanwhile, the Federal Trade Commission says Americans reported $12.5B in fraud losses in 2024, with imposter scams totaling $2.95B.
Why this scam works: it hijacks your “voice = truth” instinct
Voice is emotional proof. We’re wired to trust it, especially when it sounds like our child, parent, or partner. With modern cloning tools, scammers may only need a short sample from social media videos, voicemails, or other recordings to generate convincing audio.
Analyses show that cloning a loved one’s voice can take as little as 30 seconds of audio, so the practical question becomes how to spot and stop these fake emergency calls in real time.
Olga Scryaba, Head of Product at isFake.ai, puts it simply:
“Voice used to be a strong ‘trust signal.’ Now it’s just another file that can be copied. If a call tries to rush you into money or secrecy, treat the voice as unverified until you confirm it through a second channel.”
The 60-second “don’t get played” protocol for families
If you get a terrifying call from a “loved one,” do this in order:
1. Interrupt the script with a “proof question.”
Ask something a cloner won’t know: “What was the name of our first pet?” “What did we eat last Sunday?” (Avoid info visible on social media.)
2. Use a family safe word (yes, really).
The National Cybersecurity Alliance and similar groups recommend agreeing on a code word or phrase for emergencies and making it universal across the family (including grandparents).
3. Hang up and call back on a saved number.
Do not trust caller ID. Call the person (or another family member) using a number you already have saved.
4. Slow the money down.
Scammers rely on urgency. Any request to wire money, buy gift cards, or send crypto during an “emergency call” is a flashing red flag.
5. If you have a recording, verify it.
If the scam left a voicemail/voice note, run it through an AI audio checker before you treat it as evidence. (This is where detection tools help when your ears can’t.)
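For readers who want to see what that last step can look like in practice, here is a minimal sketch of sending a voicemail to a detection service. It assumes a hypothetical API: the endpoint URL, authentication header, and `synthetic_probability` field are illustrative placeholders, not a documented isFake.ai integration, so swap in whatever your chosen checker actually exposes.

```python
import requests

# Hypothetical endpoint and key: substitute the real values from whatever
# AI-audio detection service you use (this is not a documented isFake.ai API).
DETECTOR_URL = "https://api.example-voice-detector.com/v1/analyze"
API_KEY = "your-api-key-here"


def check_voicemail(path: str) -> None:
    """Upload an audio file and print a rough 'likely synthetic' verdict."""
    with open(path, "rb") as audio:
        response = requests.post(
            DETECTOR_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": audio},
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()

    # Assumed response shape: {"synthetic_probability": 0.87}
    score = result.get("synthetic_probability", 0.0)
    if score >= 0.7:
        print(f"Likely AI-generated ({score:.0%}). Treat the call as unverified.")
    elif score >= 0.3:
        print(f"Inconclusive ({score:.0%}). Verify through a second channel.")
    else:
        print(f"Probably authentic ({score:.0%}). Still confirm before sending money.")


if __name__ == "__main__":
    check_voicemail("suspicious_voicemail.mp3")
```

Whatever tool you use, treat the score as one more signal, not a verdict; calling back on a saved number still matters most.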
Quick “audio red flags” (helpful, but not foolproof)
Even good fakes can slip past your ear. Still, listen for:
- Odd timing: unnatural pauses, latency before responses, or answers that dodge follow-up questions
- Too-clean speech: perfect pronunciation, limited natural breathing, weirdly consistent cadence
- Context gaps: vague details (“I’m in trouble, send money”) instead of specifics only the real person would share
As Olga notes:
“The biggest tell is usually not the sound, it’s the behavior. Real people will let you verify. Scammers will punish you for trying.”
The trust problem nobody’s ready for: “the liar’s dividend”
As deepfakes spread, bad actors can also dismiss real evidence as fake. Researchers call this the “liar’s dividend”: the mere existence of deepfakes makes it easier to deny authentic recordings.
That’s why verification workflows matter: you want proof that holds up even when someone claims it’s AI.