AI Voice Clone Kidnapping Scam: ‘Dad, They’ve Got Me’
A Vancouver, Washington dad heard his teenage daughter’s cloned voice in a fake kidnapping call this week — the second AI voice scam Vancouver police logged that same day.

What happened
On Thursday, a Vancouver, Washington father answered his phone and heard his teenage daughter's voice say, “Dad, they've got me.” She sounded terrified. She sounded like she was crying. A man then took over the call: “If you want to see your daughter again, follow my instructions. Don't do anything stupid. Don't call anybody. Don't text anybody.” The father paused. He verified. And he saved himself from becoming another statistic in one of the fastest-growing categories of AI-enabled fraud in America. His was the second AI voice-clone scam Vancouver police logged that same day.
Part One: What happened — the Vancouver call
In a May 1, 2026, KGW News report by Jake Holter, the Vancouver, Washington father — whose family the station kept anonymous for their safety — described receiving a call that will haunt him for the rest of his life. The call came from a restricted number. The father answered, as most parents would. What he heard stopped him cold.
“Dad, they've got me.”
It was his teenage daughter's voice. He knew her voice. He had heard it every day for years. It was her. The voice was crying. It was panicked. It sounded exactly like a terrified teenager in the hands of someone who meant her harm.
Then the man's voice came on. The scammer — calm, controlled, rehearsed — delivered his demands: “If you want to see your daughter again, follow my instructions. Don't do anything stupid. Don't call anybody. Don't text anybody.”
The father later told KGW that in that moment he was not thinking about AI. He was not thinking about voice cloning. He was not thinking about the possibility that the voice might be fake. He was thinking about his daughter. But something made him pause. He did not send money. He did not follow the instructions. Instead, he took a moment to verify.
He called his daughter directly on a number he already knew. She answered. She was safe. She had never been taken. The call was a complete fabrication — a “virtual kidnapping” powered by AI voice cloning.
“I knew that this was out there, but I didn't realize that it could be used this way,” the father told KGW.
According to the station, his was the second AI voice-clone scam Vancouver police logged on that same day. Two families, two calls, two cloned voices — all in a single 24-hour period in one mid-sized Washington city. The father acted correctly. He did not pay. He verified. But the KGW report makes clear that not every victim is so lucky.
Part Two: How the scam works — a technical and tactical breakdown
The Vancouver case is a textbook example of a rapidly growing category of AI-enabled fraud that law enforcement officials across the country are warning about. The voice the father heard was not a replayed recording of his daughter. It was a synthetic voice generated in real time by an AI model trained on a short sample of her real voice.
Step 1: Audio harvesting
The scammer first needs a sample of the target's voice. In most cases, this sample is harvested from publicly available sources. Jon Down, an AI professor at the University of Portland who spoke with KGW, explained: “They likely would've gotten her voice from somewhere on the internet. It could be maybe a YouTube recording she'd made or some other social media post, anything along those lines to get just a really pretty short snippet of her voice and then can use that and turn it into really whatever they want.”
Common sources scammers harvest from include:
- Social media videos: TikTok, Instagram Reels, YouTube Shorts, Facebook videos
- Voicemail greetings that include the person's name
- School or sports videos: choir performances, debate team recordings, sports interviews
- Family videos posted online: birthdays, holidays, recitals, vacations
Modern voice cloning technology requires only three to ten seconds of clean audio to produce a convincing clone. A single 15-second TikTok video is more than enough.
Step 2: Model training
Once the audio sample is harvested, the scammer feeds it into a voice-cloning model. These models are widely available for free or for a few dollars per month. Some run entirely in a web browser. Others can be downloaded and run on a standard laptop. The model analyzes the unique characteristics of the target's voice:
- Pitch range and variability
- Formant frequencies (the resonant frequencies of the vocal tract)
- Cadence and rhythm patterns
- Accent markers and regional pronunciation
- Breathing rhythms and pauses
- Emotional inflection patterns
The training process takes anywhere from a few seconds to a few minutes, depending on the model and the length of the audio sample.
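To make that feature list concrete, here is a minimal Python sketch that uses the open-source librosa audio library to measure a few of the characteristics above (pitch statistics, speech rate, pause lengths) from a clip. It is purely illustrative: real cloning models learn these traits implicitly inside a neural network rather than through hand-built features like these, and nothing here resembles any scammer's actual tooling.

```python
# Illustrative only: a few of the voice characteristics listed above,
# measured explicitly with librosa. Cloning models learn equivalents
# implicitly during training rather than computing features like these.
import librosa
import numpy as np

def voice_profile(path: str) -> dict:
    y, sr = librosa.load(path, sr=16000)  # load audio as 16 kHz mono

    # Pitch range and variability: track the fundamental frequency (F0)
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    f0 = f0[voiced]  # keep only frames where the speaker is voicing

    # Cadence and rhythm: rough speech rate from onset density
    onsets = librosa.onset.onset_detect(y=y, sr=sr)

    # Breathing rhythms and pauses: gaps between non-silent intervals
    intervals = librosa.effects.split(y, top_db=30)
    gaps = [(b - a) / sr for (_, a), (b, _) in zip(intervals[:-1], intervals[1:])]

    return {
        "pitch_mean_hz": float(np.nanmean(f0)),
        "pitch_std_hz": float(np.nanstd(f0)),
        "onsets_per_sec": len(onsets) / (len(y) / sr),
        "mean_pause_sec": float(np.mean(gaps)) if gaps else 0.0,
    }
```

Even a clip as short as the 15-second TikTok mentioned earlier gives fairly stable estimates for measurements like these, which is part of why such small samples go so far.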
Step 3: Real-time voice generation
Once the model is trained, it can generate new speech in the target's voice from any text input. Advanced models can operate in real time, generating speech as fast as a human can speak — with latencies as low as 200 milliseconds. This is what makes the scam so effective. The scammer does not need to prerecord a message. They can speak into a microphone, and the AI transforms their voice into the target's voice in real time. The scammer can then:
- Respond to the victim's questions naturally
- Adjust the emotional tone based on the victim's responses
- Extend the call for as long as necessary to extract money
- Vary the script dynamically
In the Vancouver case, the scammer used this technology to create a brief opening from the “daughter” — “Dad, they've got me” — before taking over the call himself. That brief sample was enough to convince the father that his daughter was genuinely in danger.
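The arithmetic behind that latency figure is simple: at a 16 kHz sample rate, 200 milliseconds of audio is only 3,200 samples. The Python sketch below is a conceptual illustration of the streaming loop, with a hypothetical convert_chunk placeholder (it just passes audio through) standing in for a trained model. It shows why a live, back-and-forth conversation is feasible, not how any real tool works.

```python
# Conceptual sketch of a real-time streaming loop. `convert_chunk` is a
# hypothetical placeholder, not a real model: the point is the timing,
# since audio processed in roughly 200 ms chunks lags live speech by
# only a fraction of a second.
import numpy as np

SR = 16_000                    # sample rate, in Hz
CHUNK_MS = 200                 # per-chunk latency budget from the article
CHUNK = SR * CHUNK_MS // 1000  # 3,200 samples per chunk

def convert_chunk(chunk: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a real-time voice-conversion model."""
    return chunk  # a real model would re-synthesize this in the target voice

def stream(mic_audio: np.ndarray):
    """Yield converted audio chunk by chunk, as a live call would."""
    for start in range(0, len(mic_audio), CHUNK):
        yield convert_chunk(mic_audio[start:start + CHUNK])

# Simulate two seconds of microphone input: ten 200 ms chunks
mic = np.zeros(SR * 2, dtype=np.float32)
print(sum(1 for _ in stream(mic)), "chunks of", CHUNK_MS, "ms each")
```

Each chunk can be converted while the next one is still being spoken, which is how a scammer can hold a natural conversation in someone else's voice.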
The tactical process: the virtual kidnapping script
The technical capability is only half of the scam. The other half is the social engineering script — the psychological manipulation designed to prevent the victim from verifying. Anita Sarma, a computer science professor at Oregon State University interviewed for the KGW report, explained: “Social engineering is what happens when a family member calls you, and they are in panic mode and in a very stressful situation.”
The virtual kidnapping is a four-phase impersonation scam. The script follows a predictable pattern:
Phase 1: The shock opening. The scammer begins with a brief audio clip from the cloned voice: “Dad, they've got me.” Or “Mom, help.” Or “Grandma, I'm scared.” This opening is designed to do one thing: trigger an immediate emotional response that bypasses rational thought. The victim hears the voice of their loved one in distress, and their brain goes into fight-or-flight mode.
Phase 2: The demand. A second voice — the scammer — takes over. The demands are always urgent, always threatening, and always designed to prevent verification: “If you want to see your daughter again, follow my instructions.” “Don't call anybody. Don't text anybody. The phone is being monitored.” “If you call the police, she dies.” “You have 30 minutes to send the money, or we will hurt her.”
Phase 3: The payment request. The scammer then demands payment — typically through wire transfer, cryptocurrency, gift cards, or payment apps like Zelle or Venmo. The amounts vary, but they are usually set at a level that is painful but potentially affordable: $1,000 to $10,000.
Phase 4: The threat escalation. If the victim hesitates, the scammer escalates. They may play another brief clip from the cloned voice — a scream, a cry for help. They may threaten to hurt the victim's loved one in specific ways. They may set a hard deadline and count down. The goal is to keep the victim in a state of panic until the money is sent.
Why the verification step is critical
The Vancouver father escaped because he paused. He did not send money. He found a way to verify that his daughter was safe. But the scam is designed to prevent exactly that pause. The scammer's instructions — “Don't call anybody. Don't text anybody” — are not random. They are a deliberate tactic to prevent the victim from picking up the phone and calling their loved one directly.
If the victim obeys, they will never discover that the loved one is safe. They will remain in the scammer's emotional control until they send the money — and sometimes even after. The Vancouver father broke the script. He did not obey. He called his daughter. She answered. The scam collapsed.
Why it matters
Two cloned-voice calls in one city in one day is not an anomaly. It is a data point in a national pattern that law enforcement, researchers, and consumer reporters have been documenting for more than a year.
Part Three: The national pattern — this is not isolated
The Vancouver case is not an isolated incident. It is one of dozens of similar cases documented across the country in the past year.
CBS San Francisco: the worsening trend
In a January 2026 report, CBS San Francisco covered the same phenomenon, noting that AI-powered virtual kidnapping scams were “on the rise” across Northern California. The report quoted law enforcement officials who said that scammers are using publicly available social media content to clone voices and then demand ransom from family members. CBS noted that the scams are becoming more convincing as AI technology improves.
WHSV: the mother who lost thousands
In a March 2026 report, WHSV covered the case of a mother in Virginia who lost thousands of dollars to a fake kidnapping call using an AI clone of her daughter's voice. The mother received a call from what sounded exactly like her daughter. The caller claimed to have taken the daughter and demanded ransom. The mother, panicked, sent money before she had a chance to verify. By the time she reached her daughter — who was safe at school — the money was gone. WHSV reported that the mother's case was one of several in the Shenandoah Valley in a single month.
InvestigateTV: the national warning
In a January 2026 national report, InvestigateTV documented AI voice cloning scams targeting families across multiple states. The report included interviews with victims, law enforcement officials, and technology experts. InvestigateTV found that the scams are organized, cross-jurisdictional, and growing rapidly. The same phone numbers and payment accounts are being used across multiple states, suggesting that the scammers are operating at scale.
The FBI's warning
As AuthentiLens reported in our coverage of the FBI Charlotte warning, the FBI has made AI-enabled fraud — including voice-clone kidnapping scams — a 2026 enforcement priority. James Kaylor, a supervisory special agent at the FBI's Charlotte field office, told CBS 17 that AI tools now appear in “most scams” and that voice cloning is leading the threat list. “You don't have to be super tech savvy to use these AI tools,” Kaylor said. “So, the prevalence of them in these scams has just skyrocketed.” That warning aired just days before the Vancouver calls.
As we reported in our coverage of the FBI's 2025 IC3 report, the bureau logged $893 million in AI-enabled fraud losses in 2025 — the first time AI was broken out as its own category. And as the FTC's 2025 social media scam data showed, AI voice cloning is now the leading threat vector in a category that cost Americans $2.1 billion last year.
Part Four: The psychology of virtual kidnapping scams
Understanding why these scams are so effective requires understanding the psychology of fear.
The evolutionary hardwiring
Humans are evolutionarily hardwired to respond to threats to their offspring with immediate, overwhelming action. A parent who hears their child in distress does not stop to analyze. They act. This hardwiring was essential for survival in the ancestral environment. A parent who paused to verify that a predator was real before rescuing their child would not have passed on their genes. But in the modern environment, that same hardwiring is a vulnerability. Scammers exploit it directly.
The role of panic
Panic is not a state of mind conducive to critical thinking. When a person is in panic, the prefrontal cortex — the part of the brain responsible for rational analysis, planning, and impulse control — is effectively offline. The amygdala — the fear center — takes over. In this state, the victim is not capable of asking, “Is this voice real?” They are only capable of asking, “How do I make the fear stop?” The scammer's demands — send money, don't call anyone, don't verify — are designed to keep the victim in this state until the money is transferred.
The same emotional manipulation playbook appears across scam categories, from the AI wedding ring scam to grandparent impersonation calls. What makes the voice-clone kidnapping variant uniquely dangerous is that it bypasses visual skepticism entirely. There is nothing to see — only something to hear — and the voice sounds exactly right. These are among the most sophisticated social engineering tactics currently deployed at scale.
The scarcity of verification opportunities
The scammer's instruction not to call anyone is a crucial part of the script. If the victim obeys, they will not discover that their loved one is safe. But even if the victim disobeys, verification can be difficult. A parent may not have their child's phone number memorized. A grandparent may not know how to use caller ID. A spouse may be at work and unable to answer. The scammer's goal is to exploit these gaps. They want the victim to feel that the only way to resolve the situation is to pay.
Why the Vancouver father succeeded
The Vancouver father succeeded because he did two things that broke the scammer's script. He paused — he did not immediately obey the scammer's demands. He took a moment, maybe just a few seconds, to think. Then he verified — he called his daughter directly on a number he already knew. She answered. The scam collapsed. These two actions — pause and verify — are the difference between becoming a victim and walking away. Real emergencies do not collapse if you take 60 seconds to verify. A scam will.
How to protect yourself
The following protections are distilled from the Vancouver case, the expert commentary from KGW, and AuthentiLens's broader research into AI voice-clone fraud.
1. Set a family code word tonight
This is the single most effective defense. Choose a word or short phrase that would not be guessable from social media — not your pet's name, not your street, not anything that appears in your family's online presence. Agree on it in person. Make sure every family member knows it.
“Have these conversations at the dinner table,” Anita Sarma told KGW. “Talk about ‘Hey, if someone is in an emergency, we should use this safe word.’” If a relative calls with an emergency — real or fake — demand the code word before taking any action. A real relative will know the word. A voice clone will not. This is especially critical when protecting elderly parents from scams, as older adults are disproportionately targeted by grandparent voice-clone calls and are more likely to act quickly under pressure.
2. Audit what your family publishes online
Rudrajit Choudhuri, a Ph.D. student in computer science at Oregon State University, told KGW that families need to be proactive about their digital footprint: “There are settings on every social media platform on everything that you put your content on to stop these models from training on it.” Set Instagram, TikTok, and Facebook accounts to private. Avoid posting children's voices in public videos. Remove or restrict voice-bearing content of minors. Consider removing voicemail greetings that include your name. For parents of teenagers, have a conversation about what they are posting — a fun TikTok video is training data. See our guide on online safety tips for families for a full checklist.
3. If you receive a panic call, slow down and verify on a separate channel
The scammer's instructions — “Don't call anybody” — are designed to prevent verification. Disobey them immediately. Hang up. Call the relative's known number — not the number that just called you. If you cannot reach the relative, call another family member or the police non-emergency line. For more detail on identifying these calls in real time, see our guide on how to tell if a phone call is a scam.
4. Demand the code word every time
When you receive a call from someone claiming to be a relative in distress — or from someone claiming to hold them hostage — demand the code word before doing anything else. If the caller refuses, dodges, or says “just send the money first,” that is the scam confirming itself. Hang up. Also remember that deepfakes can replicate faces as well as voices, so even a video call is not proof of identity without the code word.
5. Report virtual kidnapping calls to police and the FBI
Call your local police department to log the call and flag the number. File a complaint with the FBI's IC3 at IC3.gov. Your report could help connect the dots between calls across jurisdictions. Two families in Vancouver received calls the same day — a pattern that only becomes visible when every victim reports.
6. Scan it with AuthentiLens
If you receive a suspicious voicemail or audio message, upload the clip to AuthentiLens. We analyze it for voice-clone artifacts: unnatural frequency patterns, inconsistent breathing, synthesized speech markers, and other telltale signs of AI generation — in seconds, before you reply, before you send money, before the panic takes over.
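For readers who want to see what those artifacts look like in code, here is a deliberately simplified Python sketch of two of the signals named above. To be clear, this is not AuthentiLens's detection pipeline; production detectors combine many such features with trained classifiers and model fingerprints. It only illustrates the idea of a spectral cue and a breathing cue.

```python
# A deliberately simplified illustration of two voice-clone signals,
# not AuthentiLens's actual pipeline. Real detectors feed many such
# features into trained classifiers rather than using raw thresholds.
import librosa
import numpy as np

def quick_artifact_check(path: str) -> dict:
    y, sr = librosa.load(path, sr=16000)

    # "Unnatural frequency patterns": spectral flatness over time.
    # Synthetic speech sometimes varies less than a human voice does.
    flatness = librosa.feature.spectral_flatness(y=y)[0]

    # "Inconsistent breathing": humans pause to breathe; cloned audio
    # sometimes has too few silent gaps, or none at all.
    intervals = librosa.effects.split(y, top_db=30)
    pauses = max(len(intervals) - 1, 0)

    return {
        "flatness_variability": float(np.std(flatness)),   # unusually low = suspicious
        "pauses_per_10_sec": pauses / (len(y) / sr) * 10,  # near zero = suspicious
    }
```

Neither measurement is conclusive on its own. They are starting points, which is why automated analysis across many signals beats judging by ear alone.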
If you have already been scammed
Contact your bank or payment provider immediately — wire transfers and credit card payments may be reversible if caught quickly. File a police report to create a paper trail. File a complaint with the FBI's IC3 at IC3.gov. Tell your family — shame is the scammer's greatest ally, and sharing your experience may protect others from the same call. You did nothing wrong. You were targeted by professionals using the most advanced tools available.
Sources
- ‘Dad, they’ve got me’: Vancouver dad feared for daughter’s life after scammers used AI to clone her voice — KGW News (Portland)
- AI scammers clone daughter’s voice in attempt to extort Vancouver man — KGW (YouTube)
- AI phone scam creates voice replicas of ‘kidnapped’ loved ones to demand ransom — CBS San Francisco
- Mom loses thousands to fake kidnapping call using AI clone of daughter’s voice — WHSV
- AI voice cloning scams target families with fake kidnapping calls — InvestigateTV
