
Robin Panigi, a Burton, Michigan, resident, was awakened from sleep by a phone call. The voice on the line was her daughter's. A man then took over the call, told her the daughter had ruined a drug deal, and threatened to harm the daughter unless Panigi sent money fast.
Panigi sent $800. The scammers demanded more. A bank teller, processing the second transfer, told Panigi to hang up and call her daughter directly.
The daughter was at work, safe. The voice on the phone had been artificial intelligence.
“Since then, we've come up with a code word that my whole family knows,” Panigi told WXYZ Detroit.
A May 11, 2026, report by WXYZ 7 Action News investigative reporter Ruta Ulcinaite documented two distinct AI scam vectors hitting Michigan: voice-clone family-emergency calls targeting parents and deepfake sextortion targeting teenagers as young as 13. Both are growing rapidly. Both require families to adapt their defenses now.
Panigi's ordeal began in the middle of the night. She was asleep when her phone rang. Groggy, disoriented, and worried, she answered.
The voice on the line was unmistakable: her daughter's. It was crying, panicked, saying the kinds of things a daughter in danger would say.
Then a man's voice took over. He told Panigi that her daughter had ruined a drug deal. He threatened to harm her daughter if Panigi did not send money immediately.
“I was awoken from my sleep and wasn't really thinking straight,” Panigi told WXYZ. “And then when I heard my daughter, it was her voice.”
The threat worked. Panigi sent $800. The scammer demanded more. Panigi went to her bank to send a second payment. That is where the scam stopped. A bank teller, processing the transfer, noticed something wrong.
“She said 'call your daughter and see if this is real,'” Panigi recalled. “And I did, and she was safe at work.”
The voice on the phone had been a real-time AI voice clone, generated from a short audio sample of Panigi's daughter, likely scraped from a social media video, a voicemail greeting, or a family post.
The technology behind Panigi's call is the same voice-cloning capability documented in our coverage of the Vancouver, Washington kidnapping scam and the FBI Charlotte deepfake warning.
Scammers need as little as three to ten seconds of a person's voice to generate a convincing clone. They harvest these samples from public TikTok, Instagram Reels, and YouTube Shorts; voicemail greetings; school performance videos posted online; and family videos on public Facebook accounts.
Once the model is trained, the scammer can speak into a microphone, and the AI transforms their voice into the target's voice in real time. The scammer can respond to questions, adjust emotional tone, and extend the call for as long as necessary.
Panigi was awakened from sleep, disoriented, and not thinking clearly. The scammer exploited exactly that vulnerability. The call came at a time when she was least equipped to question it.
The second vector WXYZ documented comes from St. Clair County, northeast of Detroit. Sheriff Mat King told WXYZ that his department has seen two recent cases of AI-generated nude deepfakes used to extort teenagers, including children as young as 13.
Scammers obtain real photographs of local teens from public social media accounts, school athletic websites, or yearbook pages. Using AI image generators, they create fabricated images. They then contact the teen, demand payment to suppress the images, and threaten to share them with friends, family, and classmates if the teen does not pay.
“The point is the AI is so realistic, these teenagers believe, and they start to panic,” King said. “It's very frustrating that someone lacks a moral compass and doesn't care that they could really, tragically impact someone's life.”
The psychological impact on teen victims is severe. Many are too ashamed to tell parents or teachers. They may attempt to pay the scammers themselves, using money from part-time jobs or savings. They may become trapped in a cycle of repeated demands.
King's warning echoes a national trend. The FBI has documented a dramatic rise in sextortion cases targeting minors, both those involving real images coerced from the victim and those involving AI-generated fakes.
Khalid Malik, an AI professor at the University of Michigan-Flint College of Innovation and Technology, told WXYZ that the underlying capability gap is structural. AI image and voice generation have become so good that even experts cannot reliably distinguish real from fake.
“You cannot tell with the naked eyes which one is real and which one is fake,” Malik said. “You require now no skills, no money to create any deepfake. Certainly, the time has come that we need to be cautious, we need to verify every voice and any image or any video that we interact with.”
Malik founded a University of Michigan-affiliated deepfake-detection research project in 2024. His warning aligns with the broader research consensus: the barrier to entry for creating convincing deepfakes has fallen to near zero. A scammer with no technical expertise, a few dollars, and a publicly available photo or voice sample can generate fabricated content in minutes.
“The technology has democratized fraud,” Malik said. “What used to require a team of experts now requires a single person with a laptop.”
The teen sextortion vector documented by Sheriff King unfolds in a predictable four-step sequence.
Step 1: Photo Harvesting. The scammer first needs a real photograph of the target. Sources include public social media profiles, school or sports websites, yearbook photos, and community event photos. The scammer does not need anything compromising to start. They need only a clear photograph of the teen's face and enough identifying information to contact them.
Step 2: AI Image Generation. Using an AI image generator, many of which are free or cost a few dollars per month, the scammer creates a fabricated image of the target. The scammer uploads the real photograph, prompts the AI, and the AI produces a synthetic image. As Sheriff King told WXYZ, “the AI is so realistic, these teenagers believe.”
Step 3: Extortion Contact. The scammer contacts the teen through social media direct messages or encrypted messaging apps. The message includes the fabricated image as “proof,” a demand for payment (usually via gift cards, cryptocurrency, or payment apps), a threat to share the image with friends, family, and classmates, and a tight deadline to create urgency.
Step 4: The Psychological Trap. The scam works because of the teen's emotional response: panic, shame, and fear of exposure. The teen sees an image that appears to show them. They may not understand that AI can generate such images. They may be too ashamed to tell anyone. The scammer counts on the teen's silence.
Michigan is not unique. The same two scam vectors are appearing across the country, driven by the same falling cost of voice-cloning and image-generation technology.
AuthentiLens has documented identical voice-clone family-emergency scams in Vancouver, Washington, where a father received a call from his daughter's cloned voice saying “Dad, they've got me,” and in Burton, Michigan, where Panigi received a call from her daughter's cloned voice saying she was in danger. Multiple FBI field offices have warned that voice cloning now appears in “most scams.”
The script is nearly identical across every case: a cloned voice of a loved one in distress, a demand for money, a threat of harm, and an instruction not to call anyone. The scam is designed to exploit the parent's panic and prevent verification. The FBI Internet Crime Complaint Center (IC3) tracks these family-emergency voice-clone calls as a priority category.
The FBI issued a formal warning about sextortion targeting minors, noting that both real-image extortion (where the victim is coerced into sending real images) and AI-generated extortion (where the images are fabricated) are on the rise.
The National Center for Missing & Exploited Children (NCMEC) has reported a sharp increase in AI-generated child sexual abuse material referrals. The technology makes it possible to create convincing fabricated content without a real child ever being photographed, but the harm to the victim whose likeness is used is still real.
The end of the “caller ID” defense. Parents were taught to trust calls from known numbers. Caller ID spoofing has eliminated that defense. The call that appears to come from your daughter's number may have been spoofed, and the voice on the other end may be a real-time clone of your daughter's voice.
The end of the “I would know their voice” defense. Parents believe they would recognize their child's voice immediately. They are right, but that is the vulnerability. The scammer is not using a different voice. They are using a clone of the child's voice. The cloned voice may not be perfect, but in a moment of panic, especially when the parent is awakened from sleep, the emotional response overrides the analytical brain.
The end of the “it's obviously fake” defense. Teens believe they would recognize a fake image. But modern AI image generators have improved dramatically. The artifacts that were obvious a year ago are now subtle or gone. A teen looking at a fabricated image may not be able to spot the telltale signs. They may believe the image is real, or fear that others will believe it is real even if they know it is fake.
Professor Malik's warning is worth repeating: “You require now no skills, no money to create any deepfake.” The barrier has fallen so far that the defenses families relied on for decades are no longer sufficient. New protocols are required, and the families who build them now are the ones who will not lose $800, or $69,000, or $1.6 million to a voice clone or fabricated image.
Read our guide on how to tell if a phone call is a scam and how to tell if a video is a deepfake for the technical warning signs behind both attack vectors.
The AuthentiLens editorial team has distilled the Michigan cases, the expert commentary, and our broader research into concrete protections for families.
Establish a family code word. This is Panigi's takeaway and the single most consequential defense against voice-clone family-emergency calls.
Choose a word or short phrase that would not be guessable from social media. Not the name of a pet. Not a family member's name. Not a notable date. Agree on it in person. Make sure every family member knows it.
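If your family wants help picking something truly random, a short script works. Below is a minimal sketch in Python using the standard library's secrets module; the word list is a placeholder we made up for illustration, and any two unrelated words chosen the same way, and agreed on in person, work just as well.

```python
import secrets

# Placeholder word list for illustration -- swap in any list of words
# that have no connection to your family, pets, or important dates.
WORDS = [
    "lantern", "gravel", "tundra", "velvet", "orbit",
    "saffron", "plywood", "glacier", "mosaic", "harbor",
]

def make_code_word(num_words: int = 2) -> str:
    """Pick unrelated words using a cryptographically secure RNG."""
    return " ".join(secrets.choice(WORDS) for _ in range(num_words))

if __name__ == "__main__":
    # Example output: "glacier saffron" -- share it in person, never by text.
    print(make_code_word())
```

The point of drawing the words at random, rather than inventing them, is that a scammer who has studied your family's social media cannot predict the result.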
If anyone calls claiming to be a relative in distress, even if the voice sounds exactly right, demand the code word before taking any action. Do not accept “I don't have time” or “just send the money.” A real relative will know the word. A voice clone will not.
This rule extends to grandparents too. See our guide on how to protect elderly parents from scams.
For teenagers: make sure your teens know the code word and know to ask for it if anyone, even someone whose voice sounds exactly like yours, calls in a panic asking for money or help.
Hang up and verify. Panigi's bank teller is the reason her loss stopped at $800 instead of multiplying. The teller's advice, “call your daughter and see if this is real,” is the universal protocol.
Do not stay on the line. Do not argue. Do not try to reason with the scammer. Hang up. Then call the relative's known number, the number you already have saved in your phone, not a number the caller provided.
If you cannot reach the relative, call another family member. If you still cannot verify, call the police non-emergency line. A real emergency does not collapse if you take sixty seconds to verify. A scam will.
For more details, see our guide on how to tell if a phone call is a scam.
Locking down your family's social media accounts is the single most important prevention step for deepfake sextortion. Scammers cannot fabricate images of your teen if they cannot find clear photos of your teen.
A scammer needs only one clear photograph and a name to feed an AI image generator. Do not make that photograph easy to find. For more guidance, see our guides on how to spot a fake social media profile and how to tell if a photo is fake or AI-generated.
This conversation is uncomfortable. It is also essential. Tell your teen three things: AI can fabricate a realistic image from a single ordinary photo, so a threat like this can land on anyone; a fabricated image is not their fault and nothing to be ashamed of; and the right response is to come to you immediately, without paying and without replying to the scammer, because they will never be in trouble for telling you.
The FBI's warning on sextortion emphasizes that the most important step is reporting. Scammers target dozens or hundreds of teens simultaneously. A single report can help law enforcement identify the perpetrator and prevent other victims.
Take It Down (takeitdown.ncmec.org) is a free service operated by the National Center for Missing & Exploited Children. It helps minors remove images, both real and AI-generated, from participating platforms including Facebook, Instagram, TikTok, and others.
The user submits a digital fingerprint of the image (a hash, not the image itself). The fingerprint is shared with participating platforms, which use it to detect and remove copies. The service does not require uploading the image. Take It Down is available to anyone under 18. It is free and confidential.
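To make “a hash, not the image itself” concrete, here is a minimal sketch in Python of a cryptographic fingerprint. Treat it as an illustration of the concept only: Take It Down computes its fingerprints locally on the user's device, and we are not describing its actual hashing scheme here.

```python
import hashlib
from pathlib import Path

def fingerprint(image_path: str) -> str:
    """Return the SHA-256 hex digest of the file's bytes.

    The digest identifies an exact copy of the file but cannot be
    reversed to reconstruct the image, which is why a service can
    match and remove copies without ever receiving the image.
    """
    data = Path(image_path).read_bytes()
    return hashlib.sha256(data).hexdigest()

if __name__ == "__main__":
    print(fingerprint("photo.jpg"))  # "photo.jpg" is a hypothetical path
```

A plain cryptographic hash like this one matches only byte-identical copies; matching services typically use perceptual hashes that also survive resizing and re-compression.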
If your family has been targeted by either scam vector, report it immediately.
For voice-clone family-emergency scams: report to the FBI's Internet Crime Complaint Center (IC3) at IC3.gov, to your local police department, and to the FTC at ReportFraud.ftc.gov.
For deepfake sextortion targeting minors: report to the FBI's IC3 at IC3.gov (the FBI tracks sextortion of minors as a priority category), to NCMEC at report.cybertip.org, and to local law enforcement.
You are not expected to become a voice-clone detection expert or a deepfake forensics specialist. That is what AuthentiLens is for.
If your teen has received a threat involving AI-generated images, take these steps immediately: do not pay and do not negotiate, because payment invites repeated demands; preserve the evidence by screenshotting the messages and the sender's profile before blocking the account; report to the FBI's IC3 at IC3.gov, to NCMEC at report.cybertip.org, and to local law enforcement; and submit the image's fingerprint to Take It Down to request removal from participating platforms.
Robin Panigi shared her story with WXYZ for one reason: to warn others.
She lost $800, money she will not get back. She also gained a code word and a family protocol that will protect her family from future attempts.
“I was awoken from my sleep and wasn't really thinking straight,” she said. That is the vulnerability. That is what scammers target.
The code word takes sixty seconds to establish. It costs nothing. It works. Every family reading this article has the same opportunity Panigi took. Set the code word tonight. Audit your social media privacy settings this weekend. Have the sextortion conversation with your teens.
The scams are not stopping. The scammers are getting better. But families who prepare are families who do not lose their savings to a voice clone.