FBI: AI Deepfake Scams Have ‘Skyrocketed,’ Voice Cloning Leads
FBI Charlotte’s Supervisory Special Agent James Kaylor warns AI deepfake scams ‘have just skyrocketed,’ with voice cloning driving the biggest share of cases and Resemble AI estimating $1.3 billion in 2025 deepfake losses.

What happened
James Kaylor is a Supervisory Special Agent at the FBI's Charlotte field office. He has watched the prevalence of AI tools inside ordinary scam reports go from rare to dominant in the span of two years.
“You don't have to be super tech savvy to use these AI tools,” Kaylor told CBS 17 reporter Mary Smith. “So, the prevalence of them in these scams has just skyrocketed.”
The CBS 17 investigation cited the Resemble AI 2025 Deepfake Threat Report, a private-sector analysis estimating that Americans lost nearly $1.3 billion to deepfake-related scams in 2025. The FBI's own Internet Crime Complaint Center (IC3), in its 2025 annual report released in April 2026, documented $893 million in AI-enabled fraud. It was the first time in 25 years of reporting that the FBI broke AI out as its own fraud category. As we reported in our coverage of the IC3's 2025 annual report, the agency noted the true total is almost certainly higher due to underreporting.
Voice cloning is now the lead threat vector
Kaylor singled out voice cloning as the mechanism driving the biggest share of current AI scams. Modern voice-cloning models can replicate a person's voice from as little as three seconds of audio. Scammers harvest that audio from public social media videos, voicemail greetings, podcast appearances, and news interviews.
The technology works in real time: the scammer speaks into a microphone and the AI transforms their voice into the target's voice in milliseconds, fast enough to hold a live conversation, answer questions naturally, and adjust emotional tone to the victim's reactions. Part Two below explains how the technology works and how cheap it has become.
The language barrier is gone
Kaylor described a second shift that has changed the fraud landscape permanently: AI has removed the language barrier that once limited foreign-based scam operations.
“You don't have language barriers to generate a conversation with somebody,” Kaylor said. “So AI is going to help you make sure the syntax is right, make sure the grammar is right.”
The grammatical errors and awkward phrasing that once made phishing emails easy to identify are gone. AI generates flawless English, or any other language, on demand. As we noted in our guide to signs of a phishing email, the grammar tell was one of the most reliable defenses ordinary consumers had. It no longer applies.
The senior officials impersonation campaign
The CBS 17 segment was not the FBI's first public warning on this threat. In May 2025, the IC3 issued a formal Public Service Announcement (PSA 250515) warning that scammers were using AI-based voice cloning to impersonate senior U.S. government officials. The operation combined caller ID spoofing with voice clones generated from publicly available audio, including speeches, press conferences, and media appearances. Victims were told they owed fines or fees, payable in gift cards or cryptocurrency.
The PSA noted that the technology had advanced to the point where even the officials' own staff could not reliably distinguish the cloned voice from the real one in a phone call.
The $1.3 billion estimate in context
The Resemble AI figure and the FBI's $893 million differ because they use different attribution standards. The IC3 only counts a loss as AI-enabled when investigators can definitively prove AI involvement. Resemble AI's methodology captures a wider range of cases, including those where AI use is likely but not formally confirmed.
Both figures share the same ceiling problem: fewer than 10 percent of fraud victims file a complaint, a gap examined in Part Three below. As we reported in our coverage of FTC data showing $2.1 billion in social media scam losses, voice cloning is now the leading mechanism in romance fraud and grandparent scams on social platforms as well.
Part Two: How voice cloning actually works
For anyone encountering the term for the first time, here is a plain explanation of the technology and why it has become the scammer's most powerful tool.
The three-second rule
AI voice-cloning technology can replicate a person's voice using as little as three seconds of audio. That is shorter than most voicemail greetings. It is shorter than a single sentence in a TikTok video. Scammers harvest these short clips from public social media videos, voicemail greetings, podcast appearances, news interviews, corporate earnings calls, webinars, and family videos posted online. Once they have the sample, they can feed it into a voice-cloning model and generate new speech in that voice within minutes.
The technical process
Voice cloning works through a type of generative AI called a neural vocoder. The process has three stages. First, feature extraction: the AI analyzes the audio sample to identify unique characteristics of the target's voice, including pitch range, formant frequencies, cadence patterns, accent markers, breathing rhythms, and emotional inflection. Second, model conditioning: those extracted features tune a pre-trained speech generation model to produce speech that sounds like the specific target. Third, real-time synthesis: the conditioned model generates new speech in the target's voice from any text input, with latencies as low as 200 milliseconds.
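For readers who think in code, here is a schematic sketch of those three stages in Python. It is illustrative structure only, not a working cloning system: the function bodies are stubs, and the names are ours rather than any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class VoiceFeatures:
    """Stage 1 output: characteristics extracted from a short sample."""
    pitch_range_hz: tuple[float, float]
    formant_freqs_hz: list[float]   # resonances that shape vowel sounds
    cadence_wpm: float              # speaking rate
    accent_markers: list[str]       # phoneme-level pronunciation habits

def extract_features(audio_sample: bytes) -> VoiceFeatures:
    # Stage 1: feature extraction from as little as ~3 seconds of audio.
    # (Stub values; a real system runs a speaker-encoder network here.)
    return VoiceFeatures((85.0, 180.0), [500.0, 1500.0, 2500.0], 150.0, [])

def condition_model(base_model: str, features: VoiceFeatures) -> str:
    # Stage 2: tune a pre-trained speech model toward the target's voice.
    return f"{base_model}|conditioned-on-speaker"

def synthesize(conditioned_model: str, text: str) -> bytes:
    # Stage 3: generate new speech in the target's voice from any text,
    # with latencies as low as ~200 ms in real-time systems.
    return b"<waveform bytes>"
```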
Real-time cloning in live phone calls
The most sophisticated scams now use real-time voice cloning during live calls. The scammer speaks into a microphone and the AI transforms their voice into the target's voice in milliseconds. This lets the scammer respond to questions naturally, adjust emotional tone based on the victim's reactions, extend the call as long as needed, and handle unexpected questions without breaking character. Three years ago this capability was limited to state-level actors. Today it costs as little as $10 a month. Some services are free.
How the technology has evolved
- 2022: Required minutes of high-quality audio. Results were robotic and clearly fake.
- 2023: Required 30 to 60 seconds. Results were recognizable but still had artifacts.
- 2024: Required 10 to 15 seconds. Results were convincing to most listeners.
- 2025: Requires 3 seconds. Results are indistinguishable from real speech to the average listener.
- 2026 (current): Real-time cloning is widely available. Scammers can conduct live conversations in another person's voice.
Security researchers expect that by 2027, voice cloning will be indistinguishable even to forensic analysis, making audio evidence unreliable for law enforcement.
Why it matters
Kaylor's warning captures what researchers and fraud investigators have been watching build for two years: the traditional authenticity signals that consumers learned to rely on are now synthesizable at near-zero cost. A name, a phone number, a voice, and a face on a video call are no longer proof of identity.
The end of the grammar tell
For two decades, consumer protection organizations taught people to identify scam emails by their grammatical errors and awkward phrasing. That advice worked because most overseas scammers operated in a second language and their errors were visible.
AI large language models have made that tell obsolete. An email that arrives today may be grammatically perfect, stylistically consistent with previous legitimate communications, and contextually relevant to your actual account or relationship. It may still be a scam. The content quality of a message is no longer evidence of its legitimacy.
Kaylor put it plainly: “If these scammers get into somebody's email accounts, they can start sending emails and requesting checks from the company to the bad guys using AI-generated emails.”
The end of the voice tell
Consumers were also taught to be suspicious of calls from unknown numbers but to trust calls from numbers they recognized. Voice cloning has eliminated that heuristic.
A call appearing to come from your daughter's number may be a scammer using caller ID spoofing. The voice on the other end may be a real-time clone generated from 15 seconds of audio she posted to social media last week. Our guide on how to tell if a phone call is a scam outlines the behavioral signals that survive even when the voice itself sounds real.
Deepfake video calls are now in play
The same technology applies to video. Real-time face-swap deepfakes are now being used in video conference calls to impersonate executives, lawyers, and government officials. The IC3 PSA documented cases where victims on a video call with what appeared to be a known contact were actually speaking with an AI-generated face-swap running over the scammer's live camera feed. Our guide on how to tell if a video is a deepfake covers the visual artifacts that current technology still struggles to hide, but those artifacts are narrowing with each model generation.
The acceleration is documented across multiple data sources
The Resemble AI report puts deepfake fraud losses on a compound annual growth trajectory of more than 150 percent since 2022, when the total was approximately $200 million; Part Three below works through the projection. Deloitte separately projects total AI-driven fraud losses will reach $40 billion globally by the end of the decade.
The individual cases are already reaching those scales. As we reported in our coverage of the Long Island retiree who lost $300,000 to an AI crypto scam and in our earlier report on Equifax's finding that AI fraud is now industrialized, these are not isolated incidents. They are the visible tip of a system operating at industrial scale.
AI-generated personas are the delivery layer
Voice cloning and deepfake video are often combined with fully AI-generated social media personas that establish trust before the call is ever made. A victim who has been chatting for weeks with what they believe is a real person is far more vulnerable when that “person” calls with an emergency. Our guide on AI personas on social media explains how these complete fabricated identities are constructed and deployed.
As we also reported in our coverage of the AI wedding ring scam targeting lost-item posters, the same AI image-generation playbook appears across fraud categories that seem unrelated on the surface. The common thread is the use of AI to manufacture credibility faster than a victim can verify it.
Part Three: What the numbers mean
The CBS 17 investigation cited multiple data points from different sources. Understanding what each figure measures and where it comes from is essential for grasping the real scale of the problem.
$1.3 billion in deepfake losses (Resemble AI)
Resemble AI's 2025 Deepfake Threat Report estimated that Americans lost nearly $1.3 billion to deepfake-driven fraud in 2025. The $1.3 billion figure covers voice-clone grandparent scams, CEO voice-clone wire fraud, deepfake video calls in corporate settings, and AI-generated phishing campaigns. Deepfake fraud losses have grown at a compound annual rate of over 150 percent since 2022, when the total was approximately $200 million. If that rate continues, deepfake losses could exceed $3 billion by 2027, aligning with Deloitte's projection of $40 billion in total AI-driven fraud losses.
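A back-of-envelope check of that projection, taking the report's 2022 and 2025 endpoints at face value (the arithmetic here is ours, not Resemble AI's): the endpoints imply growth of roughly 87 percent a year, and even that rate, lower than the report's headline figure, pushes the 2027 total past $3 billion.

```python
# Compound growth implied by the report's endpoints (illustrative).
start, end, years = 0.2, 1.3, 3          # $ billions, 2022 -> 2025
cagr = (end / start) ** (1 / years) - 1  # ~0.87, i.e. ~87% per year
proj_2027 = end * (1 + cagr) ** 2        # ~$4.5 billion
print(f"Implied CAGR: {cagr:.0%}; projected 2027 losses: ${proj_2027:.1f}B")
```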
$893 million in AI-enabled fraud (FBI IC3)
The FBI's 2025 IC3 Annual Report documented $893 million in AI-enabled fraud. This is a narrower figure because the FBI only counts cases where investigators can definitively prove AI was involved. The breakdown by category:
- Investment scams with AI component: $632 million
- Business email compromise with AI-generated content: $30 million
- Other AI-enabled fraud (romance, tech support, impersonation): $231 million
This is the first time in 25 years of IC3 reporting that the FBI broke AI out as its own fraud category. The agency noted the true number is “almost certainly higher” due to underreporting and attribution challenges.
$2.1 billion in social media scams (FTC)
As we reported in our coverage of the FTC's April 27 release, Americans lost $2.1 billion to scams that began on social media in 2025. Of that total, $1.1 billion came from investment scams promoted through social media ads and direct messages. Sixty percent of romance scam victims said the fraud started on a social media platform. Facebook was the largest single scam-origin platform, followed by WhatsApp and Instagram.
The underreporting problem
All of these figures share a ceiling problem: fewer than 10 percent of fraud victims file a complaint. Shame, embarrassment, and the belief that nothing can be recovered keep most cases in the dark. For romance scams specifically, the underreporting rate is even higher. For grandparent voice-clone scams, elderly victims may not know how to file a complaint or may be protecting the “grandchild” they believed was in trouble. The real total of AI-driven fraud losses in 2025 is likely between $5 billion and $10 billion when unreported cases are included.
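A rough scaling exercise shows how that range relates to the headline figures. This is our illustration, not any agency's methodology; it assumes dollar losses are reported at a somewhat higher rate than incidents, since large losses are more likely to be reported.

```python
# If only a fraction of dollar losses reach the IC3, the implied
# true total scales inversely with the reporting rate (illustrative).
reported_b = 0.893                     # IC3's provable AI-enabled fraud, $B
for rate in (0.10, 0.15, 0.20):
    print(f"{rate:.0%} of losses reported -> ~${reported_b / rate:.1f}B total")
```

Reporting rates between roughly 9 and 18 percent of dollar losses put the implied total inside the $5 billion to $10 billion range cited above.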
Part Four: The three tells that no longer work
Kaylor's warning marks the most important shift in the fraud landscape in a decade. Three verification habits that consumers spent years learning no longer work.
The end of the grammar tell
For years, consumer protection organizations taught people to identify scam emails by their grammatical errors and awkward phrasing. As covered above, AI large language models have made that tell obsolete, and the erosion extends beyond email: the "wrong number" text that once contained obvious errors is now flawless too. As we noted in our guide to signs of a phishing email, the grammar tell is an artifact of the past.
The end of the voice tell
The voice side has eroded just as completely. There is no telltale robotic sound to listen for anymore: modern voice clones breathe, hesitate, use filler words, and laugh and cry convincingly. Our guide on how to tell if a phone call is a scam outlines the behavioral signals that survive even when the voice itself sounds real.
The end of the face tell
The same erosion has reached video calls. In the $25 million Arup case in Hong Kong in 2024, scammers used a deepfake video call to impersonate the company's CFO. Multiple employees saw the fake CFO on a video call, heard his voice, and authorized the wire transfer. No one noticed anything wrong. Deepfake face-swap technology now produces results convincing enough to fool finance teams in real time. Our guide on how to tell if a video is a deepfake covers the visual artifacts that current technology still struggles to hide, but those artifacts are narrowing with each model generation.
Part Five: How to protect yourself
The FBI's Charlotte field office, the IC3, and the CBS 17 investigation together point to seven concrete protections. None of them require technical knowledge. All of them are actionable today.
1. Set a family code word now
This is the single most effective defense against voice-clone scams. Choose a word or short phrase that cannot be guessed from your social media. Not your pet's name, not your street, not your birthday, not your mother's maiden name. Agree on it in person with the people you trust most.
If a relative calls with an emergency, ask for the code word before you take any action. Do this even on a video call; deepfakes can replicate faces as well as voices.
“This is the one step that would defeat 90% of the grandparent scams we see,” Kaylor told CBS 17. “A real grandchild knows the word. A voice clone does not.”
Implementation tip: choose a word that is easy to remember but hard to guess. “Blue elephant” is better than “Fluffy” (the dog's name, which may be on social media). Test the code word by asking someone who knows you well to guess it. If they can, it is not strong enough.
2. Audit what you publish before a scammer does
Kaylor's specific warning: every public video, voice clip, and family photo is potential training data.
“Think about how many videos you're putting out there. Think about the information you're putting out there and whether or not that could be used against you at some point or those around you,” Kaylor said.
- Set Instagram, TikTok, and Facebook accounts to private. Public profiles give scammers unlimited access to your voice and image.
- Avoid posting children's voices in public videos. A child's voice can be cloned just as easily as an adult's, and scammers use children's voices to target grandparents.
- Review your public posts from the perspective of a scammer. If you can hear enough of your own voice to recognize it, so can a voice-cloning model.
- Consider removing voicemail greetings that include your name. A generic “please leave a message” is safer than “Hi, this is [name], I can't take your call right now.”
3. Treat any inbound request for a wire or payment as a deepfake until proven otherwise
Kaylor's warning about AI-generated business emails is not hypothetical. The old tells (grammar errors, awkward phrasing) are gone. The new tells are contextual: does the email ask for something unusual, create urgency, or come from an address that is slightly off from the real one? Any email asking for a wire transfer, a payment instruction change, or a fast favor should be verified on a phone number you already have, not a number supplied in the email or by the caller. For businesses, require multi-person approval for any wire transfer regardless of how convincing the request appears.
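The "slightly off" address can be checked mechanically. Here is a minimal sketch, with a hypothetical company domain of our invention, that flags sender domains which nearly, but not exactly, match a trusted one; this near-match is the classic look-alike pattern in business email compromise.

```python
import difflib

TRUSTED_DOMAINS = {"acme-corp.com"}  # hypothetical company domain

def lookalike_risk(sender: str) -> str:
    """Flag addresses whose domain nearly matches a trusted one."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return "exact match"
    for trusted in TRUSTED_DOMAINS:
        ratio = difflib.SequenceMatcher(None, domain, trusted).ratio()
        if ratio > 0.8:  # 0.8 is a starting point, not a tuned threshold
            return f"SUSPICIOUS: {domain} resembles {trusted}"
    return "unknown domain"

print(lookalike_risk("cfo@acme-corp.com"))   # exact match
print(lookalike_risk("cfo@acrne-corp.com"))  # SUSPICIOUS ('rn' mimics 'm')
print(lookalike_risk("cfo@example.org"))     # unknown domain
```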
4. Use the call-back rule
If anyone asks you to act inside the call you are already on, whether a relative in distress, a bank fraud agent, a government representative, or a utility company, hang up and call them back on a number you already know.
- For a relative: call the number you have saved for them, not the number that just called you.
- For a bank: call the number on the back of your card.
- For a government agency: look up the official number on their .gov website.
- For a utility company: call the number on your latest bill.
The call-back rule defeats real-time voice cloning because the scammer cannot answer at the legitimate number. If you call your daughter's real number and she answers, you know the earlier call was a fake.
5. Slow down before sending money
Urgency is the single most reliable signal that you are being scammed. Scammers create urgency because a panicked person does not think clearly.
- “Your account will be frozen in one hour.”
- “I need bail money before the court closes.”
- “The IRS will issue a warrant if you don't pay immediately.”
- “The puppy will be given to someone else if you don't pay now.”
Real emergencies will survive a ten-minute pause. A scam will not. Take the pause. Call a family member. Call the bank. Call the police non-emergency line. A legitimate urgent situation will still be urgent in ten minutes.
6. Report to the FBI's IC3
If you or someone you know has been targeted or victimized by an AI voice-clone or deepfake scam, report it at IC3.gov. The FBI's Charlotte, San Francisco, and New York field offices have made AI-enabled fraud a 2026 enforcement priority. Include in your report: the phone number that contacted you, the date and time of contact, the content of any messages or voicemails, the name of anyone they claimed to be, any payment information, and any recordings or screenshots. Even if you did not lose money, your report helps the FBI build a case and could be the missing piece that identifies a pattern across multiple victims.
7. Scan it with AuthentiLens
When you receive a suspicious message, audio clip, image, or video, run it through AuthentiLens before you reply, forward it, or act on it.
- Paste the text to flag scam language patterns, urgency cues, and AI-generation signals.
- Upload the image to detect AI-generated profile photos and fabricated documents.
- Upload the audio to analyze voice-clone artifacts, including unnatural frequency patterns, inconsistent breathing, and synthesized speech markers.
- Upload the video to detect deepfake face-swaps, lip-sync mismatches, and temporal inconsistencies.
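For the curious, here is a taste of what "unnatural frequency patterns" can mean at the signal level. The example is ours and deliberately simplified, not AuthentiLens's actual pipeline: it computes spectral flatness, one classic audio feature among the many that detectors feed into learned models.

```python
import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Geometric-to-arithmetic mean ratio of the power spectrum:
    1.0 for a perfectly flat (noise-like) spectrum, near 0 for a pure tone."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12  # epsilon avoids log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

# Demo on stand-ins for audio: a pure tone vs. broadband noise.
sr = 16_000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
noise = np.random.default_rng(0).standard_normal(sr)

print(f"tone flatness:  {spectral_flatness(tone):.3f}")   # ~0.000
print(f"noise flatness: {spectral_flatness(noise):.3f}")  # ~0.5-0.6
```

Real detectors track how dozens of features like this behave frame by frame across a recording; a cloned voice can drift into statistically implausible territory even when it sounds natural to the ear.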
You get 5 free scans to start. AuthentiLens Pro is $9.99 per month for unlimited scans across every content type.
Part Six: What comes next
The FBI's warning is not the end of the story. It is the beginning of a new phase in the fight against fraud.
The detection arms race
As voice cloning improves, detection technology must improve faster. The good news is that detection is keeping pace, for now. Companies including AuthentiLens are developing tools that identify synthetic speech and video with high accuracy. But the arms race is asymmetric: fraudsters only need to succeed once. Defenders need to succeed every time. Fraudsters are already investing in evasion techniques designed to fool detection algorithms.
The regulatory response
The FBI's warning is likely to be followed by new regulations requiring disclosure of AI-generated content in calls, messages, and videos. Several states are considering deepfake disclosure laws. The FCC has opened a proceeding on AI-generated robocalls. But regulation takes time. Scammers do not wait for Congress.
The individual response
The most important response is individual. Families who set code words will not lose money to voice-clone scams. People who audit their public social media posts will make themselves harder to clone. Consumers who slow down and verify will not be rushed into sending money.
The FBI's warning is clear: AI deepfake scams have skyrocketed, and voice cloning leads. But the tools to defend against them exist. The question is whether people will use them before they become a victim.
Sources
- CBS 17 Investigates: The rise of AI deepfakes and scams — CBS 17 (WNCN) Raleigh
- Use of AI deepfakes in scams on the rise — CBS 17 (YouTube)
- Senior US Officials Impersonated in Malicious Messaging Campaign (PSA) — FBI Internet Crime Complaint Center
- FBI warns senior US officials are being impersonated using texts, AI-based voice cloning — Cybersecurity Dive
- Scammers are using AI to impersonate senior officials, warns FBI — Malwarebytes
