AI Voice Cloning Drives Social Media Scams to $2.1B in 2025
Americans lost $2.1 billion to social media scams in 2025, eight times the 2020 figure, with AI voice cloning now leading the threat list, according to FTC data released this month.

What happened
In 2020, Americans reported losing about $258 million to scams that began on social media. In 2025, they reported losing $2.1 billion, eight times that figure, according to data the Federal Trade Commission released on April 27, 2026. More than half of those losses came from investment scams delivered through Facebook ads and direct messages. The most common AI mechanism behind the surge is voice cloning: replicating a person's voice from as little as three seconds of audio harvested from a public social media post.
The FTC's April 27 release
On April 27, 2026, the Federal Trade Commission released two parallel documents: a formal press release announcing new social media scam data, and a detailed Data Spotlight visualization breaking down the numbers by platform, scam type, and demographic group.
The headline figures are stark:
- $2.1 billion in reported losses from social media scams in 2025
- Up from $258 million in 2020, an eightfold increase
- Nearly 30% of all consumers who lost money to a scam in 2025 said the contact began on social media
- $1.1 billion of the total came from investment scams promoted through social media ads, direct messages, and group chats
- 60% of romance scam victims said the fraud started on a social media platform
Platform breakdown
According to the FTC's Data Spotlight, the largest single scam-origin platform was Facebook, followed by WhatsApp and Instagram. All three are owned by Meta, meaning a single company's platforms account for the vast majority of social media scam origins.
The FTC noted that scammers are using Facebook's advertising tools to target specific demographics with investment fraud ads. They are using WhatsApp's end-to-end encryption to carry on long-term romance scams without detection. And they are using Instagram's direct messaging to establish trust with potential victims before moving them to WhatsApp or Telegram.
Demographic patterns
The FTC data also revealed significant demographic patterns in social media scams:
- Adults aged 18 to 29 were more likely to report losing money to social media scams than older adults, but older adults lost more money per incident.
- Investment scams disproportionately targeted men aged 25 to 40 through Facebook ads promising rapid crypto returns.
- Romance scams disproportionately targeted women over 50 through Facebook and Instagram direct messages.
- Hispanic and Black consumers reported higher rates of loss from social media scams than non-Hispanic white consumers.
The FTC's warning
In the press release accompanying the data, the FTC's Bureau of Consumer Protection issued a direct warning:
“Scammers are using the very tools that connect us, social media, to disconnect us from our money. The platforms have made it easier than ever to reach millions of people instantly, but that same ease of access is being weaponized by fraudsters. Consumers should treat any unsolicited message about money, investing, or emergencies as highly suspect, regardless of how legitimate the profile looks.”
The FTC also noted that the $2.1 billion figure is almost certainly an undercount. The Consumer Sentinel Network only captures reported losses, and fraud is notoriously underreported, especially for romance scams, where victims may feel shame or embarrassment.
The ABC30 report: “Don't think you're too savvy to be scammed”
On April 29, 2026, two days after the FTC released its data, ABC30 Fresno (KFSN-TV) aired a segment by reporter Vanessa Vasconcelos titled “Watching Your Wallet: How to avoid AI-generated scams.” The segment paired the FTC's national data with an unusually candid expert source.
Ron Kerbs, founder and CEO of the online-safety platform Kidas, told Vasconcelos that even he, the head of a company that builds scam-detection technology, was nearly fooled by an AI voice-cloning scam.
“I run a scam detection company, and I was almost scammed three weeks ago,” Kerbs said.
The airline customer service trap
Kerbs explained that he had searched online for a customer-service phone number after his flight was canceled. The top result in his search engine was a phone number, but it was not the airline's real number. It was a number set up by scammers to intercept exactly that traffic.
“They set it up intentionally to scam people like me who would call those numbers in order to change their flight or in order to get a refund,” Kerbs told Vasconcelos.
This is a known attack vector called search engine poisoning, or SEO poisoning. Scammers use content farms and AI-generated websites to rank for high-value search terms such as an airline's customer service line or a bank's fraud department. The phone numbers they list route to call centers staffed by scammers, or increasingly to AI voice bots that handle the initial conversation before extracting account information or funds.
The voice cloning threat
Kerbs told ABC30 that voice cloning is now the most common AI mechanism in social media and phone scams.
“A three-second clip from a TikTok or Instagram video is enough to clone a voice,” Kerbs said. “Scammers are using those clones to call grandparents pretending to be grandchildren in distress, to call finance departments pretending to be CEOs authorizing wire transfers, and to call customer service lines pretending to be account holders.”
The ABC30 segment included a live demonstration of voice-cloning technology. Vasconcelos noted that the cloned voice was “almost indistinguishable” from the original speaker.
Kerbs's advice to viewers
Kerbs offered three direct pieces of advice to ABC30 viewers:
- “Don't think that you're too savvy to be scammed.” The scams are sophisticated, and the people running them are professionals. No one is immune.
- “Take your time, do your research and don't let anyone rush you into making a financial decision.” Urgency is the single most reliable signal that you are being scammed. Real emergencies will survive a ten-minute pause.
- “If something feels off, stop. Verify through a different channel.” If a family member calls in distress, call them back on the number you already have. If a bank calls about fraud, hang up and call the number on the back of your card.
Why it matters
The FTC's $2.1 billion figure and the Kerbs interview are not abstract data points. They document a structural shift in how fraud works: AI voice cloning has lowered the barrier to a class of attack that once required access to state-level technology, and social media has become the distribution channel that makes those attacks cheap to scale.
What AI voice cloning is and how it works
AI voice cloning is a type of generative artificial intelligence that learns to replicate a specific person's voice from a small sample of audio. The process works in four stages.
- Data collection: The scammer harvests a short audio clip of the target's voice from social media videos, voicemail greetings, podcast appearances, or public speeches. As little as three seconds of audio can be enough for current models.
- Model training: The clip is fed into a voice-cloning model, which analyzes the unique characteristics of the voice: pitch, rhythm, accent, pronunciation patterns, and emotional range.
- Speech generation: Once trained, the model can generate new speech in the target's voice. The scammer types a script and the model produces an audio file of the target saying exactly those words.
- Real-time synthesis: Advanced tools can operate in real time, allowing scammers to conduct live phone calls using the cloned voice. The scammer speaks into a microphone, and the tool transforms their speech into the target's voice in milliseconds.
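The stages above can be sketched as a toy pipeline. This is an illustrative sketch only: VoiceProfile, extract_profile, and synthesize are invented names, the "profile" is a handful of placeholder traits rather than a learned embedding, and no real cloning library is referenced.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VoiceProfile:
    # Toy stand-in for a learned voice embedding: a few scalar traits
    # instead of the pitch/rhythm/accent features a real model extracts.
    pitch_hz: float
    words_per_minute: float
    accent: str

def extract_profile(sample_seconds: float) -> Optional[VoiceProfile]:
    # Stages 1-2 (data collection + training): characterize the voice
    # from a harvested clip. Current models need only a few seconds;
    # this toy just enforces the three-second minimum described above.
    if sample_seconds < 3.0:
        return None
    return VoiceProfile(pitch_hz=180.0, words_per_minute=150.0, accent="US")

def synthesize(profile: VoiceProfile, script: str) -> str:
    # Stage 3 (speech generation): a real model emits audio in the
    # cloned voice; this toy just tags the script with the profile.
    return f"[{profile.pitch_hz:.0f} Hz, {profile.accent}] {script}"

assert extract_profile(2.0) is None      # too little audio to clone
profile = extract_profile(3.5)           # a short TikTok clip is enough
print(synthesize(profile, "Grandma, I need help."))
```

Stage 4, real-time synthesis, would wrap synthesize in a streaming loop fed by a live microphone; the point of the sketch is the structure of the pipeline, not the audio math.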
Why voice cloning is so dangerous
Voice cloning is dangerous for three reasons.
- The barrier to entry is near zero. Consumer-grade voice-cloning tools are available for free or for a few dollars per month. Some models run on a standard laptop.
- The results are highly convincing. Modern models can replicate emotional inflection, regional accents, and even characteristic speech patterns of the target.
- Voice is a primary trust signal. Humans are wired to trust voices we recognize. When a grandparent hears what sounds exactly like their grandchild's voice in distress, the emotional response overrides skepticism.
Real-time deepfake voice in live calls
The most sophisticated scams now use real-time voice cloning during live phone calls. The scammer does not prerecord a message. They speak into a microphone, and the AI transforms their voice into that of the victim's loved one in real time. This means the scammer can respond to questions naturally, adjust the emotional tone based on the victim's reactions, and extend the call for as long as necessary to extract money.
This level of sophistication was once available only to state actors and major criminal enterprises. Today it is available to anyone with a credit card and an internet connection.
What the $2.1 billion number actually means
The FTC's $2.1 billion figure represents reported losses from consumers who filed a complaint with the FTC's Consumer Sentinel Network. It includes investment scams that began with a Facebook ad or a WhatsApp direct message, romance scams that started with an Instagram message, e-commerce fraud on Facebook Marketplace, and grandparent voice-clone calls that started with a Facebook friend request.
The number is almost certainly a significant undercount. The FTC estimates that fewer than 10% of fraud victims actually file a complaint. Shame, embarrassment, and the belief that nothing can be recovered keep the vast majority of cases unreported. Industry analysts estimate the real total of social media scam losses, including unreported cases and those reported to other agencies, was likely between $5 billion and $7 billion for 2025.
That estimate is consistent with the FBI's IC3 report showing $20.9 billion in total cybercrime losses, and with Deloitte's projection of $40 billion in AI-driven fraud losses by 2027.
Why social media is the perfect scam vector
The FTC data confirms what researchers have long suspected: social media platforms are not just where scams happen. They are where scams thrive.
- Low cost, massive reach. Scammers can create hundreds of accounts for free and target specific demographics through Facebook's advertising tools for a few dollars per day.
- Built-in trust signals. Social media profiles provide a profile photo, a history of posts, mutual friends, and engagement metrics. Scammers can fabricate all of these using AI-generated images and bot networks.
- Private, unmonitored messaging. Once a connection is established, scammers move the conversation to Messenger, WhatsApp, or Telegram. These channels are encrypted and not monitored for scam content.
- Emotional targeting. Social media gives scammers access to personal information: relationship status, interests, recent life events, and employment history. They use this to tailor their approach to each specific target.
- Algorithm amplification. Social media algorithms maximize engagement. Emotionally charged scam content generates high engagement and is promoted organically, without the scammer paying for additional distribution.
How to protect yourself
The AuthentiLens editorial team has distilled the FTC's new data, the Kerbs interview, and our broader case-file research into six concrete protections. These steps work against voice-clone scams, investment fraud delivered through social media, and the broader pattern of AI-powered social engineering that the $2.1 billion figure documents.
1. Set a family code word for any call asking for money
This is the single most effective defense against voice-clone scams. Choose a word or short phrase that would not be guessable from social media: not your pet's name, not your street, not your birthday. Agree on it in person. If a relative calls with an emergency, ask for the code word before taking any action. Do this even if the call appears to be a video call, since deepfakes can replicate faces as well as voices.
“This is the one step that would defeat 90% of the grandparent scams we see,” Kerbs told ABC30. “A real grandchild knows the word. A voice clone does not.”
2. Never call a customer service number found through a search engine
Kerbs's near-miss involved exactly this attack: a poisoned search result for an airline support line. Scammers use AI-generated content farms to rank for high-value search terms. The phone numbers in those results route to scam call centers.
Instead, always navigate to the company's official website by typing the URL directly or going through the company's official app. The contact information on the official website is reliable. The contact information in search results may not be.
3. Treat any direct message offering investment coaching as a scam
The FTC's $1.1 billion investment-scam figure for 2025 is dominated by Facebook ads, wrong-number texts, and group-chat invites that offer to teach strangers how to invest in cryptocurrency.
There is no legitimate version of “I'll teach you how to invest” from a stranger online. Real financial advisors do not cold-message people on WhatsApp. Real investment platforms do not advertise through Facebook direct messages. If the offer starts in a direct message, it is a scam.
4. Slow down before sending any money
Urgency is the single most reliable signal that you are being scammed. “Your account will be frozen in one hour.” “The investment closes tonight.” “I need bail money before the court closes.”
Real emergencies will survive a ten-minute pause. A scam will not. Take the pause. Call someone you trust. Look up the institution directly. Then decide.
5. Verify any distress call through a separate channel
If your child or grandchild calls from an unknown number in distress, do not respond to the unknown number. Call the number you already have on file for that relative. If you cannot reach them, call another family member. A real emergency will be confirmed by a second call. A voice-clone scam will not.
6. Scan it with AuthentiLens
You are not expected to become an AI-detection expert. When you receive a suspicious text, social media profile, ad, video clip, or audio message, AuthentiLens can help.
- Paste the text into AuthentiLens. We will flag scam language patterns, urgency cues, and AI-generation signals.
- Upload the image into AuthentiLens. We will detect AI-generated profile photos, fabricated documents, and manipulated evidence.
- Upload the audio clip. We will analyze voice-clone artifacts including unnatural frequency patterns, inconsistent breathing, and synthesized speech markers.
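As an illustration of what "inconsistent breathing" and flat delivery mean in practice, here is one crude heuristic: natural speech shows frame-to-frame energy jitter from breathing and level changes, while some synthetic audio is unnaturally uniform. This is a toy sketch under that assumption, not AuthentiLens's actual detection method, and the two signals are synthetic stand-ins for real recordings.

```python
import math
import random

def frame_rms(samples, frame_len=256):
    # RMS energy of each non-overlapping frame of a mono signal.
    return [
        math.sqrt(sum(s * s for s in samples[i:i + frame_len]) / frame_len)
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def energy_jitter(samples):
    # Coefficient of variation of frame energy. Breathing and natural
    # level changes make real speech jittery; an unnaturally flat value
    # is one crude signal (among many) that audio may be synthetic.
    rms = frame_rms(samples)
    mean = sum(rms) / len(rms)
    var = sum((r - mean) ** 2 for r in rms) / len(rms)
    return math.sqrt(var) / mean

# Toy stand-ins for recordings: a perfectly steady 220 Hz tone vs. the
# same tone with slow amplitude wobble mimicking breath/level variation.
random.seed(0)
steady = [math.sin(2 * math.pi * 220 * t / 8000) for t in range(8000)]
wobbly = [
    s * (1.0 + 0.3 * math.sin(2 * math.pi * 3 * t / 8000)
         + 0.05 * random.uniform(-1, 1))
    for t, s in enumerate(steady)
]

assert energy_jitter(steady) < energy_jitter(wobbly)
```

A production detector combines many such features (spectral, prosodic, phase) with a trained classifier; no single heuristic like this is reliable on its own.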
If you have already been scammed
If you or someone you know has lost money to a social media scam, the most important step is to report it. Reporting does not just help you: it helps the FTC build cases against the operations running these scams.
- File a complaint with the FTC at ReportFraud.ftc.gov. Include the platform, the amount lost, and any account information you have for the scammer.
- File a complaint with the FBI's IC3 at ic3.gov. The FBI investigates large-scale and cross-jurisdictional fraud operations.
- Contact your bank or payment app immediately. They may be able to reverse the transaction or freeze the recipient's account.
- Tell your family. Shame is the scammer's greatest ally. You did nothing wrong. You were targeted by professionals using the most advanced tools available.
Sources
- Watching Your Wallet: How to avoid AI-generated scams — ABC30 Fresno (KFSN-TV)
- New FTC Data Show People Have Lost Billions to Social Media Scams — Federal Trade Commission
- Reported losses to scams on social media eight times higher than in 2020 — FTC Data Spotlight
- Consumers lost $2.1 billion to social media scams in 2025, FTC reports — TechCrunch
- Social media scams cost Americans more than $2.1 billion last year, according to the FTC — Tom's Guide
