FBI: AI Scams Drained $893M From Americans in 2025
The FBI's 2025 Internet Crime Report broke out AI-enabled fraud for the first time in the bureau's 25-year cybercrime-reporting history, and the opening figure was $893 million in direct losses across 22,364 complaints.

What happened
For 25 years, the FBI's Internet Crime Complaint Center (IC3) has published an annual report documenting the scale of cybercrime in America. The report has tracked the rise of phishing, the explosion of ransomware, and the epidemic of business email compromise. Until this year, one category had never appeared in its pages: artificial-intelligence-enabled fraud.
That changed in April 2026, when the FBI released the 2025 IC3 Annual Report. For the first time in the bureau's quarter-century of cybercrime reporting, AI scams were broken out as their own distinct category. The opening figure was $893 million in documented direct losses, across 22,364 complaints.
That number is almost certainly a dramatic undercount. The FBI only counts cases where victims file a complaint, and where the victim or the investigator can definitively prove that AI was involved in the scam. As voice clones grow indistinguishable from real voices and deepfake videos become impossible to flag with the naked eye, the “prove it was AI” bar gets higher every month.
The big picture: cybercrime losses hit a record $20.9 billion
Before zooming in on the AI-specific numbers, it is worth understanding the overall landscape documented by the IC3 in 2025. The FBI received 1,008,597 complaints last year, the first time the annual volume crossed the one-million threshold. Reported losses totaled $20.9 billion, a steep increase from $16.6 billion in 2024, roughly a 26% year-over-year jump in dollar losses against a 17% rise in complaint volume.
Americans lost more money to internet crime in 2025 than the annual GDP of over 50 countries. The average loss per complaint was just over $20,700, though that average is pulled upward by a relatively small number of catastrophic losses (including the $243 million Bitcoin heist covered elsewhere on AuthentiLens).
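For readers who want to check the arithmetic, the averages above follow directly from the report's totals. A quick sketch, using only the figures quoted in this article:

```python
# Sanity-check the IC3 figures quoted above (inputs are the report's totals).
total_losses_2025 = 20.9e9    # reported losses, 2025 ($20.9 billion)
total_losses_2024 = 16.6e9    # reported losses, 2024 ($16.6 billion)
complaints_2025 = 1_008_597   # complaints received, 2025

avg_loss = total_losses_2025 / complaints_2025
yoy_jump = (total_losses_2025 - total_losses_2024) / total_losses_2024

print(f"Average loss per complaint: ${avg_loss:,.0f}")  # just over $20,700
print(f"Year-over-year dollar jump: {yoy_jump:.1%}")    # roughly 26%
```

Keep in mind the caveat from the average itself: a handful of nine-figure losses skews the mean well above what a typical victim lost.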
Within this massive total, five crime types dominated the dollar volume:
- Investment scams (including crypto fraud): $6.8 billion
- Business email compromise (BEC): $3.2 billion
- Tech support scams: $1.1 billion
- Romance scams: $892 million
- AI-enabled fraud (new category): $893 million
The near-perfect dollar match between romance scams and AI-enabled fraud is coincidental but instructive. As we will see, AI is increasingly being used to supercharge romance scams: creating fake identities, generating convincing backstories, and even conducting real-time video calls using deepfake face-swap technology.
AI scams by the numbers: $893 million across 22,364 complaints
The headline figure in the IC3's AI section is 22,364 complaints with a confirmed AI nexus, representing $893 million in direct losses. The report makes clear that this is not a single, uniform fraud type. AI is being deployed as an enabling technology across multiple existing crime categories. The FBI breaks the AI total into two primary subcategories.
Investment scams with an AI component: $632 million
Investment fraud accounted for the overwhelming majority of AI-related losses. According to the IC3, scammers are using AI in three specific ways to supercharge traditional “pig butchering” and fake investment schemes:
- AI-generated conversation scaling. The FBI writes that subjects use AI to enhance their conversations with potential victims, allowing the scammers to quickly generate thousands of conversations that appear different to each prospective victim. Where a human scammer could manage perhaps five to ten active chats simultaneously, AI-powered systems can maintain hundreds, each one personalized, grammatically correct, and emotionally calibrated.
- Deepfake celebrity endorsements. The report specifically calls out that “investment clubs employ AI-generated videos and voices of celebrities, CEOs, or trusted figures to create fraudulent, high-stakes opportunities.” These are not poorly dubbed videos. The best examples use real-time face-swapping and voice cloning to make it appear that Elon Musk, Warren Buffett, or a trusted financial news anchor is personally endorsing a fake trading platform.
- Fake identification documents. AI image generators can now produce driver's licenses, passports, and utility bills that pass casual inspection. Scammers use these to create verified-looking accounts on cryptocurrency exchanges, social media platforms, and dating apps, building credibility before the first investment pitch.
Business email compromise with AI-generated content: $30 million
The second-largest AI-related loss category is business email compromise (BEC), attacks where scammers impersonate executives, vendors, or employees to trick companies into wiring funds or sharing sensitive data.
AI is making BEC attacks dramatically more effective in two ways. First, AI-generated text eliminates the grammatical errors and awkward phrasing that once made phishing emails relatively easy to spot. Second, scammers are now using AI-generated audio and deepfake video to supplement email attacks. The report notes a growing number of cases where an employee received a convincing email from their “CEO,” followed by a voice call from a cloned version of the chief executive.
The $30 million figure attributed directly to AI-enhanced BEC is likely a severe undercount. Many BEC attacks now involve AI without leaving obvious traces, making it difficult for victims or investigators to definitively classify them as “AI-enabled.”
Other AI-enabled fraud categories
The report also notes smaller but growing AI contributions to:
- Tech support scams: AI-generated voice menus and chatbots that mimic legitimate support lines.
- Romance scams: AI-generated profile photos (often indistinguishable from real people) and AI conversation scripts that can sustain long-term relationships with dozens of victims simultaneously.
- Government impersonation scams: AI voice clones of law enforcement officers or tax collectors demanding immediate payment.
How scammers are actually using AI in 2025
The IC3 report is a statistical document, not a tactical field guide. But the pattern that emerges from the 22,364 complaints, combined with investigative reporting from PYMNTS and SecureWorld, paints a clear picture of how AI is being weaponized in the current threat environment.
Tactic 1: Real-time voice cloning on phone calls
Voice cloning used to require minutes of high-quality audio and significant processing time. In 2025, real-time voice cloning is available as a consumer-grade product. A scammer needs as little as three seconds of a person's voice, easily harvested from a TikTok video, a voicemail greeting, or a social media story, to generate a live, interactive clone.
The attack works like this: the scammer calls a victim using a spoofed number that appears to belong to their adult child. The voice on the other end says, “Dad, I've been in an accident. I need bail money wired right now.” The victim hears their child's voice, complete with the right cadence and emotional inflection. Many victims report that the voice clone even cried or sounded panicked.
Tactic 2: Deepfake video calls for corporate fraud
The most famous deepfake corporate fraud case remains the 2024 Arup incident, in which a Hong Kong subsidiary of the British engineering firm Arup was tricked into wiring $25 million after fraudsters used deepfake video to impersonate the company's CFO on a Zoom call. The employees on that call saw what they believed was their CFO, heard his voice, and authorized the transfer.
The IC3 report suggests that these attacks are no longer one-off curiosities. BEC with deepfake video is now a documented, recurring fraud category. The FBI advises companies to implement “multifactor authentication for verbal and video requests”: in practice, requiring a second verification channel (such as a call back to a known number) before approving any wire transfer, regardless of what a video call appears to show.
Tactic 3: AI-generated investment “communities”
The $632 million in AI-enhanced investment fraud is driven largely by fake trading groups, WhatsApp investment clubs, and Telegram channels that appear to be bustling communities of successful traders. In reality, every single “member” of the group, except the victim, may be an AI chatbot.
These AI-powered fake communities can:
- Post realistic trade confirmations showing profits.
- Ask intelligent questions about the “strategy.”
- Thank the moderator for recent “wins.”
- Create peer pressure to invest more.
- Simulate customer support interactions when the victim tries to withdraw funds.
The victim believes they are part of a legitimate, thriving investment community. They are actually in a conversation with 47 bots and one scammer.
Why it matters
Every analyst who covered the IC3 report's release made the same observation: the real toll of AI-enabled fraud is far higher than $893 million. There are three reasons for the undercount.
- Underreporting. IC3 only sees cases where victims file a complaint. The FBI estimates that fewer than 20% of cybercrime victims actually report their losses. Many feel ashamed, believe they will never recover their money, or do not know how to file a complaint.
- Attribution difficulty. For a loss to be classified as “AI-enabled,” the victim or investigator must prove that AI was used. A perfectly cloned voice leaves no obvious forensic signature. A deepfake video that fools the human eye may not be flagged as synthetic unless someone runs it through detection software. Many AI scams are simply classified as investment fraud, BEC, or romance scams because the AI component was never identified.
- Partial AI involvement. If a scammer uses AI to write the initial email but then completes the fraud over a normal phone call, does that count as “AI-enabled”? The IC3's methodology for answering that question is not fully detailed in the public report, suggesting that the $893 million figure is a conservative floor.
The most important takeaway is not the exact dollar amount. It is that AI fraud has crossed the threshold from “emerging threat” to “major category” in the federal government's official accounting.
What the FBI's report means for the future
The 2025 IC3 report is a watershed document. For the first time, the federal government has said, in writing and with numbers attached, that AI-powered fraud is not a future hypothetical but a present reality costing Americans nearly a billion dollars a year, in documented losses alone.
What comes next is predictable. As AI voice cloning becomes free and ubiquitous, phone scams will become harder to detect. As deepfake video improves, video verification will become less reliable. As AI-generated text becomes indistinguishable from human writing, phishing emails will stop looking like phishing emails.
The only durable defense is to change the default assumption. The old rule was “trust your senses.” If you saw someone's face and heard their voice, you could reasonably believe they were who they claimed to be. That rule is now dangerous. The new rule is: trust only what you can verify through an independent channel.
That is the world the FBI's $893 million figure describes. It is not a warning about what might happen. It is a receipt for what already has.
How to protect yourself
The FBI's report does not include specific consumer protections, but the tactics it documents point directly to five concrete defenses. AuthentiLens recommends adding these to your security routine immediately.
- Treat any “celebrity endorsement” of an investment as fake until proven real. The FBI explicitly calls out AI-generated videos and voices of celebrities, CEOs, and trusted figures as a dominant tactic in 2025. If you see a video of a public figure promoting a “quantum trading platform” or a Bitcoin giveaway, assume it is a deepfake until you verify it on the celebrity's official, verified account or their company's website. If it is not there, it is not real.
- Set a family code word and use it for emergency calls. Voice clones can reproduce a person's voice from a few seconds of audio pulled from social media. They cannot know a secret word that you and your family members agreed on in person. Choose a code word that is not guessable from social media (not your pet's name, not your street). If a relative calls with an emergency, ask for the code word before taking any action. Do this even on video calls; deepfakes can replicate faces as well as voices.
- For any suspicious request from “your boss,” call back on a verified line. BEC with AI deepfake video is now a documented fraud category. If you receive an email, voice call, or video call from a senior executive asking for a wire transfer, gift card purchase, or sensitive data transfer, do not act on the request. Hang up or close the chat. Then call the executive back using a phone number you already know, from the company directory, not from the message you just received. A 30-second verification call defeats every deepfake.
- Verify investment pitches through regulated channels. If a trading platform, investment group, or cryptocurrency opportunity exists only in a direct message, a Telegram group, or a video shared on WhatsApp, it is almost certainly a scam. Real investment opportunities are registered with regulators (the SEC, FINRA, or state securities boards). They have websites you can verify. They do not require you to send cryptocurrency to a stranger's wallet.
- When something feels off, scan it with AuthentiLens. You are not expected to become a deepfake detection expert. That is what AuthentiLens is for. When you receive a suspicious message, photo, video, or call recording, run it through AuthentiLens. Our AI detection engine flags signs of synthetic generation, voice cloning, and social-engineering pressure in seconds, before you reply, click, or pay.
Sources
- Cryptocurrency and AI Scams Bilk Americans of Billions — Federal Bureau of Investigation
- 2025 IC3 Annual Report (PDF) — FBI Internet Crime Complaint Center
- FBI Flags $893 Million in AI-Driven Scams — PYMNTS
- FBI: AI-Enabled Fraud Topped $893M in 2025, Real Toll Likely Far Higher — SecureWorld
- Cyber Fraud Cost Americans $17 Billion in 2025, AI Scams Make List — Security Boulevard
