Equifax: AI Fraud Is Now Industrialized, Consumers Lost $15.9B
Equifax is warning that AI-generated synthetic identities, deepfakes, and voice-cloned executives have industrialized business fraud, with U.S. consumer losses hitting a record $15.9 billion in 2025.

What happened
A complete synthetic identity (a real Social Security number stitched to a fabricated name, address, and plausible credit history) now sells on underground marketplaces for about $5. A criminal-tuned large language model, purpose-built to write scam scripts and phishing emails, runs between $30 and $200 a month. On the defense side, the cost of being wrong is now measured in tens of billions.
Equifax, one of the three major U.S. credit bureaus, is warning that this asymmetry has pushed fraud past a tipping point, and that no organization accepting a digital payment is safely outside the target set.
In an April 2026 report by TheStreet, Equifax executives and FBI officials laid out the scope of the threat in stark terms. Americans lost a record $15.9 billion to consumer fraud in 2025, up from $12.5 billion in 2024, a 28% single-year jump and a 430% increase since 2020, according to Federal Trade Commission data. Seventy-nine percent of U.S. organizations reported attempted or actual payment fraud in 2024, according to the Association for Financial Professionals' annual survey.
Equifax's own data attributes 50% to 70% of credit fraud losses to synthetic identity fraud: hybrid identities that combine real Social Security numbers with fabricated names and employment histories to open accounts that look legitimate on paper. Traditional fraud systems do not detect these synthetic identities because they do not match any single known fraudster. They appear as new customers. They build credit over time. And then they bust out.
“We are in an arms race. Fraud has become one of the most rapidly evolving threats facing financial institutions today.”
Mark Begor, chief executive of Equifax, in remarks to investors, October 2025
The consumer toll: $15.9 billion in losses
According to Federal Trade Commission data cited in the report, Americans lost a record $15.9 billion to consumer fraud in 2025. That represents a 28% increase from the $12.5 billion lost in 2024 and a staggering 430% increase from 2020, when losses totaled approximately $3 billion.
The FTC's Consumer Sentinel Network, which collects complaints directly from victims, has documented this acceleration across nearly every fraud category:
| Year | Reported Losses | Year-over-Year Increase |
|---|---|---|
| 2020 | ~$3.0 billion | — |
| 2021 | ~$5.8 billion | 93% |
| 2022 | ~$8.8 billion | 52% |
| 2023 | ~$10.0 billion | 14% |
| 2024 | $12.5 billion | 25% |
| 2025 | $15.9 billion | 28% |
The 2025 figure works out to an average loss of approximately $44 million every single day of the year.
The business toll: 79% of organizations hit
The Association for Financial Professionals' annual payment fraud survey found that 79% of U.S. organizations reported attempted or actual payment fraud in 2024. That is up from 65% in 2020.
The most common attack vectors:
- Business email compromise (BEC): Scammers impersonate executives or vendors to trick employees into wiring funds or changing payment instructions. The FBI's IC3 logged $2.77 billion in BEC losses across more than 21,000 incidents in 2024.
- Invoice fraud: Fake invoices that look nearly identical to legitimate vendor invoices, often generated by AI and sent from spoofed domains.
- Synthetic identity fraud: Fake customers who open accounts, build credit, and then max out lines of credit before disappearing.
The synthetic identity problem: 50% to 70% of credit fraud losses
According to Equifax's own analysis, 50% to 70% of credit fraud losses are now attributable to synthetic identity fraud. This is not a typo. The majority of credit fraud in the United States no longer involves stolen identities of real people. It involves identities that never existed at all.
A synthetic identity typically combines:
- A real Social Security number, often belonging to a child, an elderly person who does not use credit, or a deceased individual
- A fabricated name, often generated by AI to avoid detection
- A fabricated address, often a real address where the fraudster does not live
- A fabricated employment history, built using stolen employer identification numbers or fake company registrations
The synthetic identity is used to open a credit card, a mobile phone account, or a buy-now-pay-later loan. The fraudster makes small purchases and pays them off for months or years, building a strong credit score. Then, they “bust out,” maxing out all available credit and disappearing.
Traditional fraud detection systems do not flag synthetic identities because they do not match any known fraudster. They appear to be new, legitimate customers building credit for the first time.
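To make the bust-out pattern concrete, here is a minimal sketch of the kind of heuristic a fraud team might layer on top of account monitoring: a long stretch of low, well-paid utilization followed by a sudden near-max draw. All field names and thresholds here are illustrative assumptions, not Equifax's model.

```python
from dataclasses import dataclass

@dataclass
class MonthlySnapshot:
    utilization: float         # balance / credit limit, 0.0 to 1.0
    paid_on_time: bool
    cash_advance_ratio: float  # share of spend that is cash-like

def looks_like_bust_out(history: list[MonthlySnapshot],
                        quiet_months: int = 6,
                        spike_utilization: float = 0.9) -> bool:
    """Flag the classic bust-out shape: months of low, well-paid
    utilization followed by a sudden near-max draw.
    Thresholds are illustrative only."""
    if len(history) <= quiet_months:
        return False  # too little history to establish a baseline
    baseline, recent = history[:-1], history[-1]
    quiet = all(m.utilization < 0.3 and m.paid_on_time
                for m in baseline[-quiet_months:])
    spike = (recent.utilization >= spike_utilization
             or recent.cash_advance_ratio > 0.5)
    return quiet and spike
```

The point is not these specific thresholds but the shape of the signal: a synthetic account looks healthiest in the months right before it busts out.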
The AI industrialization: $5 synthetic identities, $30 criminal LLMs
What has changed in the past two years is not the concept of fraud but the unit economics of fraud. AI has collapsed the cost of attack. For the playbook of how this scales globally, see our case file on Chen Zhi and the $15 billion Prince Group pig-butchering empire.
Synthetic identity kits: $5. According to research cited by Equifax and security firms like Vectra AI, a complete synthetic identity kit now sells for as little as $5 on underground marketplaces. For the price of a coffee, a fraudster can purchase a validated SSN, a fabricated name, a fabricated address, a burner phone number, a disposable email, and a fabricated employment history with fake employer phone numbers staffed by co-conspirators. These kits are sold in bulk. A fraudster can buy 1,000 synthetic identities for $5,000, open accounts with 10% of them, and still make a massive profit.
Criminal-grade LLMs: $30 to $200 per month. General-purpose AI models like ChatGPT and Gemini have guardrails that prevent them from generating scam scripts, phishing emails, or malicious code. The underground market has responded with purpose-built criminal LLMs. Researchers at Vectra AI have documented models like WormGPT 4 (advertised as a “hacking assistant”), KawaiiGPT (free on GitHub with roughly 500 contributors), and FraudGPT (subscription-based, trained specifically on scam scripts and social-engineering templates). One provider, advertising on a dark web forum, claimed their model could produce 2,500 unique phishing emails per hour, each one personalized based on scraped social media data.
AI-generated phishing: 4X higher click-through rates. Traditional phishing emails often contain telltale signs: grammatical errors, awkward phrasing, generic greetings. AI-generated phishing emails do not. According to research cited by multiple security firms, AI-generated phishing emails produce click-through rates roughly four times higher than human-written versions, and are significantly harder for automated detection systems to flag.
The deepfake explosion: 500,000 to 8 million. The number of deepfakes online has grown from an estimated 500,000 in 2023 to 8 million by the end of 2025, a sixteenfold increase in two years, according to tracking by Sumsub and the World Economic Forum. These include face-swapped celebrity endorsements, voice-cloned executive wire authorizations, real-time deepfakes used in live video calls, and AI-generated identification documents that pass basic visual inspection.
Why it matters
The Equifax warning was accompanied by stark language from FBI field investigators and a projection from Deloitte that the trajectory is not flattening anytime soon.
“As technology continues to evolve, so do cybercriminals' tactics. Attackers are leveraging AI to craft highly convincing voice or video messages and emails to enable fraud schemes against individuals and businesses alike. These sophisticated tactics can result in devastating financial losses, reputational damage, and compromise of sensitive data.”
Robert Tripp, special agent in charge of the FBI's San Francisco field office
The FBI's Internet Crime Complaint Center (IC3) logged more than 1 million complaints in 2025, with total losses of $20.9 billion, the highest figures in the report's 25-year history. Of that, $893 million was directly attributed to AI-enabled fraud, a category that did not exist in the report until this year.
The projection: $40 billion by 2027
According to a May 2024 report from Deloitte's Center for Financial Services, U.S. AI-driven fraud losses are projected to reach $40 billion by 2027. Deloitte's analysis identified several factors driving the acceleration: lower barriers to entry, increased scale (one fraudster can now do what previously required a team of dozens), improved quality of AI-generated content, and faster adaptation to new detection techniques. Deepfakes alone, which accounted for a small fraction of fraud losses in 2022, are expected to grow to approximately $12 billion in annual losses by 2027, driven primarily by CEO impersonation and synthetic identity fraud.
Equifax's response: fighting AI with AI
In January 2026, Equifax launched a new product called Equifax Synthetic Identity Risk. According to the company's press release, the product uses machine learning to analyze hundreds of attributes, including identity linking, identity velocity, and behavior patterns, to detect synthetic identities that traditional systems miss. It returns a risk score that indicates the likelihood that an applicant is a synthetic identity rather than a legitimate new customer.
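Equifax has not published the model's internals, but the attributes named in the press release (identity linking, identity velocity, behavior patterns) map onto well-known fraud signals. The sketch below is a hypothetical illustration of how such signals might combine into a score; every feature name and weight here is an assumption, not Equifax's implementation.

```python
def synthetic_identity_risk(applicant: dict) -> float:
    """Hypothetical risk score in [0, 1]. Higher = more likely synthetic.
    Feature names and weights are illustrative, not Equifax's."""
    signals = {
        # Identity velocity: how often this SSN/name/address combination
        # has appeared in recent applications.
        "apps_last_90_days": min(applicant.get("apps_last_90_days", 0) / 10, 1.0),
        # Identity linking: one SSN seen under multiple names, or one
        # address shared across many unrelated applicants.
        "names_per_ssn": min((applicant.get("names_per_ssn", 1) - 1) / 3, 1.0),
        "applicants_per_address": min(applicant.get("applicants_per_address", 1) / 20, 1.0),
        # Behavior: a "new" consumer with a suspiciously clean, fully
        # formed credit profile and no depth of history behind it.
        "thin_file_with_perfect_score": 1.0 if applicant.get("thin_file_with_perfect_score") else 0.0,
    }
    weights = {"apps_last_90_days": 0.3, "names_per_ssn": 0.35,
               "applicants_per_address": 0.15, "thin_file_with_perfect_score": 0.2}
    return sum(weights[k] * v for k, v in signals.items())
```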
Equifax has also partnered with Incode, an identity verification platform, to offer document verification and biometric checks. The partnership allows Equifax clients to verify government-issued IDs against document templates and security features, perform liveness detection to ensure that a photo or video is of a real person rather than a deepfake or printed photo, and match biometrics to ID documents. The subtext is that single-factor verification (a Social Security number alone, a name alone, an address alone) is no longer sufficient. Multi-factor identity verification is the new baseline.
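Neither company's actual API is shown here, but the multi-factor baseline has a simple shape: independent checks chained together, failing closed if any single factor fails. A minimal sketch under that assumption, with placeholder stubs standing in for a real document, liveness, and biometrics provider:

```python
from typing import Callable

def verify_identity(session: dict,
                    checks: list[Callable[[dict], bool]]) -> bool:
    """Multi-factor verification: every factor must pass (fail closed)."""
    return all(check(session) for check in checks)

# Placeholder checks; in a real deployment each would call an identity
# verification provider. These stubs are illustrative, not Incode's API.
def document_check(session: dict) -> bool:
    # Does the ID match known document templates and security features?
    return session.get("document_valid", False)

def liveness_check(session: dict) -> bool:
    # Is this a live person, not a replayed photo or deepfake stream?
    return session.get("liveness_passed", False)

def biometric_match(session: dict) -> bool:
    # Does the live face match the photo on the ID document?
    return session.get("face_match_score", 0.0) >= 0.85

approved = verify_identity(
    session={"document_valid": True, "liveness_passed": True,
             "face_match_score": 0.91},
    checks=[document_check, liveness_check, biometric_match])
```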
Why this matters for small businesses
The Equifax warning is directed primarily at financial institutions, but the implications reach every business that accepts payments, extends credit, or processes invoices.
1. Your vendors might not be real.
Synthetic identity fraud is not limited to consumer credit. Fraudsters are also creating fake vendor accounts, submitting invoices for goods or services that do not exist, and collecting payments before anyone notices that the vendor has no physical presence, no employees, and no tax records. The Association for Financial Professionals found that 69% of organizations that experienced payment fraud in 2024 were targeted through the vendor onboarding process. A fake vendor with a synthetic identity and an AI-generated website can look legitimate for months.
2. Your CEO's voice can be cloned from a podcast.
Voice-cloning technology can recreate a person's voice from as little as three seconds of audio. If your CEO has ever appeared on a podcast, a YouTube video, an earnings call, or a social media clip, their voice can be cloned. Fraudsters use these clones to call finance departments and authorize urgent wire transfers. The employee on the phone hears their CEO's voice, with the right cadence and emotional inflection. Many victims report that the voice clone even sounded stressed or panicked, exactly as you would expect in an emergency. (For the consumer-facing version of this attack, see our guide on how to tell if audio is AI-generated and our checklist on how to tell if a phone call is a scam.)
3. Your email security is no longer enough.
Traditional email security filters are designed to catch known phishing indicators: suspicious domains, malicious links, and attachments. AI-generated phishing emails often contain none of these. They come from compromised legitimate accounts or lookalike domains. They contain no obvious malware. They simply ask the recipient to update a vendor's banking information or approve an invoice. (For the attachment side of this same attack, see our guide on how to tell if an email attachment is suspicious.) The FBI's IC3 logged $2.77 billion in BEC losses across more than 21,000 incidents in 2024. The average loss per incident was approximately $132,000. Small businesses are disproportionately targeted because they often have fewer controls in place.
How to protect yourself
The AuthentiLens editorial team has distilled the Equifax warning and the broader threat landscape into six concrete protections for businesses.
1. Set a verbal code word for any wire transfer or banking change
If an executive, vendor, or family member requests an urgent payment on a video or voice call, require a pre-agreed verbal password before funds move. A real-time deepfake of your CEO cannot know the word. This single rule defeats almost every CEO-impersonation wire fraud we cover.
2. Treat any email asking to change banking or payment instructions as suspect by default
Do not act on the email. Do not call the number in the email. Call the vendor back on a number you already have on file, from your accounting system, not from the email, and confirm the change before your accounts payable team updates a single record.
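Expressed as code, the rule is that nothing supplied in the request itself is trusted; the callback number always comes from your own vendor master record. A minimal sketch with hypothetical field names:

```python
def handle_banking_change_request(request: dict, vendor_master: dict) -> str:
    """Never act on contact details supplied in the request itself.
    The callback number must come from your own records."""
    vendor = vendor_master.get(request["vendor_id"])
    if vendor is None:
        return "REJECT: unknown vendor"
    # Deliberately ignore request["callback_phone"]: an attacker
    # controls every field of the email they sent you.
    return (f"HOLD: call {vendor['phone_on_file']} (number on file) "
            f"and confirm before updating banking details")
```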
3. Require multi-person approval for wire transfers above a preset threshold
A single compromised inbox should never be able to move six figures. Implement a policy requiring two approvals for any wire transfer above a threshold that makes sense for your business: $5,000, $10,000, or $25,000. The second approver should be someone who does not have access to the first approver's email.
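Here is that policy as a simple check; the threshold and field names are placeholders to adapt to your own systems:

```python
WIRE_APPROVAL_THRESHOLD = 10_000  # pick a threshold that fits your business

def wire_transfer_allowed(amount: float, requester: str,
                          approvers: set[str]) -> bool:
    """Require two distinct approvers above the threshold, neither of
    whom is the requester. A single compromised inbox then cannot
    move funds alone."""
    independent = approvers - {requester}
    if amount <= WIRE_APPROVAL_THRESHOLD:
        return len(independent) >= 1
    return len(independent) >= 2
```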
4. Verify new vendors and customers against out-of-band data
A business claiming an address should have a working phone number at that address and tax records that match. Treat a clean-looking LinkedIn page or website as marketing, not verification. Ask for a W-9. Call the business's published phone number (not the one they gave you). Verify that the person you are speaking with actually works there.
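One way to make this stick is to gate vendor activation on an explicit out-of-band checklist rather than on the materials the vendor supplied. A minimal sketch, with illustrative field names:

```python
def vendor_cleared_for_payment(vendor: dict) -> bool:
    """Gate vendor activation on out-of-band evidence, not on the
    materials the vendor supplied. Field names are illustrative."""
    checks = [
        vendor.get("w9_received", False),
        # Number you looked up independently, not the one they gave you.
        vendor.get("phone_matches_public_listing", False),
        vendor.get("tax_records_match_address", False),
        # Contact's employment confirmed via the published number.
        vendor.get("contact_confirmed_employed", False),
    ]
    return all(checks)
```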
5. Assume any video call of someone you have not met in person could be a deepfake
Real-time face-swap and voice-clone tools are now commercially available. If you are on a video call with someone you have never met in person (a new vendor, a potential hire, a “bank representative”), be skeptical. Verify through an independent channel: a return call to a known number, a shared-history question, or an in-person meeting.
6. Scan suspicious invoices, emails, and profiles with AuthentiLens
You are not expected to become an AI-detection expert. That is what AuthentiLens is for. When you receive a suspicious invoice, email, message, or social profile, scan it with AuthentiLens. Our detection engine flags AI-generation signals, impersonation patterns, and social-engineering urgency in seconds, before you approve payment, update a vendor record, or reply.
AuthentiLens is a verification tool that helps you detect AI-generated content, deepfake media, impersonation scams, and synthetic identities. Upload a suspicious email, invoice, video, or profile, and we will tell you what is real and what is not.
Sources
- Equifax exposes AI fraud threats targeting business — TheStreet
- Equifax Introduces Enhanced Synthetic Identity Fraud Detection — PR Newswire / Equifax
- Synthetic Identity Fraud: The Unseen Threat and Its Cost to Businesses — Equifax
- Equifax Launches AI-based Synthetic Identity Fraud Detection as Companies Turn to AI — Digital Transactions
- An Added Layer of Fraud Prevention from Equifax: Document Verification and Biometric Checks with Incode — Equifax Newsroom
