    AI Is Making Scams Harder to Spot: 73% of Americans Targeted

    73% of Americans have been targeted by financial fraud, 40% in the past 12 months alone, and AI is making the scams harder than ever to spot, Bankrate finds.

    13 min read · By AuthentiLens Editorial
    An older person looking at a smartphone with a worried expression next to a fraud alert notification, illustrating AI-powered scam targeting

    What happened

    Seventy-three percent of Americans say they have been targeted by financial fraud at some point in their lives. Forty percent say it happened in the past twelve months, up from 34% a year earlier. And 52% now expect to be targeted in the future, up from 37% in 2025. That fifteen-point jump in a single year is the sharpest one-year swing in consumer fraud expectation that Bankrate has ever recorded.

    The numbers come from a Bankrate survey titled “A Rising Tide of Financial Fraud”, fielded in January 2026 with 1,007 U.S. adults. The survey was published in March 2026 and brought to a national television audience in a May 5 InvestigateTV segment that aired on Fox 8 Live (WVUE New Orleans) and dozens of other Gray Television stations across the country. The cause, according to Bankrate analyst Sarah Foster: artificial intelligence.

    The Lifetime Exposure: 73%

    Nearly three out of four Americans report having been targeted by a financial scam at some point in their lives. That is up from 68% in Bankrate's previous survey, a five-percentage-point increase in a single year.

    To put that in perspective: if you are an American adult, the statistical likelihood that a scammer has tried to reach you by phone, email, text, social media, or in person is now higher than the likelihood that you own a pet, have a college degree, or live in a suburban area.

    The Past-Year Exposure: 40%

    Perhaps more striking than the lifetime figure is the past-year figure: 40% of Americans say they have been targeted by a financial scam in the last twelve months. That is up from 34% in the previous survey, a six-point jump in a single year.

    This means that two in five American adults have been approached by a scammer within the last year. For many Americans, scam attempts are no longer rare events. They are a routine part of digital life.

    The Future Expectation: 52%

    Most concerning of all: 52% of Americans now expect to be targeted by a financial scam in the future. That is up from 37% in 2025, a fifteen-point jump in a single year. Sarah Foster, the Bankrate analyst who spoke with InvestigateTV for the May 5 segment, described this shift as a fundamental change in consumer psychology.

    “I think there's really something to be said here about the rise of AI, how that's not only making it easier for scammers to target people,” Foster told InvestigateTV. “It's also making it even harder to tell if a fraudster is even a fraudster.”

    In the Bankrate press release, Foster put the same point more concretely: “Fraudsters are getting more sophisticated thanks to artificial intelligence, and they're reaching more people than ever.”

    The Demographic Breakdown

    The Bankrate survey found significant demographic variation in fraud targeting:

    • Younger adults (18 to 34) were more likely to report being targeted via social media and text message.
    • Older adults (65+) were more likely to report being targeted via phone call and email.
    • Higher-income households ($100,000+) reported higher financial losses when targeted.
    • Lower-income households (under $50,000) were more likely to report being targeted repeatedly by the same scammer.

    The survey also found a significant knowledge gap by age. Younger adults were more likely to know that voice cloning and deepfake video calls exist. Older adults were less likely to have heard of these technologies. That knowledge gap is itself a vulnerability: scammers using AI are counting on the fact that many potential victims simply do not know the technology exists.

    Traditional Defenses Are Already in Place, and Failing

    The InvestigateTV segment noted that Americans are already taking standard defensive measures:

    • 84% use strong, unique passwords for financial accounts
    • 76% use two-factor authentication where available
    • 72% use a password manager
    • 68% say they are “very careful” about clicking links in emails

    Yet despite these measures, targeting rates continue to rise year over year. The defensive perimeter most consumers were taught to maintain was built for a pre-AI threat model. In a world where scam emails carry no grammatical errors, AI voices clone family members from a 30-second TikTok clip, and entire fake court hearings can be staged on Zoom, that perimeter is no longer sufficient. Foster's question, posed on national television, was whether traditional defenses “will ever be enough.”

    The National Warning

    The InvestigateTV segment that brought the Bankrate survey to a national audience aired May 5, 2026, on Fox 8 Live and dozens of other Gray Television stations. The segment was also published in full text by KCBD and other Gray affiliates. Foster's commentary in the segment was direct: “Consumers need to be just as sophisticated in their defenses.”

    The specific defenses Foster recommended in the segment included using strong, unique passwords for every financial account; enabling two-factor authentication wherever available; being skeptical of any unsolicited contact asking for money or personal information; verifying through a separate channel before acting on any urgent request; and reporting suspected scams to the FTC and the FBI's IC3.

    Why it matters

    The Bankrate survey quantifies the problem. The reason targeting rates keep climbing while consumer defenses stay the same is that artificial intelligence has systematically eliminated the telltale signs that consumers were trained to spot.

    The End of the Grammar Tell

    For years, consumer protection advocates taught people to spot scam emails by looking for grammatical errors, awkward phrasing, and mismatched pronouns. Those tells worked because many scammers were operating in a second language.

    AI has eliminated that tell. Large language models can generate flawless prose in any language. The email that arrives in your inbox may be grammatically perfect, stylistically consistent, and contextually relevant. It may still be a scam. The same is true for text messages, social media DMs, and voicemail transcripts. The “wrong number” text that used to contain obvious errors is now flawless. It sounds like a native speaker. It sounds friendly and disarming.

    Our guide on signs of a phishing email has been updated to reflect this shift: checking for typos is no longer a reliable first-pass defense.

    The End of the Voice Tell

    Consumers were also taught to be suspicious of calls from unknown numbers but to trust calls from numbers they recognized. Voice cloning has eliminated that tell.

    The call that appears to come from your daughter's number may actually be a scammer using caller ID spoofing. The voice on the other end may be a real-time clone of your daughter's voice, generated from a 15-second TikTok video she posted last week. There is no “telltale robotic sound” anymore. Modern voice clones sound human because they are built from human voice data. They breathe. They hesitate. They use filler words. They laugh and cry. As AuthentiLens documented in our coverage of the Vancouver AI voice clone kidnapping scam, a father heard what he was certain was his daughter's panicked voice. It was not.

    The End of the Face Tell

    The same is true for video calls. Deepfake face-swap technology has advanced to the point where real-time video calls can be faked convincingly. The person on the video may have the right face, the right voice, and the right background, and still be an AI-generated composite.

    In the fake immigration court cases AuthentiLens covered earlier this week, scammers used deepfake video to impersonate judges wearing judicial robes. Victims saw what appeared to be a real courtroom, a real judge, a real proceeding. It was all fabricated.

    The End of the Urgency Tell

    Urgency remains a tell, but scammers are getting smarter about how they create it. The “your account will be frozen in one hour” script is still common, but more sophisticated scammers are using AI to create personalized urgency. A scammer might send a message referencing a real recent event in your life: “I heard about your mother's surgery. I'm so sorry. I found a financial assistance program that could help, but you need to apply today.”

    The AI scraped information about your mother's surgery from a family member's social media post. The urgency feels legitimate because it is tied to something real. The financial assistance program is a scam.

    The New Baseline: Verify Everything

    The traditional authenticity signals, including a name, a phone number, a voice, and a face on a video call, are now synthesizable at near-zero marginal cost. External, independent verification is no longer one defense among many. It is the only defense that still reliably works.

    This is what Foster meant when she told InvestigateTV that AI is making it “even harder to tell if a fraudster is even a fraudster.” The old tells are gone. The new tells require active, deliberate verification, which is difficult to do when a scammer is deliberately creating artificial urgency.

    The Trust Erosion Cycle

    The Bankrate survey's sharpest finding may be the jump in consumers who expect to be targeted in the future: from 37% to 52% in a single year. That fifteen-point increase suggests Americans are losing confidence in their ability to protect themselves. The pattern that drives this erosion:

    • Scammers adopt a new technology, such as voice cloning or deepfake video.
    • Consumers are not aware of the technology or its capabilities.
    • Scammers successfully defraud consumers who did not know the technology existed.
    • News reports educate consumers about the new threat.
    • Consumers realize their existing defenses are inadequate.
    • Consumer confidence in their ability to spot scams declines.

    The Bankrate survey suggests the United States is in the middle of this cycle right now. Consumers have learned about AI voice cloning, deepfake video, and AI-generated phishing. They have realized that the tells they were taught are obsolete. Their confidence has dropped as a result. Then scammers adopt an even newer technology, and the cycle repeats.

    The Generational Divide

    The survey's generational divide in AI awareness compounds the risk. Younger adults were more likely to have heard of voice cloning and deepfake video; older adults were not. That gap is exactly what AI-equipped scammers exploit: a grandparent who has never heard of voice cloning is far more likely to believe that the panicked call from their “grandchild” is real.

    The Perfect Storm

    Foster described the current moment as a perfect storm for scammers. Three factors are converging simultaneously:

    • More people are online than ever before. The COVID-19 pandemic accelerated digital adoption, and those habits have stuck: more Americans now shop, bank, and communicate online, which means more surface area for scammers to exploit.
    • Scammers have access to better tools. AI-powered fraud tools are widely available, cheap, and easy to use. A scammer does not need technical expertise to clone a voice or generate a phishing email. Anyone with a credit card and an internet connection can access these capabilities.
    • Consumers are exhausted. After years of pandemic-related stress, economic uncertainty, and political polarization, many consumers are simply tired. They are less vigilant. They are more likely to click without thinking.

    Financial institutions are investing in their own AI-powered fraud detection systems, but as Foster noted, these systems are not foolproof. Scammers adapt quickly. Detection systems are always playing catch-up. The FBI's 2025 IC3 report logged $893 million in AI-enabled fraud losses in 2025, the first year AI was broken out as its own category. The FTC's data showed $2.1 billion in losses tied to scams that began on social media. As the FBI's own agents have noted, as AuthentiLens reported in our coverage of the FBI Charlotte deepfake warning, AI-enabled scams have “just skyrocketed.”

    How to protect yourself

    The AuthentiLens editorial team has distilled the Bankrate survey, the InvestigateTV segment, and our broader coverage of AI scams into a new protection framework for the AI era. These six steps work together. No single step is sufficient on its own.

    1. Layer External Verification on Top of Every Traditional Defense

    Strong passwords, two-factor authentication, and password managers still matter. They are the foundation of good security hygiene. But they are no longer sufficient on their own.

    Build a habit of independent confirmation. Before you act on any urgent request from a relative, a bank, a government agency, or a CEO, verify through a separate, known channel.

    • If you receive an email from your CEO asking for a wire transfer, call the CEO on a number you already have.
    • If you receive a call from your daughter saying she has been kidnapped, hang up and call her on her known number.
    • If you receive a text from your bank saying your account has been compromised, call the number on the back of your card.

    The extra step takes sixty seconds. It defeats almost every AI-powered impersonation scam.

    2. Set a Family Code Word Right Now

    This is the single most consequential addition, one that federal and state consumer protection agencies now recommend. Choose a word or short phrase that would not be guessable from social media. Agree on it in person with every member of your household.

    If a relative calls with an emergency, demand the code word before taking any action. A real-time deepfake of your child, parent, or spouse cannot know the word. The scam collapses.

    This advice is especially important when it comes to protecting elderly parents from scams. Older adults are disproportionately targeted by grandparent voice scams. A code word is simple, free, and effective.

    3. Treat Any Unexpected Message Asking for Money as Suspect by Default

    The old rule was to be suspicious of messages from unknown senders. The new rule is to be suspicious of any message asking for money, regardless of the sender.

    AI-generated phishing emails now carry no grammatical errors. The sender's name may be your CEO's name. The phone number may be your bank's phone number. The voice on the call may be your daughter's voice. Do not trust the surface evidence. Our updated guide on signs of a phishing email covers what to look for now that the grammar tells are gone.

    4. Hang Up and Call Back on a Number You Already Know

    This is the most reliable single defense against AI voice cloning. If someone calls you claiming to be from your bank, your credit card company, a government agency, or a relative in distress: hang up. Then call back on a number you already know, the number on the back of your card, the number in your contacts, or the number on the agency's official website.

    A real representative will still be there when you call back. A scammer will not. For more details, see our guide on how to tell if a phone call is a scam.

    5. Report Suspected Scams to the FTC and the FBI's IC3

    Foster specifically recommended this in the InvestigateTV segment. Reporting suspected scams helps law enforcement identify patterns and build cases, warns other consumers through public databases, and drives the data that fuels the warnings your friends and family will see in next year's news cycle.

    As AuthentiLens reported in our coverage of the FBI's 2025 IC3 report, the bureau logged $893 million in AI-enabled fraud losses in 2025. Every report helps build the cases that lead to arrests and that fund the next round of public warnings.

    6. Scan It With AuthentiLens

    You are not expected to become an AI-detection expert. That is what AuthentiLens is for. When you receive a suspicious text, email, video clip, audio clip, social profile, or link:

    • Paste the text. AuthentiLens flags scam language patterns, urgency cues, and AI-generation signals.
    • Upload the audio. AuthentiLens analyzes voice-clone artifacts, including unnatural frequency patterns and synthesized speech markers.
    • Upload the video. AuthentiLens detects deepfake face-swaps, lip-sync mismatches, and temporal inconsistencies.
    • Paste the profile URL. AuthentiLens checks for fake social media profile indicators. Our guide to AI personas on social media explains how these fake profiles are built.
    • Scan the link. AuthentiLens checks for malicious domains and known scam infrastructure without you having to click.

    AuthentiLens does all of this in seconds, before you reply, before you call, before you send a single dollar.

    The Path Forward

    Individual vigilance is the first line of defense: set the code word, audit your social media privacy settings, verify before you trust. These steps are simple, free, and effective. But consumers cannot do it alone. The fraud ecosystem is too large, too sophisticated, and too well-funded.

    Policymakers need to act as well. Stronger penalties for AI-powered fraud, mandatory disclosure of AI-generated content on platforms, and increased funding for fraud detection and enforcement are all necessary. The law should treat a voice-clone kidnapping scam as seriously as a real kidnapping threat.

    Social media platforms, payment apps, and communication tools are the vectors through which most AI scams reach victims. These platforms must invest in AI-powered fraud detection, verify high-risk accounts claiming to be lawyers or government officials, and provide clear reporting channels for users who suspect they have been targeted.

    Finally: talk to your parents about AI voice cloning. Share this article with your friends. Post a warning in your neighborhood group. Scammers count on silence. They count on shame. Breaking the silence is the most powerful thing any of us can do.
