
    AI Insurance Fraud Up 71% in 2025: Fake Watches, Fake Damage

    UK insurer Admiral detected £86.8 million in fraudulent claims in 2025, a 71% surge driven by AI-generated fake car damage, invented luxury watches, and fabricated documents.

12 min read · By AuthentiLens Editorial
    Damaged silver car with two mismatched number plates layered on top of each other under a magnifying glass, illustrating AI-manipulated insurance evidence

    What happened

A clearly AI-generated photograph of a gold-and-diamond watch, submitted as part of a stolen-items insurance claim. A single damaged car photographed twice, with the number plate digitally swapped between the two images to support duplicate claims. A document that never existed in the real world, generated out of whole cloth and uploaded as evidence.

    These are not hypothetical scenarios from a cybersecurity white paper. They are real cases detected in 2025 by Admiral, the Cardiff-based UK insurer. And they represent a fraud surge that is reshaping the insurance industry's risk calculations.

    In an April 2026 BBC News report by Wales business correspondent Huw Thomas, Admiral's claims teams walked through specific examples of AI-driven insurance fraud. The company's detected fraud rose from £50.9 million in 2024 to £86.8 million in 2025, a 71% single-year increase.

    “We see AI that's been used to manipulate images to look like they've been damaged in a certain way, even to create and fabricate documents that were never there in the first place,” said an Admiral household-claims investigator the BBC identified only as Haith. “This is a trend across the entire insurance industry.”

    Case 1: The imaginary gold-and-diamond watch

    A policyholder submitted a claim for a stolen luxury watch. The claim included a photograph of the watch as evidence of ownership and value. The image showed a gold-and-diamond timepiece that, at first glance, appeared real.

    But the Admiral claims team spotted problems. The dial text was garbled. Letters and numbers that should have formed a brand name were misaligned or nonsensical. The diamond settings were unnaturally uniform, lacking the slight asymmetries that real jewelry has. The lighting across the watch face was inconsistent, with shadows that did not match the implied light source.

    The image had been generated entirely by an AI image generator. The watch never existed. The policyholder had never owned it. The claim was an attempt to extract money from the insurer for an object that existed only as pixels.

    Case 2: Duplicate car damage with swapped number plates

    A motor insurance claim included photographs of a damaged vehicle. The damage appeared significant, enough to warrant a substantial payout. But the claims team noticed something odd.

    The same car appeared in multiple photographs, but the number plate was not consistent. A closer analysis revealed that the same damaged car had been photographed twice from similar angles, and the number plate had been digitally swapped between the two images. The policyholder was attempting to submit duplicate claims for the same damage, using AI-edited photos to make the claims appear separate.

    This is not purely AI-generated fraud. The underlying car and damage were real. But the manipulation required AI-powered image editing tools to swap the number plate seamlessly and adjust the surrounding pixels to hide evidence of the edit.

    Case 3: Fabricated documents

According to the BBC report, Admiral's claims team also detected claims that included entirely fabricated documents: letters, receipts, invoices, and certificates that never existed in the real world. These documents were generated using AI language models and image generators, then submitted as supporting evidence for claims.

    “AI is being weaponised to create fake documents to make their fraud more efficient,” said John Davies of the Insurance Fraud Bureau, the UK industry's anti-fraud body.

    The two populations: opportunists and organized crime

    The Insurance Fraud Bureau told the BBC that the AI fraud trend has split into two distinct populations with different risk profiles.

Opportunistic individual customers are using off-the-shelf AI image editors and language models to exaggerate otherwise genuine claims. These fraudsters typically start with a real incident (a minor car accident, a small theft) and use AI to amplify the damage, add items that were not actually stolen, or create supporting documents that did not exist.

    These individuals are not professional criminals. They are policyholders who, presented with an easy opportunity to increase their payout, decide to take it. The barrier to entry is low. Consumer-grade AI tools are widely available, often free, and require no technical expertise.

    Organized crime gangs, by contrast, are using AI to create entirely fake claims from scratch. These operations generate fake accident scenes, fake damage photographs, fake witness statements, and fake supporting documents. They submit claims across multiple insurers simultaneously, using synthetic identities to avoid detection.

    “Organised criminal gangs are using AI to make their fraud more efficient,” Davies told the BBC. “They can generate hundreds of fake claims in the time it would have taken to manually fabricate one.”

    The numbers: a 71% surge in detected fraud

    Admiral's fraud detection numbers tell a clear story. In 2024, the company detected £50.9 million in fraudulent claims. In 2025, that figure rose to £86.8 million. That is an increase of £35.9 million, or 71%.

    The 2025 total of £86.8 million is the highest in Admiral's history. The company attributes a significant portion of that increase to AI-enabled fraud.

    Admiral is not alone. The Insurance Fraud Bureau, which coordinates anti-fraud efforts across the UK insurance industry, told the BBC that AI-enabled fraud is “a trend across the entire insurance industry.” Other major UK insurers, including Aviva, Direct Line, and LV, have reported similar surges in detected fraudulent claims over the same period.

    The industry response: fighting AI with AI

    The insurance industry is not standing still. Admiral, the Insurance Fraud Bureau, and other UK insurers told the BBC that they are “heavily concerned” and “investing in technology” to detect AI-generated fraud.

    Admiral's anti-fraud software caught the cases the BBC profiled. The company uses metadata analysis to detect missing or inconsistent EXIF data, forensic image analysis to identify AI-generation artifacts (inconsistent lighting, garbled text, unnatural symmetry), pattern matching to link claims across multiple policyholders and identify synthetic identity rings, and human review for high-value or suspicious claims.
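The layered approach, automated checks first and human review for whatever they flag, can be sketched as a simple triage function. Everything here (field names, thresholds, the known-fraud hash set) is an illustrative assumption, not a description of Admiral's actual software:

```python
# Illustrative triage sketch. Field names, thresholds, and the
# known-fraud hash set are hypothetical, not Admiral's real system.
KNOWN_FRAUD_HASHES = {"a1b2c3d4"}  # image hashes seen in prior fraudulent claims

def triage_claim(claim: dict) -> tuple[str, list[str]]:
    """Route a claim to standard processing or human review based on red flags."""
    flags = []
    if not claim.get("exif"):                           # metadata analysis
        flags.append("missing_metadata")
    if claim.get("ai_artifact_score", 0.0) > 0.8:       # forensic image analysis
        flags.append("ai_generation_artifacts")
    if claim.get("image_hash") in KNOWN_FRAUD_HASHES:   # cross-claim pattern matching
        flags.append("duplicate_image")
    # Flagged or high-value claims always go to a human reviewer
    if flags or claim.get("value_gbp", 0) > 10_000:
        return "human_review", flags
    return "standard_processing", flags
```

The key design point is that no single check decides the outcome; each layer only adds evidence, and anything suspicious or expensive falls through to a person.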

    “The industry is investing heavily in technology to detect these fake images and documents,” Davies said. “But the fraudsters are also investing in technology. It's an arms race.”

    Why it matters

    The broader UK context: £3 billion a year, £50 per policyholder

    The Admiral numbers are a single data point in a much larger picture. According to industry analysts cited in the BBC report, insurance fraud in the UK already exceeds £3 billion annually. That cost is passed directly to policyholders. The average UK insurance premium includes approximately £50 to cover fraud-related losses.

    AI-enabled fraud is not replacing traditional insurance fraud. It is supercharging it. Cases that would have been too difficult or expensive to fabricate manually (a luxury watch that never existed, a duplicate claim with swapped number plates, a complete set of fake documents) are now simple enough for an individual with a smartphone and an AI app.

    The US parallel: a growing threat

    While the BBC report focused on the UK market, the same trends are visible in the United States. The Coalition Against Insurance Fraud estimates that US insurance fraud exceeds $80 billion annually. The National Insurance Crime Bureau has issued multiple warnings about AI-generated imagery being used in auto, property, and workers' compensation claims.

    As AuthentiLens reported in our coverage of Equifax's fraud warning, US consumer fraud losses hit a record $15.9 billion in 2025. Insurance fraud, including auto, home, and health insurance claims, represents a significant portion of that total. The FBI's 2025 IC3 report documents another $893 million drained from Americans by AI-enabled fraud schemes over the same period.

    How the fraud works: the technical playbook

    The BBC report and supplementary coverage from Digital Trends, Honest John, and Insurance Business UK describe four distinct technical methods fraudsters are using.

    Method 1: Text-to-image generation for fake items

    Fraudsters use AI image generators like Midjourney, DALL-E, and Stable Diffusion to create photographs of items that never existed. A luxury watch. A designer handbag. A piece of expensive electronics.

    These images are submitted as evidence of ownership and value. The fraudster claims the item was stolen or damaged and seeks reimbursement. The insurer has no way to verify that the item ever existed because the fraudster never owned it.

    Detection challenge: High-quality AI-generated images can be indistinguishable from real photographs to the naked eye. Insurers must rely on forensic analysis, metadata inspection, and consistency checks. Our guide on how to tell if a photo is fake or AI generated breaks down the same artifacts Admiral's team learned to spot.

    Method 2: Image manipulation for damage exaggeration

    Fraudsters use AI-powered image editing tools to modify real photographs of real damage. A small scratch becomes a massive dent. A minor fender bender becomes a totaled car.

    These tools can:

    • Add realistic-looking damage that was not present
    • Remove evidence of pre-existing damage
    • Change the color, lighting, and perspective of existing damage
    • Clone damage from one part of the car to another

    Detection challenge: Unlike fully AI-generated images, manipulated images start with a real photograph. The underlying car, location, and lighting are real. Only the damage is fake. This makes them harder to detect.

    Method 3: Document generation for fake supporting evidence

    Fraudsters use AI language models (like ChatGPT, Claude, and Gemini) and document generators to create fake supporting documents:

    • Repair invoices from fictional garages
    • Police reports for staged thefts
    • Medical records for exaggerated injuries
    • Witness statements that never happened
    • Receipts for items that were never purchased

    Detection challenge: AI-generated documents can be indistinguishable from real ones. They use correct formatting, plausible language, and realistic dates and amounts. Fraudsters can generate hundreds of unique documents in minutes. This is a direct application of the failure mode we explain in our guide on what AI hallucinations are: language models confidently generating plausible content that has no basis in reality.

    Method 4: Synthetic identity plus fake claim

    The most sophisticated fraudsters combine synthetic identities with AI-generated claims. They create a fake person using a real Social Security number or National Insurance number, a fabricated name and address, and a plausible credit history.

    That synthetic identity then submits a fully AI-generated claim. Fake damage photographs. Fake documents. Fake witness statements. The insurer has no real person to investigate because the policyholder never existed.

    Detection challenge: This is the hardest form of fraud to detect because every piece of evidence is fabricated. There is no “tell” in any single document. The fraud is only visible in the pattern, and traditional fraud detection systems are not designed to look for patterns across synthetic identities. Video-based claims (think dashcam clips and deepfaked witness statements) raise an additional layer of risk that we cover in our guide on how to tell if a video is a deepfake.

    Why this matters beyond insurance

    The Admiral warning is about insurance fraud specifically, but the underlying pattern applies to every business that accepts photo or document evidence.

    Rental damage claims. A property manager receives photos of a damaged apartment. Are the photos real? Are the damages exaggerated? AI image manipulation can make a small stain look like a carpet replacement is needed.

    E-commerce returns. A customer submits a photo of a “damaged” item and requests a refund. The photo shows damage that was not present when the item shipped. AI can add realistic-looking damage to any product photograph.

    Expense reports. An employee submits a photo of a receipt for a business lunch. The receipt was generated by AI. The lunch never happened.

    Marketplace listings. A seller on Facebook Marketplace, eBay, or Craigslist posts photos of a valuable item. The item does not exist. The photos are AI-generated. The seller collects payment and disappears.

    Used car sales. A private seller posts photos of a car in excellent condition. The photos are AI-enhanced to hide rust, dents, and mechanical issues. The buyer discovers the truth after purchase.

    Every defensive perimeter that accepts photo or document evidence is now porous. The traditional authenticity signal, a photograph as proof, is as synthesizable as a name or a Social Security number.

    The future: what is coming in 2026 and beyond

    The 71% surge in detected fraud at Admiral is not a one-time spike. Industry analysts project that AI-enabled insurance fraud will continue to grow throughout 2026 and 2027, driven by:

    • Better AI tools. Each generation of AI image generators produces fewer artifacts, making detection harder.
    • Lower costs. As AI tools become cheaper or free, the barrier to entry for opportunistic fraudsters falls.
    • Organized crime adoption. Criminal gangs are investing in bespoke AI tools trained specifically to evade detection.
    • Cross-industry spread. Techniques developed in insurance fraud are migrating to e-commerce fraud, returns fraud, expense fraud, and marketplace fraud.

    The detection arms race will continue. But the fundamental asymmetry remains: fraudsters only need to succeed once. Defenders need to succeed every time.

    The only durable defense is to build verification into every workflow. Assume every photograph could be fake. Verify before you pay.

    How to protect yourself

    The AuthentiLens team has distilled the Admiral cases and the broader AI fraud pattern into six concrete protections for businesses that accept photo or document evidence.

    1. Treat photo evidence as verifiable, not verified

    Insurance claims, returns, expense reports, listings, marketplace transactions: every one of these is now a channel for AI-generated imagery as a fraud vector at industrial scale. Run incoming images through detection before relying on them as proof of anything. Do not assume that a photograph is real because it looks real.

    2. Look at the metadata

    Original smartphone photos carry EXIF data: timestamp, GPS coordinates, device model, lens information, and sometimes even the photographer's name. AI-generated images typically lack this data entirely or carry inconsistent, obviously fake metadata.

    If a “photo of a damaged car” has no EXIF data and no thumbnail preview, that is a red flag. If the GPS coordinates place the car in a different country than the policyholder's address, that is another flag. If the timestamp is impossible (before the policy was active, after the claim was filed), that is a near-certain sign of fraud.
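As a rough first pass, you can check whether a JPEG even contains an EXIF segment without any imaging library, by walking the file's internal segment markers. A minimal sketch (JPEG files only; remember that absence of EXIF is a flag to investigate, not proof of fraud, since many platforms strip metadata on upload):

```python
def jpeg_has_exif(data: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF (APP1) segment."""
    if data[:2] != b"\xff\xd8":            # every JPEG starts with an SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                # lost sync: not a valid segment start
            return False
        marker = data[i + 1]
        if marker in (0xD8, 0xD9, 0xDA):   # SOI / EOI / start of compressed data
            return False
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True                    # APP1 segment carrying EXIF data
        i += 2 + length                    # skip to the next segment
    return False
```

Real review pipelines would go further and read the actual tags (timestamp, GPS, device model) with a tool such as Pillow or exiftool, then cross-check them against the claim details.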

    3. Zoom in on the small details

    The BBC piece highlighted a watch with the kinds of inconsistencies AI image generators still leave behind:

    • Text that does not read right. Brand names with garbled letters, serial numbers that are not sequential, dates that are not possible.
    • Brand markings slightly off. Logos that are almost correct but have wrong proportions, missing elements, or extra details.
    • Pavé jewelry that looks too regular. Real diamonds set in metal have slight variations in spacing, angle, and depth. AI-generated jewelry is unnaturally uniform.
    • Background objects that do not make sense. A car “parked” in front of a wall that is also its own reflection.

    Real luxury items photographed in real light have asymmetries, imperfections, and contextual clues. AI-generated images are often too perfect.

    4. Check for object duplication and inconsistent lighting

    AI-edited photos often introduce artifacts that a careful reviewer can spot:

    • A second number plate where only one should exist (from a copy-paste edit)
    • A dent that appears in two different places on the same car (cloned damage)
    • Shadows that do not match the implied light source
    • Perspective inconsistencies where objects that should be parallel are not
    • Color mismatches where the same object has different colors in different photos

    A 30-second comparison against a reverse image search (Google Lens, TinEye) will catch most amateur fakes. If the same photo appears in multiple claims from different policyholders, that is a clear sign of fraud. Suspicious links inside a claim packet deserve the same treatment; see our guide on how to check if a link is suspicious before clicking anything inside a claim email.
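Cross-claim image matching is typically done with perceptual hashes, which change very little when an image is re-compressed or lightly edited. Here is a minimal difference-hash (dHash) sketch over plain grayscale pixel grids; a production system would use a library such as imagehash over real decoded images, conventionally at 9×8 pixels:

```python
def dhash(pixels: list[list[int]]) -> list[int]:
    """Difference hash: one bit per horizontally adjacent pixel pair.
    `pixels` is a grayscale grid (conventionally 8 rows x 9 columns)."""
    return [1 if left > right else 0
            for row in pixels
            for left, right in zip(row, row[1:])]

def hamming(h1: list[int], h2: list[int]) -> int:
    """Number of differing bits: a small distance means likely the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Two near-identical 'images': e.g. the same damage photo, lightly re-edited
photo_a = [[10, 80, 30], [200, 50, 50]]
photo_b = [[12, 78, 31], [198, 52, 49]]
distance = hamming(dhash(photo_a), dhash(photo_b))
```

A distance at or near zero between images submitted by supposedly unrelated policyholders is exactly the cross-claim signal described above.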

    5. For business owners, require a second photo with a date-stamped object

    This is the single most effective low-tech defense against AI-generated fraud. Require claimants to include a second photograph that shows the claimed damage or item alongside a date-stamped object:

    • A receipt from the same day
    • A same-day newspaper
    • A unique session code printed on a sticker and affixed to the item
    • A handwritten note with the date and the claimant's name

    This anchors the photo in real time and raises the cost of fakery by an order of magnitude. Fraudsters can generate an AI image of a damaged car. They cannot easily generate an AI image that also includes a specific, unpredictable, time-stamped object that they did not know would be required.
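The session-code variant works only if the code is cryptographically unpredictable at the moment the claim is opened, so it cannot appear in any pre-made fake. A minimal sketch using Python's secrets module (the alphabet and length are arbitrary illustrative choices):

```python
import secrets

# Alphabet omits 0/O and 1/I so the code is unambiguous when handwritten
ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"

def new_verification_code(length: int = 8) -> str:
    """Cryptographically random code the claimant must write on a note or
    sticker and include in a second, same-day photograph of the item."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Issue the code only after the claim is opened, and give it a short expiry, so there is no window in which a fraudster could know it in advance.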

    6. Scan it with AuthentiLens

    You are not expected to become an AI-image-detection expert. That is what AuthentiLens is for. Upload any image, photo, or document into AuthentiLens, and our detection engine will:

    • Flag AI-generation signals (inconsistent lighting, garbled text, unnatural symmetry, missing metadata)
    • Identify common manipulation patterns (cloned objects, swapped number plates, unrealistic damage)
    • Compare against known fraud databases (has this image appeared in other claims?)
    • Return a confidence score for AI-generated content

    We do this in seconds, before you approve a claim, process a return, or reimburse an expense.

    Reviewing a claim, a return, or a marketplace listing today?

    Run the photo and any supporting documents through AuthentiLens before you pay. You get 5 free scans. Verify before you trust.

    Scan it with AuthentiLens →
