
Half of Businesses Hit by AI Identity Fraud at $2.2M Per Attack


What happened

More than half of businesses across the United States, New Zealand, and Australia have been hit by AI-generated identity fraud, at an average cost of $2.2 million per attack, according to a new survey of 1,000 organizations released in May 2026 by Lumin, a Christchurch-based document-workflow company.

The CEO of the firm that ran the survey says he is one of the targets.

“Scammers impersonate me to my staff and target our accounts team with fake invoices. AI has sharpened these fraud tactics to the point where they directly threaten the trust that keeps our business ecosystem interconnected and operating smoothly.”

Max Ferguson, founder and chief executive of Lumin

The findings were reported on May 6, 2026, by RNZ (Radio New Zealand) based on Lumin's full report, titled “Digital Identity in Business: The Threats, Impact, and Opportunities.” This case file examines the survey findings, explains why agreement workflows have become a goldmine for AI-powered fraud, and provides actionable protections for small and mid-sized businesses.

The prevalence: more than half of businesses affected

According to the survey, more than half of businesses across the three countries have experienced AI-generated identity fraud. The fraud takes multiple forms, from CEO impersonation emails and fake invoices to voice clones, deepfake video calls, and fabricated vendor identities.

The survey found that AI-generated identity fraud is not a niche problem affecting only large enterprises. Small and mid-sized businesses are being targeted at similar rates, and they often have fewer resources to detect and respond to attacks. For the broader fraud industrialization context, see our analysis of how AI fraud has become industrialized, per Equifax's warning.

The cost: $2.2 million per attack

The survey found that the average cost of an AI-generated identity fraud attack is $2.2 million. (Note: The RNZ report did not specify whether this figure is in USD or NZD. Businesses should refer to the original Lumin report for exact currency and methodology.) The figure covers more than the stolen funds themselves, taking in remediation costs and follow-on business losses.

For comparison, the FBI's IC3 logged $2.77 billion in business email compromise losses across more than 21,000 incidents in 2024, at an average loss of approximately $132,000 per incident. The Lumin survey's $2.2 million per-attack figure is higher because it covers larger organizations and more sophisticated multi-stage attacks that go beyond a single wire transfer.

The investment gap: New Zealand lags

The geographic split inside the survey is striking:

Country          Planning to Increase Identity-Verification Investment
Australia        82%
United States    78%
New Zealand      67%

New Zealand businesses are being targeted at similar rates as their Australian and US counterparts, but fewer are planning to invest in defenses. Ferguson's warning is direct: this gap will be exploited. New Zealand's National Cyber Security Centre (NCSC) and the Australian Cyber Security Centre have both reported sharp increases in business email compromise and identity fraud reports.

The reputational consequence

The survey found that 69% of New Zealand businesses are now less willing to work with a partner that has recently had an identity-fraud incident. The cost of identity fraud is not limited to the direct financial loss; it also includes the loss of future business. A company known to have weak identity verification may find that other businesses refuse to contract with it, creating a compounding spiral.

Agreement workflows: the goldmine for AI fraud

The Lumin survey's most important contribution is its focus on the agreement-workflow layer: the moment a business signs a contract, approves a vendor, or authorizes a payment based on what looks like a colleague's or customer's identity. The report states: “With cybersecurity-threatening AI super intelligence at our doorstep, vulnerable agreement workflows are a goldmine for fraudsters.”

Agreement workflows are vulnerable because they rely on trust across multiple departments, involve high-value transactions, operate under time pressure, and were designed for a pre-AI world. The controls were built to detect forged signatures, not AI-generated voice clones or deepfake video calls.

The CEO impersonation pattern

Ferguson described the most common attack vector: “Scammers impersonate me to my staff and target our accounts team with fake invoices.” This is the CEO impersonation attack (also called “fake invoice from the boss”) that drives the majority of business impersonation scam losses. AI has made it significantly more convincing: AI can generate the CEO's writing style from past emails, create a voice clone for a follow-up call, and produce a deepfake video for a “confirmation” meeting.

90% are concerned about workflow vulnerability

Ninety percent of surveyed organizations said they are concerned that critical workflows are vulnerable to AI-powered fraud. Almost every business surveyed, across three countries and across industries, recognizes that existing controls are inadequate. The protective layer most businesses still rely on (a name, a title, an email address, a typed signature, a video call) is now synthesizable at near-zero marginal cost. For how AI-generated fake personas are constructed, see our guide on how to spot a fake social media profile.

Why it matters

The Lumin survey lands at a moment when AI identity fraud has gone global, with documented cases across at least four major economies and no signs of slowing. Ferguson's central argument is that the framing of this threat is wrong at most organizations.

“Preventing identity fraud is no longer just an IT responsibility. Businesses need to acknowledge that it can strike any department and must be addressed at the boardroom level.”

Max Ferguson, founder and chief executive of Lumin

The international context: a global problem

The Lumin survey covers the US, New Zealand, and Australia, but AI identity fraud is a worldwide problem. As AuthentiLens reported in our coverage of Equifax's fraud industrialization warning, US consumer fraud losses hit a record $15.9 billion in 2025. The FBI's IC3 logged $2.77 billion in business email compromise losses across more than 21,000 incidents in 2024. In the United Kingdom, Admiral Insurance detected £86.8 million in AI-generated fraudulent claims in 2025, a 71% increase from 2024. And in New Zealand and Australia, NCSC New Zealand and the Australian Cyber Security Centre have both documented sharp increases in business email compromise and identity fraud reports.

The Lumin survey's finding that 82% of Australian businesses plan to increase identity-verification investment reflects the ACSC's warnings. The gap between Australia (82%) and New Zealand (67%) suggests that New Zealand businesses may be underestimating the risk despite facing it at similar frequency.

Why IT alone cannot solve this

IT departments are responsible for email security, network security, and endpoint security. They are not typically responsible for vendor onboarding verification, payment approval workflows, contract signing processes, or employee training on CEO impersonation patterns. These are business process issues, not purely technical ones. They require input from finance, legal, sales, and operations. Yet most organizations still route identity fraud to IT as a technical problem, leaving the procedural gaps unaddressed.

This is compounded by new attack techniques that bypass traditional email security entirely. As we reported in our coverage of hidden-text phishing that bypasses AI email filters, AI-generated phishing can now evade the detection layer, putting the burden on human recognition at exactly the moment employees are busiest and least focused on verification.

The “name, title, email” problem

Most businesses verify the signature on a contract, the name in an email signature, and the title on a LinkedIn profile. They do not verify that the person signing is who they say they are, that the person on the video call is real (not a deepfake), or that the person on the phone is real (not a voice clone). Each of these verification gaps was acceptable in a world before AI-generated identity at near-zero marginal cost. That world no longer exists.

Ferguson's specific recommendation cuts to the heart of it: “Industries have to move beyond simply capturing a signature and shift toward verifying the person signing.” For a concrete example of AI-generated fake identity used in a government impersonation case, see our report on a Chicago man who lost $69,000 to an AI-generated U.S. Marshals badge scam.

The reputational spiral

The survey's finding that 69% of New Zealand businesses are less willing to work with a partner that has had an identity-fraud incident creates a compounding dynamic. A business suffers an identity-fraud attack. It loses money (an average of $2.2 million). It also loses business partners who no longer trust it. It then has less revenue to invest in better defenses and becomes more vulnerable to future attacks. Breaking this spiral requires proactive investment before an attack occurs, not reactive remediation after.

The cost of inaction

The Lumin survey's $2.2 million per-attack figure is the average. Consider the scaling: a $2.2 million loss for a small business with $10 million in revenue is a 22% hit to the top line. For a mid-sized business with $50 million in revenue, it is still a 4.4% hit. In both cases, the reputational cost can last for years beyond the direct financial loss.

The cost of prevention, by contrast, is low. Multi-person approval workflows cost nothing to implement. Vendor verification standards require staff time but no software investment. E-signature identity-proofing features are often already included in existing subscriptions.

The boardroom checklist

Ferguson recommends that businesses examine their exposure at the boardroom level, treating payment approvals, signer verification, and vendor onboarding as board-level risks rather than IT tickets. The protections below are a practical starting point for that review.

How to protect yourself

The AuthentiLens editorial team has distilled the Lumin survey findings, Ferguson's recommendations, and broader B2B fraud research into six concrete protections for small and mid-sized businesses, followed by a seven-step recovery plan if your business has already been hit.

1. Treat any inbound request to change banking or payment instructions as suspect

No payment-instruction change goes live without a verbal call-back on a number from your existing vendor file, not a number from the email that requested the change. This single rule defeats CEO impersonation emails, vendor impersonation emails, compromised inbox attacks, and AI-generated phishing emails that bypass spam filters. Call the vendor back on the number you have always used. If they answer, confirm the change directly. If the number is different from what you have on file, that discrepancy itself is a warning sign.
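The call-back rule can be expressed as a simple gate that never applies a payment-instruction change automatically and always points the reviewer at the number already on file. This is an illustrative sketch, not a real system; the vendor record, field names, and phone numbers are all hypothetical.

```python
# Illustrative sketch of the call-back rule: every payment-instruction
# change is held pending a verbal confirmation on the number from the
# existing vendor file, never the number in the requesting email.
# All vendor data and field names here are hypothetical.

VENDOR_FILE = {
    "acme-supplies": {"phone": "+1-555-0100", "bank_account": "12345678"},
}

def review_change_request(vendor_id: str, new_account: str, contact_phone: str) -> dict:
    """Return a review record for a requested banking-detail change.

    The change is never applied here; the record tells the reviewer which
    number to call (always the one on file) and whether the number in the
    request itself differs from the file - itself a warning sign.
    """
    on_file = VENDOR_FILE[vendor_id]
    return {
        "vendor_id": vendor_id,
        "requested_account": new_account,
        "call_back_number": on_file["phone"],  # never the number in the email
        "phone_mismatch": contact_phone != on_file["phone"],
        "status": "pending_verbal_confirmation",
    }

result = review_change_request("acme-supplies", "99999999", "+1-555-0199")
print(result["call_back_number"])  # the number on file: +1-555-0100
print(result["phone_mismatch"])    # True - the request used a different number
```

The key design choice is that the function has no code path that applies the change: a human call-back is the only way out of the pending state.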

2. Require multi-person approval for wire transfers above a preset threshold

A single compromised inbox should never be able to move six figures. Implement a policy requiring two approvals for any wire transfer above a threshold that makes sense for your business ($5,000, $10,000, or $25,000). The second approver should use a different communication channel for verification (phone or in-person), should not share the first approver's email access, and should be trained to ask questions about suspicious requests. This rule catches the attacks that the first rule misses.
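The dual-approval rule reduces to one check: above the threshold, two distinct approvers are required. A minimal sketch, with the threshold figure chosen purely for illustration:

```python
# Minimal sketch of a two-person approval rule for wire transfers.
# The threshold and approver names are illustrative, not from the report.

APPROVAL_THRESHOLD = 10_000  # pick a figure that fits your business

def transfer_allowed(amount: float, approvers: set[str]) -> bool:
    """A wire above the threshold needs two *distinct* approvers."""
    required = 2 if amount > APPROVAL_THRESHOLD else 1
    return len(approvers) >= required

print(transfer_allowed(4_500, {"cfo"}))          # True  - below threshold
print(transfer_allowed(50_000, {"cfo"}))         # False - needs a second approver
print(transfer_allowed(50_000, {"cfo", "coo"}))  # True
```

Using a set of approver identities (rather than a count) is what enforces "distinct": the same compromised inbox approving twice still yields a set of size one.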

3. Verify the person signing, not just the signature

Modern e-signature platforms offer identity-proofing layers including driver's license capture with government-database verification, biometric checks (fingerprint, facial scan, or voice sample), and knowledge-based authentication against credit history or public records. Use these features on high-value contracts and vendor agreements. The extra step takes minutes. The cost of skipping it can be $2.2 million.

4. Build a vendor-onboarding verification standard

Before adding any new vendor to your accounts payable system, require: business-entity registration confirmation (verify the vendor is registered to do business with the Secretary of State or equivalent), a phone number that answers at the business's stated address, tax records that match (request a W-9 or equivalent and verify the Tax ID), and a live video meeting with the vendor representative. AI-generated vendor profiles are cheap to produce. A scammer can create a fake vendor website, fake LinkedIn profile, and fake email domain in an afternoon. The verification steps above make it significantly harder.
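The onboarding standard above works best as a hard gate: the vendor enters accounts payable only when every step is recorded as complete. A sketch, with step names mirroring the checklist (the identifiers are my own, not Lumin's):

```python
# Sketch of a vendor-onboarding gate: a vendor is added to accounts
# payable only when every verification step has been completed.
# Step identifiers are illustrative labels for the checklist above.

ONBOARDING_STEPS = (
    "entity_registration_confirmed",
    "phone_answers_at_stated_address",
    "tax_id_matches_w9",
    "live_video_meeting_held",
)

def onboarding_gaps(completed: set[str]) -> list[str]:
    """Return the verification steps still outstanding (empty = clear to add)."""
    return [step for step in ONBOARDING_STEPS if step not in completed]

done = {"entity_registration_confirmed", "tax_id_matches_w9"}
print(onboarding_gaps(done))
# ['phone_answers_at_stated_address', 'live_video_meeting_held']
```

Returning the list of gaps (rather than a bare yes/no) gives the reviewer a worklist: an afternoon's worth of fake website and LinkedIn profile cannot satisfy the phone and video steps.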

5. Train staff on the CEO impersonation pattern

Train your accounts payable team to recognize the pattern: the email comes from the CEO's name (check the actual sending domain), the request is urgent, the request is to pay a new or unfamiliar vendor, and the request says “do not discuss with anyone else.” The most reliable defense is a verbal code word for any urgent payment request from leadership. The CEO and the accounts payable team agree on a code word in person. A real CEO knows the word. A scammer does not. Also establish a documented escalation path that staff can use without fear of retaliation for slowing down a request that turns out to be legitimate.
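The red flags in the pattern above can be screened mechanically before the human check: does the sending address actually belong to the company domain, and does the body lean on urgency and secrecy language? A sketch, assuming a hypothetical company domain and keyword list; a real deployment would tune both.

```python
# Illustrative red-flag scorer for the CEO impersonation pattern:
# display name claims to be the CEO, but the sending domain is off,
# and the body uses urgency/secrecy language. The domain and phrase
# list are assumptions for the example, not a vetted ruleset.

from email.utils import parseaddr

COMPANY_DOMAIN = "example.com"
RED_FLAG_PHRASES = ("urgent", "wire immediately", "do not discuss",
                    "keep this confidential")

def impersonation_flags(from_header: str, body: str) -> list[str]:
    """Return the red flags found in a payment-request email."""
    flags = []
    _display, address = parseaddr(from_header)  # split name from address
    if not address.lower().endswith("@" + COMPANY_DOMAIN):
        flags.append("sending domain is not the company domain")
    lowered = body.lower()
    flags += [f"phrase: {p!r}" for p in RED_FLAG_PHRASES if p in lowered]
    return flags

msg = "URGENT: wire immediately to the new vendor. Do not discuss with anyone."
print(impersonation_flags('"Max Ferguson" <ceo@examp1e.com>', msg))
```

Note that the check parses the actual address, not the display name: "Max Ferguson" in the header proves nothing, while `examp1e.com` (digit one for the letter l) fails the domain test. Flags are advisory; the verbal code word remains the deciding control.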

6. Scan it with AuthentiLens

When you receive a suspicious vendor email, invoice, contract, or signing request from an unfamiliar party: paste the email into the Phishing Email Checker to analyze the domain, headers, and language for scam indicators. Upload the invoice or contract to the AI Image Detector to detect AI-generation signals and formatting inconsistencies. Paste the vendor's website URL into the Suspicious Website Checker to check for fake business indicators. Upload the vendor representative's photo to the Fake Profile Checker to detect AI-generated headshots. All of this takes seconds, before you add a fake vendor, before you sign a fake contract, before you wire funds to a scammer.

If your business has already been hit: seven recovery steps

  1. Stop the bleeding immediately. Freeze affected accounts. Contact your bank to reverse wire transfers if the window is still open (most banks have a narrow reversal window of hours to days). Change all compromised passwords.
  2. Preserve evidence. Save emails, invoices, contracts, and call logs. Take screenshots of fake vendor websites and LinkedIn profiles before the scammer takes them down. This evidence is needed for law enforcement and insurance claims.
  3. Report to law enforcement. In the US, file a complaint with the FBI's IC3 at IC3.gov. In New Zealand, report to NCSC New Zealand. In Australia, report to the ACSC at cyber.gov.au.
  4. Notify affected partners. If the fraud involved a vendor, customer, or business partner, notify them promptly. They may also be at risk from the same actor, and early notification limits their exposure.
  5. Review and update policies. Use the incident as an opportunity to implement the six protections outlined above. An attack that costs $2.2 million is a painful but clear signal that existing controls were insufficient.
  6. Consider cyber insurance. If you do not have cyber insurance, get it. If you do, file a claim. The average $2.2 million loss per attack is precisely what cyber insurance is designed to cover.
  7. Scan inbound vendor and executive communications going forward. Use AuthentiLens to screen suspicious emails, invoices, and vendor documents as a routine step. The FAQ covers how scanning works and what each tool analyzes.

For related coverage, see our reports on AI fraud industrialization (Equifax), AI insurance fraud up 71% (Admiral), and hidden-text phishing bypassing AI email filters.