What IT leaders need to know about AI-fueled death fraud

Computerworld • 26 Mar 2026, 12:00

Death is always an unpleasant topic, typically ignored until it is fully upon us. But for IT leaders, fraudsters who use fake death documents generated by AI to steal data and commit a wide range of other crimes are simply too dangerous to ignore.

These death frauds take two forms: tricking an enterprise into falsely believing that its customer is dead, and fraudulently leveraging an actual customer death. In both cases, the goal is to gain control of the account holder’s account and/or data by pretending to be their next of kin.

These crimes take advantage of two sides of technology. On one side, genAI lets the fraudster put powerful technology to a nefarious purpose, creating all-but-perfect replicas of various types of death certificates. On the other, the fraud works because of a gaping technology hole: the absence of standardized, continually updated government databases that organizations anywhere in the world could consult for official information about deaths and next of kin.

“Most customer identity systems assume the user who created the account will remain the person interacting with it,” said Sanchit Vir Gogia, chief analyst at Greyhound Research. “Authentication methods, password recovery, and multifactor verification are all designed around that assumption. When the individual behind the account dies, the system is suddenly dealing with a situation it was never designed to manage.”

Gogia added: “Fraud involving false death claims is not hypothetical. It is already happening, and the conditions that enable it are becoming more favorable for attackers.”

Valence Howden, an advisory fellow and distinguished analyst at Info-Tech Research Group, describes the problem as dangerous in part because so few enterprises consider it a serious threat.

“It’s a huge issue, as it is still seen as an outlier issue. Deepfake use has expanded because it is now so much easier to do,” Howden said. “There is a risk to [enterprise] reputation, legal, and compliance. I don’t think people realize how much it is happening now.”

Melody Brue, principal analyst for Moor Insights & Strategy, also sees death fraud as a massive enterprise problem.

“Post‑mortem identity abuse — real or fake — is a real operational risk for every digital platform, not just for banks, because bad actors can use account history, relationship graphs, or credential trails to socially engineer far larger frauds elsewhere,” Brue said. 

A new avenue for fraud

The threat of genAI deepfakes of identification documents has been discussed for years, but the specific use case of either faking a customer’s death or faking the credentials of the next of kin of an actual dead customer has gotten relatively little attention. That lack of focus is a major problem, given the many ways that fraudsters can use that access and/or information. 

The situation does depend on the enterprise’s industry, with highly regulated verticals — especially finance and healthcare — facing a different reality. Regulators treat money transfers and ultra-sensitive files such as health records seriously enough to have erected legal hurdles around transferring control of them. In the absence of a joint account holder or designated beneficiary, for example, a bank must freeze a user’s account upon an alleged death, and then it’s up to the courts to unfreeze it.

The problem is that other verticals also face serious threats without the same regulatory guardrails. Consider an airline or hotel chain where an attacker can steal loyalty points and redeem them for free flights or hotel rooms. Points may not be legal currency, but their loss is a financial hit for those enterprises just the same.

But the most likely scenario is that attackers use account control to stage highly credible social engineering campaigns aimed at people who may have interacted with the victim for years. Access to data such as a history of personal interactions, purchases, or travel could enable effective phishing attacks for yet more information — and outright theft.

Online accounts often hold a wealth of valuable data, such as the user’s home address, stored payment card information and other credentials, relationship data including the addresses of relatives, and photos tagged with people’s names. The fraudster could use that information to convincingly impersonate the account holder and con money, goods, or more data from their close contacts. This is especially likely if the victim is a high-value target, such as a prominent executive or a wealthy person.

And the fake identity documents problem can have a perverse reverse effect: it could make it far more difficult for the real user to convince the company of their identity and regain control of the account. As the cliché goes, fool me once…

This problem also goes beyond B2C companies. For a typical B2B enterprise, the tactic can be used to convince the enterprise that the primary contact for a business partner has passed away and that the fraudster has been hired to replace them. This attack is easier to foil, but not all enterprises routinely investigate such claims.

Verification hurdles

A big part of the problem is that the procedures that businesses rely on to verify a customer’s death are simply ill-equipped to handle today’s AI-intensified fraud, Greyhound’s Gogia noted. 

“Today it is possible to generate convincing certificates, legal letters, and administrative forms quickly and at scale. An attacker can produce multiple versions of a document and test them across different organizations until one passes review,” Gogia said.

“Another challenge is the absence of reliable verification infrastructure. Many enterprises assume there must be a central database that confirms whether someone has died. In reality, those databases are fragmented. Some are restricted to government agencies. Others are not updated quickly enough to support real-time verification. Cross-border verification is particularly difficult,” he said.

With international claims, “documents may originate from courts in different countries with unfamiliar legal formats. Death certificates and probate orders may be issued in different languages,” Gogia said. “Customer support teams rarely have the expertise required to authenticate those documents with complete certainty.”

Making matters worse is the extreme level of interconnectedness among various app accounts, said Justin Greis, CEO of consulting firm Acceligence and former head of the North American cybersecurity practice at McKinsey.

“There is an identity sprawl, where a user’s digital footprint is not just tied to one account,” Greis said, offering Google and Apple accounts as an example. Those identities are often used as the credential authentication for many partner sites, meaning that death details shared from them can unlock access to myriad other enterprise accounts. “It’s a systemic industry problem.”

And some enterprises can be hurt by their own well-intentioned customer service training, especially if that training calls for deference and courtesy toward death claims, Gogia said.

“Bereavement workflows are designed around empathy. When someone reports that a customer has died, the organization usually tries to make the process easier for the family. Customer service representatives are trained to respond with sensitivity rather than suspicion,” Gogia said.

“That approach is entirely understandable, but it also means these workflows are not always designed with adversarial scenarios in mind,” Gogia continued. “A fraudster who claims that a living customer has died can exploit that dynamic. If the organization accepts documents without verifying the event through another channel, the attacker may succeed in triggering account changes or obtaining sensitive information. The real account holder may not even realize what has happened until they attempt to access the account themselves.”

In many cases, companies simply don’t bother policing death requests because death fraud doesn’t significantly impact their bottom line. Even if other businesses or individuals are harmed as a result of the original company’s negligence, that first company is usually not held accountable. It boils down to what Howden calls “the linkage problem.” 

Let’s say that the enterprise in question is a major retailer. They get conned by the fraudster and turn over the account information to someone they incorrectly thought was the next of kin. Within a week, the fraudster uses the information gained from that account to steal money or credentials from a half-dozen people in the not-deceased victim’s circle. 

That’s where the lack of linkage comes into play. Authorities are highly unlikely to trace the data the thief used back to the successful con at the retailer, which means the retailer would likely avoid compliance pain or a lawsuit. Fixing the problem requires better linkage and accountability, so that every business is incentivized to clean up its own backyard.

Imperfect remedies

Some enterprises have opted to sidestep the issue by closing the account of a dead customer and sealing it, choosing to give neither access nor data to anyone, even a legitimate heir. 

Info-Tech’s Howden said he is “not a fan” of sealing or wiping the deceased’s account — even after positive confirmation has been received — because of legal and compliance issues. “Do you actually own the person’s information? What are the implications of the loss of that data to the family?” Howden asked.

Former federal prosecutor and current cybersecurity consultant Brian Levine, executive director of FormerGov, argued that if an enterprise chooses to have a policy that a deceased person’s account gets frozen or deleted, that enterprise must explicitly tell users that. “The terms of service should say that if the account holder dies or otherwise becomes incapacitated,” this is what happens to the account, Levine said. 

A good technique for trying to verify a death claim is to carefully examine the usage history and patterns, Levine said. For instance, let’s say the claim is that the person died on March 1. Was there any activity at all on that account after that date? 

“You could confirm that nobody has accessed for a certain amount of time. You also know what that account’s range of normal behavior is,” Levine said. If the user typically only logs in once every two months, he said, not logging in for three weeks may mean nothing. But if the user typically logs in twice a day, seven days a week, that activity halt could support the claim.
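
This heuristic is straightforward to automate. Below is a minimal sketch of the pattern check Levine describes, in Python; the function name, thresholds, and data shapes are illustrative assumptions, not any vendor’s implementation:

```python
from datetime import datetime
from statistics import median

def assess_death_claim(login_times: list[datetime], claimed_death: datetime,
                       now: datetime) -> str:
    """Weigh a death claim against the account's activity history."""
    login_times = sorted(login_times)

    # Any activity after the claimed date of death contradicts the claim outright.
    if any(t > claimed_death for t in login_times):
        return "contradicted: account activity after the claimed date of death"

    # Establish the account's normal cadence: the median gap between logins.
    gaps = [b - a for a, b in zip(login_times, login_times[1:])]
    if not gaps:
        return "inconclusive: not enough history to establish a pattern"
    typical_gap = median(gaps)

    # Silence only supports the claim when it far exceeds the normal gap.
    # A twice-a-day user going quiet for three weeks is meaningful; a
    # once-every-two-months user going quiet for three weeks is not.
    silence = now - max(login_times)
    if silence > typical_gap * 10:
        return "consistent: silence far exceeds the account's normal cadence"
    return "inconclusive: silence is within the account's normal variation"

# A daily-login account that goes silent after a claimed March 1 death:
logins = [datetime(2026, 2, d, 9, 0) for d in range(1, 28)]
print(assess_death_claim(logins, datetime(2026, 3, 1), datetime(2026, 3, 22)))
# -> consistent: silence far exceeds the account's normal cadence
```

The multiplier is a policy knob: set it higher and you get fewer false “consistent” results, at the cost of slower confirmation for genuine claims.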

Some enterprises are beginning to offer digital legacy planning options that let users designate who should have access to their account when they die. This approach could theoretically help reduce death fraud, but such options are typically lacking on several counts: used inconsistently, not required, not detailed enough, and rarely updated.

For example, these “death forms” often assume that all digital assets go to one designated person, noted Dean Saxe, the founder and co-chair of the Death and the Digital Estate (DADE) community group at the OpenID Foundation. “I might want some of that data to be destroyed, some should go to my wife, some to my kids and not my wife,” Saxe said.
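
Saxe’s objection maps naturally onto per-asset directives rather than a single beneficiary field. Here is a minimal sketch of what such a structure could look like; these names are hypothetical and do not represent any DADE or OpenID Foundation schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Action(Enum):
    DESTROY = "destroy"    # delete the data outright on confirmed death
    TRANSFER = "transfer"  # hand control to the named beneficiaries
    SEAL = "seal"          # retain the data but release it to no one

@dataclass
class LegacyDirective:
    asset: str             # e.g., "photos", "messages", "purchase_history"
    action: Action
    beneficiaries: list[str] = field(default_factory=list)  # empty unless TRANSFER

# One account, several directives, per Saxe's example: some data destroyed,
# some to the spouse, some to the children and not the spouse.
directives = [
    LegacyDirective("private_messages", Action.DESTROY),
    LegacyDirective("photos", Action.TRANSFER, ["spouse"]),
    LegacyDirective("financial_records", Action.TRANSFER, ["child_1", "child_2"]),
]
```

Even a structure like this only answers the “not detailed enough” criticism; the “rarely updated” one requires the enterprise to prompt users to review their directives periodically.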

Nader Henein, a Gartner VP analyst, said the typical bureaucratic procedures used by many enterprises can prove to be a good thing when it comes to death fraud. “It takes a long time for them to work through the process, and most fraudsters are not willing to wait 60 days,” Henein said. 

But a problem crops up as initially strict policies get weaker over time. “Most organizations tend to set up a process for this the first time it happens. The process starts as very cumbersome, but then it gets watered down to the bare minimum,” Henein said. 

That’s why it’s important for IT to work with HR, customer support, and other company leaders to develop and codify customer death policies that can’t be circumvented.

AI agents and delegated authority

Mike Kiser, director of strategy and standards at SailPoint and another co-chair of DADE, said that autonomous agentic systems both parallel and complicate the death fraud problem.

“The current approach is impersonation, [where] survivors use the deceased’s passwords. Some AI agents operate the same way, reusing credentials to act as someone rather than on behalf of someone,” Kiser said. “This is insecure and presents legal problems.”

There’s a better way, Kiser noted: “OAuth Token Exchange exists as a technical solution that enables proper delegated authority [in that it] proves Bob is authorized to act for Alice, not just that Bob has Alice’s password. But it’s not widely adopted.”
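
For context, a token exchange under RFC 8693 looks roughly like the following. This is a sketch, not SailPoint’s implementation; the endpoint URL and token values are placeholders, and client authentication is omitted for brevity, but the parameter names come from the RFC:

```python
import requests

TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token"  # hypothetical endpoint
ALICE_SUBJECT_TOKEN = "eyJ...alice"  # token representing Alice (placeholder)
BOB_ACTOR_TOKEN = "eyJ...bob"        # token representing Bob (placeholder)

# Delegation-style exchange: the subject token identifies the party being
# acted for (Alice), the actor token identifies who is acting (Bob). The
# authorization server can issue a token carrying an "act" claim naming Bob,
# so downstream systems see "Bob acting for Alice," not Bob holding Alice's
# password.
resp = requests.post(TOKEN_ENDPOINT, data={
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": ALICE_SUBJECT_TOKEN,
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "actor_token": BOB_ACTOR_TOKEN,
    "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "scope": "account.read",
})
resp.raise_for_status()
delegated_token = resp.json()["access_token"]
```

For digital estates, the open question Kiser raises still applies: deciding who can obtain a subject token for a deceased person is a legal and policy problem the protocol itself does not solve.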

A related issue is that agentic systems “can now create posthumous avatars and deepfakes of deceased individuals, raising questions about consent and control,” Kiser said. “Unauthorized re-creations have already generated legal disputes, yet no frameworks exist for people to specify whether, or how, their likeness should be used after death.”

This also raises questions about how identity authentication systems today handle autonomous agents. 

“Moving from delegated access, as in having credentials, to delegated authority requires work on multiple fronts. For AI specifically, even with proper delegation, how do you prove an AI avatar was authorized to exist in the first place?” Kiser asked. “There’s no legal framework for what constitutes consent, especially posthumous consent. It’s crazy murky at best.”

Developing and implementing international standards for delegated authority would help address both issues. “Building delegated authority infrastructure for AI agents would solve digital estates, too,” Kiser said. “It is the same fundamental problem.”
