This AI Voice Is Convincing People It’s Their Dead Relative — And That’s Scaring Everyone


A viral app uses AI voice cloning trained on short audio samples to recreate voices of deceased loved ones. While marketed as “comfort tech,” many users say the experience feels deeply unsettling. Psychologists warn of emotional dependency — but downloads continue to skyrocket.
In 2025, artificial intelligence has moved into one of the weirdest and most emotionally loaded corners of the internet: convincing people they’re hearing the voice of a loved one who has passed away. This isn’t sci-fi anymore; it’s happening right now, and it’s forcing a global reckoning over technology, grief, consent, and emotional manipulation.
AI voice-cloning tools have become astoundingly powerful. Companies such as Respeecher, a Ukrainian startup specializing in speech synthesis, enable one person to speak in the voice of another, capturing emotional cues, tone, and cadence to produce speech that sounds eerily real. It’s no longer about generic robot speech — these systems can mimic nuanced human intonation so convincingly that even close family members have trouble distinguishing fake from real.
Coupled with chatbots and digital avatars known as “deadbots” or “griefbots,” which use machine learning to create interactive digital representations of deceased individuals, these technologies are reshaping how families remember — and interact with — those they’ve lost. A deadbot can be a virtual assistant, animated avatar, or text-to-speech entity that responds in the personality and style of someone who’s no longer alive.
This convergence of grief and generative AI has both inspired and alarmed people around the world. For some, hearing the voice of a late spouse or parent has been strangely comforting. Others report deep discomfort — describing a surreal encounter that blurs memory and reality. One recent Reuters feature described people interacting with digital recreations of loved ones using voice clones and avatars, sometimes on significant anniversaries or in moments of emotional need.
But while the technology can feel like a “second chance,” it also opens the door to manipulation and grief exploitation. Unexpected calls or messages that appear to come from a lost loved one can trigger intense emotional reactions — and those same tools are increasingly used for malicious deepfakes and scams. In the U.S., for example, scammers using AI-cloned voices have tricked elderly victims into sending money by impersonating their children — showing just how convincing this tech can be and how easily it can be abused.
Experts warn that existing laws have struggled to keep pace with these developments. There’s no comprehensive framework to regulate whether an AI system can legally “speak” in a dead person’s voice — or to determine who controls that voice after death. Artistic use is one thing, but when these voices interact with the public without consent or clear disclosure, ethical boundaries begin to crumble.
Commercially, voice cloning is booming beyond grief tech. From Hollywood dubbing to text-to-speech services for content creators, AI voices — both synthetic and celebrity-inspired — are becoming widespread. Some services even allow licensing of famous voices for ads or narration, raising questions about ownership and posthumous rights.
People have always tried to remember the dead: through photos, letters, or memorials. But AI doesn’t just remember the deceased; it recreates them. Hearing a voice that sounds like someone who no longer exists, especially when it can talk back, isn’t just uncanny; it forces us to ask whether we’re preserving memory or manufacturing illusions. In the age of AI, the line between memory and imitation is legally thin and emotionally fraught, and that’s what makes this trend so unsettling and so viral.


