Last Updated: February 04, 2026, 20:36 IST

A brief moment in the web series Mismatched 2 touched upon a question that is no longer confined to speculative fiction. In one scene, the character played by Prajakta Koli, alongside Rohit Saraf, showcases an artificial intelligence (AI) application that lets users receive messages and hold conversations in the voice of someone who has died. As the presentation unfolds, the ethical weight of the innovation becomes apparent: if technology allows continued interaction with the dead, the boundary between remembrance and emotional closure begins to blur.
While the series treats the idea as an emotional dilemma, the real world is rapidly catching up. Technology companies are now developing tools that promise continued interaction with people even after death, allowing loved ones to hear familiar voices, seek advice, and simulate conversations long after a person is gone. What once seemed speculative is now being discussed as an emerging reality of 2026.
According to a report published by The Conversation, artificial intelligence is enabling what researchers describe as a "digital afterlife". Often grouped under the term "grief tech", these systems rely on so-called "deathbots" or "digital twins": AI-powered replicas trained on a person's voice notes, videos, photographs, text messages and memories. The result is a chatbot or avatar that mimics speech patterns, personality traits and conversational habits, allowing the deceased to appear digitally "immortal".
The emotional appeal of such technology is obvious. For families struggling with loss, these digital versions can offer comfort and a sense of continuity. Yet the legal and ethical questions surrounding grief tech remain largely unresolved, particularly when it comes to ownership, consent and misuse.
A central concern is whether an individual legally owns their identity after death. In many jurisdictions, including Australia and to a large extent India, the law does not clearly recognise a person's voice, face or personality as property. While copyright protects creative works such as books or films, it does not extend to one's presence, likeness or manner of speaking. This raises a crucial question: if an AI system generates responses using data derived from a person's life, who owns that output – the individual, their family, or the company that built the algorithm?
India offers limited but evolving safeguards. In recent years, several public figures have invoked "personality rights" to protect their identity from unauthorised commercial use. These rights restrict the exploitation of a person's name, image, voice or likeness without consent. Celebrities including Karan Johar, Aishwarya Rai Bachchan, Abhishek Bachchan, Anil Kapoor, Jackie Shroff, Sadhguru and Arijit Singh have approached courts to secure such protections. However, for ordinary citizens, similar legal clarity is still absent.
There are also concerns about reputational harm. AI systems are known to evolve over time, sometimes producing responses that diverge from their original training data. If a digital twin begins expressing views its human counterpart never held, or behaves inappropriately years after their death, accountability becomes murky. Questions about liability, whether it lies with the family, the platform or the developers, remain unanswered.
Mental health experts add another layer of caution. Regular interaction with AI representations of deceased loved ones, psychologists warn, may prolong grief rather than ease it, potentially creating emotional dependence and making closure more difficult. The commercial risks are equally troubling. When users sign up for grief-tech services, they often hand over deeply personal data. If a company shuts down, merges or is acquired, there is little transparency on whether a person’s digital avatar could be transferred, repurposed or even monetised.
As the concept of a digital afterlife moves closer to everyday use, the absence of comprehensive regulation leaves users navigating uncertain terrain. While grief tech may offer solace to some, legal experts warn that until governments introduce clear and enforceable laws, entrusting one’s memories, voice and identity to private companies remains a deeply risky proposition—one where comfort and control may not always go hand in hand.
First Published: February 04, 2026, 20:36 IST
Can AI Keep You 'Alive' Even After Death? All About The Rise Of 'Digital Twins'