AI Griefbots: Protecting Children in the Era of Digital Afterlife
Losing a loved one is one of the most difficult experiences we face. Now, artificial intelligence is offering new ways to remember and even “interact” with the deceased.
These developments raise profound ethical and psychological questions - particularly for children, who may not fully understand the boundary between reality and simulation.
While some argue that griefbots provide comfort through "continuing bonds" with lost loved ones, child psychologists, grief counsellors, and AI ethicists warn that they may disrupt the natural grieving process.
Currently, griefbots are being deployed without meaningful safeguards or a full understanding of their long-term impact - particularly for children.

The Technology Behind Griefbots
Griefbots are AI-driven chatbots that simulate conversations with deceased individuals using personal data such as emails, text messages, social media posts, and even voice recordings. Some services market these tools as a way to preserve legacies or "keep the conversation going" long after a loved one has passed.
These systems rely on large language models (LLMs), which analyse linguistic patterns and predict likely responses. Some advanced versions incorporate sentiment analysis to adjust responses based on the user's emotional state. Future iterations may integrate visual and VR elements, creating even more immersive experiences.
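To make this mechanism more concrete, the sketch below shows how such a pipeline might fit together, written in Python. It is a purely illustrative toy under stated assumptions, not any provider's actual system: the persona samples, the keyword-based sentiment check, and the call_llm function are all hypothetical placeholders.

    # Illustrative griefbot pipeline: persona data + sentiment check + LLM call.
    # Every name below is hypothetical; a real service would use a hosted
    # language model and a proper sentiment classifier, not these stand-ins.

    PERSONA_EXAMPLES = [
        # In a real system this would be mined from the deceased person's
        # emails, texts, and posts; these lines are invented placeholders.
        "Don't worry so much, love. It always works out.",
        "Remember our Sunday walks by the river?",
    ]

    def estimate_sentiment(message: str) -> str:
        """Toy stand-in for the sentiment-analysis step."""
        sad_words = {"miss", "sad", "lonely", "cry", "gone"}
        return "distressed" if any(w in message.lower() for w in sad_words) else "neutral"

    def build_prompt(user_message: str) -> str:
        """Combine persona samples and the user's apparent mood into one prompt."""
        tone = estimate_sentiment(user_message)
        samples = "\n".join(PERSONA_EXAMPLES)
        return (
            "You are simulating a specific person, based on these writing samples:\n"
            f"{samples}\n"
            f"The user currently seems {tone}; adjust your tone accordingly.\n"
            f"User: {user_message}\n"
            "Reply in the person's voice:"
        )

    def call_llm(prompt: str) -> str:
        """Hypothetical LLM call; a real service would query a model here."""
        return "(model-generated reply in the deceased person's style)"

    if __name__ == "__main__":
        print(call_llm(build_prompt("I miss you so much today.")))

Even in this toy form, the design incentive is visible: the pipeline's only objective is to produce replies that sound like the deceased and match the user's mood. Nothing in it represents accuracy, consent, or the user's long-term wellbeing.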
While this technology is advancing rapidly, it also raises significant ethical and psychological concerns. The ability to generate a seemingly “living” digital version of the deceased may provide comfort for some, but it also blurs the line between memory and simulation - raising difficult questions about consent, psychological impact, and potential exploitation.
Psychological Risks for Children
For children, whose cognitive and emotional frameworks are still developing, griefbots pose unique and potentially serious risks:
• Confusion about death and permanence – Young children often struggle to grasp the finality of death. If they are able to continue speaking to a deceased loved one through AI, it may reinforce the false belief that death is temporary or reversible, disrupting their ability to process loss in a healthy way.
• Disrupting natural grief processes – Psychological research highlights the importance of accepting and processing grief. If a child becomes emotionally dependent on a griefbot, they may avoid facing the reality of loss, potentially leading to prolonged grief disorder or difficulty forming emotional resilience.
• AI-generated memory distortions – AI models do not “remember” the deceased in the way a human does. Instead, they generate responses based on statistical probability, sometimes fabricating details or reinforcing distorted narratives. This could create confusion for a child trying to make sense of their loss.
• Unrealistic dependency and attachment risks – AI griefbots provide predictable, comforting responses, something real human relationships cannot always offer. This could lead children to develop unrealistic expectations of emotional support, affecting their ability to build healthy, reciprocal relationships in the future.
Ethical Concerns and Lack of Safeguards
Even for adults, griefbots present significant ethical dilemmas, including issues of consent, data security, and emotional manipulation. However, the risks are even greater for children, who are more vulnerable to influence and emotional distress.
One of the most pressing concerns is the commercialisation of grief. Some griefbot providers use a subscription-based model, meaning access to an AI-generated loved one could be removed if payments lapse. Others could monetise interactions by subtly introducing product recommendations.
This type of emotional exploitation is troubling, yet without meaningful regulation, it remains a real possibility.
Current Regulations: Are They Enough?
At present, griefbots exist in a largely unregulated space. Some AI legislation, such as the EU AI Act, provides a degree of general oversight, for example by requiring AI systems to disclose their artificial nature.
Some platforms do include disclaimers - such as periodic reminders that the user is interacting with an AI. However, research into AI companion tools suggests that when people form strong emotional bonds with AI, they often ignore such reminders, treating the system as though it possesses real consciousness. The effectiveness of these disclaimers remains an open question.
Many experts argue that these measures fail to account for the profound emotional and psychological impact griefbots may have - particularly on children - and note that in most other jurisdictions such regulation is absent altogether.
What Needs to Happen Next?
If griefbots continue to evolve, they must be developed with ethical oversight and meaningful restrictions - particularly when it comes to children.
While some AI applications may offer benefits in bereavement support for adults under professional guidance, the risks for children are far too great to ignore.
Key Recommendations:
✔ Restrict griefbots for children – Until long-term research demonstrates their safety, minors should not have access to griefbots.
✔ Develop psychological safeguards – Any future use of griefbots should be guided by trained professionals to assess their impact on an individual’s grieving process.
✔ Establish consent frameworks for AI-generated digital identities – The creation of griefbots should require clear, informed consent. No individual should be digitally “resurrected” without prior approval.
✔ Prohibit commercial exploitation – AI griefbots should not engage in targeted advertising or monetise interactions, particularly in ways that exploit vulnerable individuals.
✔ Incorporate AI ethics into bereavement counselling – Mental health professionals should play a central role in shaping how griefbots are used and ensuring they do not interfere with healthy grieving.
Final Thoughts
The debate over griefbots is not just a technological issue - it is a question of human dignity, emotional well-being, and ethical responsibility.
While AI may have a role to play in bereavement support in the future, it is far too soon to assume these tools are beneficial - particularly for children. Without meaningful research, regulation, and oversight, we risk allowing griefbots to shape the way children process loss in ways that we do not yet fully understand.
Until research proves otherwise, these tools should be withheld from minors. Grief is an inevitable part of life, and children must be given the opportunity to process it in ways that promote healing, not dependency on artificial simulations.

Further Material:
You may be interested in some of the following resources to learn more; all are intended for adults, not children:
- Hollanek, T., & Nowaczyk-Basińska, K. "Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry", Philosophy & Technology (2024) (link)
- 'Eternal You' documentary (also on BBC iPlayer, UK)
- 'What Can We Do About Abusive Chatbots?' - a podcast-style discussion on YouTube, led by Tristan Harris of the Center for Humane Technology.
- 'Be Right Back', Black Mirror (Netflix)