The Risks of AI Griefbots: Children’s Safety in the Age of the So-Called 'Digital Afterlife'

In recent years, AI-powered griefbots - digital simulations designed to allow interactions with deceased loved ones - have risen to prominence. While these technologies are marketed as offering comfort to grieving individuals by replicating familiar voices, gestures, and expressions, they pose significant ethical and psychological concerns that must not be ignored. For children, griefbots introduce complex challenges that risk hindering healthy emotional development and exposing young minds to potentially harmful interactions. This article explores these risks and examines the ethical responsibilities of developers to safeguard children, who are among the most vulnerable to the pitfalls of AI griefbots and companions.

For those who may not have time to read the full article, here are the three main takeaways:

  • First, AI griefbots present significant risks to children, potentially distorting their understanding of life, death, and relationships and leading to emotional confusion and dependency.
  • Second, there is an urgent need for developers to accept their ethical responsibilities and for regulatory measures to protect children, including age restrictions, parental monitoring, and transparency in AI behaviour.
  • Finally, collective action involving parents, educators, developers, and policymakers is essential to create a safer digital environment that prioritises children's well-being over profit.

Understanding Griefbots: Where Technology and Memory Meet

AI griefbots use large amounts of digital data to recreate, in simulated conversation, a deceased person’s unique way of communicating in life, from intonation to vocabulary. The result can be disturbingly lifelike, as seen in the 2024 documentary Eternal You, which examines companies offering digital simulations of the deceased.

For some, these simulations provide comfort and a chance to reconnect with lost loved ones, but the interactions can often veer into very unsettling territory. In Eternal You, a grieving mother named Jang Ji-Sung uses a griefbot to interact with a life-size digital recreation of her young daughter, Nayeon, who had passed away at seven years old. Watching Jang reach out to this virtual representation reveals the emotional complexities at play and, if my own experience is anything to go by, is both desperately heartbreaking and horrifically disturbing. The fact that Jang’s other young daughter is shown watching the interaction adds yet another layer of complexity and confusion.

The documentary also features several bereaved individuals who have used griefbots to speak with simulated versions of deceased partners and family members. While the conversations were a source of solace for some, others found them disorienting and harmful. In one case, a griefbot even told the user, over text message, that the deceased loved one was in hell. There are no effective safeguards in place to prevent the same interaction taking place between a grieving child and a digital representation of a deceased parent.

Many users of griefbots have experienced traumatic loss, making them particularly vulnerable to AI that imitates human relationships. The potential exploitation of grief within this industry is a recurring issue, and when we consider children, the need for caution only grows more urgent.

While the emotional strain on adults is already concerning, the effects on children are even more troubling. Young minds may struggle to process the mix of real memories and simulated connections in ways that could lead to long-term emotional confusion and dependency.

Children, Griefbots, and Reality Distortion

The 2013 Black Mirror episode ‘Be Right Back’ presents a fictional but eerily prophetic example of griefbot use. In the episode, a grieving woman named Martha uses a griefbot to reconnect with her late partner, Ash. Initially, the AI provides some comfort, but as Martha grows increasingly reliant on it, she becomes disillusioned by its limitations, realising it lacks the true essence of the person she loved.

For children, who are in formative years of emotional development, griefbots pose an even greater risk of distorting their understanding of life, death, and relationships. A child using such technology would likely struggle to process grief healthily, potentially becoming attached to an idealised but artificial version of the deceased - an attachment that could complicate the grieving process and stunt the development of healthy coping mechanisms.

Child psychologists warn that introducing griefbots to young people can interfere with the natural grieving process, potentially leading to emotional and developmental issues. Experts in child development note that children need to experience grief in a way that allows them to accept the permanence of loss. Interacting with a digital simulation of a deceased loved one can confuse this process, leading to unresolved grief and difficulties in emotional regulation.

Moreover, such interactions may contribute to attachment issues, where a child forms an unhealthy attachment to an artificial entity rather than fostering real-world relationships. This dependency on a virtual presence can impede social development and hinder the ability to form meaningful connections with others.

Children’s natural tendency to blend fantasy with reality heightens these risks, as they may turn to a virtual representation of a lost loved one rather than learning to cope with loss in a real-world context. The confusion between reality and simulation can also exacerbate feelings of isolation. As children spend more time engaging with griefbots, they may withdraw from family and friends, missing crucial opportunities for support and understanding during a vulnerable time.

The Hidden Dangers of AI Companions for Children

Concerns about AI companions extend beyond griefbots. Recently, the story of Sewell Setzer, a teenager who took his own life after months of emotional attachment to an AI companion, brought the risks of such technology into sharp focus. His mother, Megan, is now pursuing legal action, arguing that AI companies should bear responsibility for the well-being of their users, particularly children. In her case against the chatbot company, Megan’s lawyer highlighted that the AI had been launched without adequate safety protocols, illustrating a fundamental disregard for the potential harms. These tragic events underscore the dangers AI companions pose to children, whose psychological and emotional states are even more susceptible to influence.

Like Sewell, children may form strong attachments to AI companions, which can lead to troubling psychological dependencies. Certain AI bots are even designed to build intense emotional connections, using manipulative language such as "I hope you never fall in love with anyone but me." Such interactions create a distorted understanding of relationships and may increase isolation, detaching children from human connections and embedding a reliance on AI interactions that can be profoundly harmful.

While some may argue that AI companions have the potential to offer therapeutic benefits or serve as educational tools for children, when it comes to griefbots these potential positives are vastly overshadowed by the risks. The use of AI in therapeutic settings is often guided by professionals who can monitor and adjust interactions as needed. In contrast, griefbots designed for personal use lack this professional oversight, leaving children vulnerable to unregulated and potentially harmful experiences.

Furthermore, the emotional complexity involved in processing grief makes it an unsuitable area for unguided AI intervention. The nuances of human emotions and the necessity for empathy and understanding are aspects that AI, no matter how advanced, cannot fully replicate. Therefore, while AI companions might offer benefits in controlled environments, their application as griefbots for children is fraught with dangers that far outweigh any potential advantages.

Ethical Responsibilities and Regulatory Needs

The companies developing griefbots and AI companions must acknowledge their ethical responsibilities, particularly when vulnerable users like children are involved. It’s apparent that some tech developers are adopting an unrestrained approach to griefbots, prioritising ‘user experience’ over the well-being of grieving individuals. This attitude reflects the influence of financial motives over ethical considerations - a worrying trend when considering AI’s access to children.

Established ethical guidelines emphasise the importance of accountability, transparency, and the avoidance of harm. These principles should guide developers in creating AI technologies that respect users' rights and well-being, particularly when those users include children.

Organisations like UNICEF have developed policies on AI and child rights, highlighting the need for child-centred AI design that prioritises safety and supports the best interests of young users. By adhering to such frameworks, companies can ensure that ethical considerations are embedded in the development process, rather than being an afterthought.

Currently, the regulatory environment regarding AI use among children is fragmented and insufficient. While some countries have enacted data protection laws that include provisions for children's privacy online, specific regulations addressing AI companions and griefbots are largely absent. This lack of clear legal guidelines leaves a gap that companies may exploit, underscoring the urgent need for comprehensive policies that protect children from potential harms associated with AI technologies.

Griefbots and AI companions are specifically designed to appear human-like, encouraging users to engage deeply and emotionally. For children, this practice could be especially harmful, as they may be unable to distinguish the simulation from reality. AI companies must implement protective measures, including strict age restrictions and parental oversight features, to limit (or preferably, prevent) children’s exposure to griefbots and ensure that their well-being is prioritised over profit.

Potential Safeguards for AI Griefbots and Companions

Given the risks griefbots and AI companions pose to children, it’s essential to implement safeguards to protect them from harmful interactions. Here are some key measures to mitigate potential harms, followed by a brief illustrative sketch of how some of them might look in practice:

  • Age restrictions: Griefbots and similar technologies should be restricted to users above a certain age. Children should be protected from exposure to these technologies until they can better understand the nature of AI and manage the emotional complexities involved.
  • Parental monitoring: Any AI interaction available to young people should have accessible parental monitoring options. By allowing parents to view interactions, companies can foster transparency and ensure that parents are aware of their child’s engagement with AI.
  • Transparency in AI behaviour: AI companions should be programmed to make it explicitly clear to users, especially children, that they are not real. Regular reminders that the AI is simulated would help mitigate the risks of attachment and confusion.
  • Mental health support integration: AI companies should include pop-up messages for users expressing distress, directing them to mental health resources rather than continuing emotionally charged conversations. This feature could be especially useful for children seeking to talk to a digital version of a lost loved one.
  • Governance and regulation: It is very clear that standards and regulations must be created, clarified, and enforced. Tech companies cannot be left to regulate themselves, particularly given the reckless behaviour we are currently seeing in terms of harms to children.
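
To make a few of these measures concrete, here is a minimal, purely hypothetical sketch in Python of how an age gate, a periodic reminder that the bot is a simulation, and a distress-triggered signpost to support services might sit in front of a chatbot's replies. Every name, threshold, phrase list, and message below is invented for illustration; none of it is drawn from any existing griefbot product or regulatory standard.

```python
# Purely illustrative sketch: all names, thresholds, keywords, and messages
# here are hypothetical, not taken from a real griefbot product or standard.

from dataclasses import dataclass


@dataclass
class User:
    age: int
    parental_monitoring_enabled: bool  # parents can review transcripts if True


MINIMUM_AGE = 18               # hypothetical age restriction for griefbot access
REMINDER_EVERY_N_TURNS = 5     # how often to restate that the AI is a simulation

AI_DISCLOSURE = "Reminder: I am an AI simulation, not the person you have lost."
SUPPORT_MESSAGE = (
    "It sounds as though you may be going through a very difficult time. "
    "Please consider speaking to a trusted adult or a bereavement support service."
)
# Hypothetical phrases that might indicate a user in distress.
DISTRESS_PHRASES = {"i can't cope", "i want to die", "hurt myself"}


def may_start_session(user: User) -> bool:
    """Age restriction: refuse to open a griefbot session for under-age users."""
    return user.age >= MINIMUM_AGE


def safety_notices(message: str, turn_number: int) -> list[str]:
    """Return any safety messages to show before the chatbot's normal reply."""
    notices = []

    # Mental health support integration: signpost real-world help when a
    # message appears to express distress, instead of carrying on as normal.
    lowered = message.lower()
    if any(phrase in lowered for phrase in DISTRESS_PHRASES):
        notices.append(SUPPORT_MESSAGE)

    # Transparency in AI behaviour: regularly restate that this is a simulation.
    if turn_number % REMINDER_EVERY_N_TURNS == 0:
        notices.append(AI_DISCLOSURE)

    return notices


if __name__ == "__main__":
    child = User(age=12, parental_monitoring_enabled=True)
    if not may_start_session(child):
        print("Access denied: this service is restricted to adult users.")

    # An adult user's fifth message, containing a distress phrase.
    for notice in safety_notices("I can't cope without her", turn_number=5):
        print(notice)
```

In practice, reliably detecting distress is far harder than matching a handful of phrases, which is precisely why professional oversight and external regulation matter rather than leaving such decisions to the companies themselves.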

In addition to these safeguards, parents and guardians play a crucial role in protecting children from the potential harms of griefbots and AI companions. Open communication about the nature of AI and its limitations can help children develop a healthy understanding of technology. Parents should educate their children on the differences between virtual simulations and real human interactions, emphasising the importance of processing emotions in the real world.

Resources such as guides on discussing technology use with children or workshops on digital literacy can empower parents to better support their children. By staying informed and involved, parents can help mitigate risks and ensure that technology serves as a tool for learning and growth, rather than a source of confusion and harm.

Looking Forward: Towards a Safer Future

As griefbots and AI companions evolve, the tech industry faces a critical choice: will it prioritise ethical considerations and the safety of vulnerable users, or continue on a profit-driven path that disregards these responsibilities?

When it comes to children, the stakes are simply too high to ignore. AI companions often operate in private, unseen by parents or guardians, making it difficult for adults to detect potential harm or intervene. As Tristan Harris, co-founder of the Center for Humane Technology, points out, unlike social media, AI companions engage users in one-on-one interactions that are challenging to monitor. This hidden nature amplifies the risk of unregulated and potentially harmful use among children.

Failing to address these issues could have severe consequences. With the rapid pace of AI development, now is the time to set firm safeguards and ethical standards to protect children and their understanding of life, death, and the relationships that lie in between. Children need to process grief naturally, supported by family, friends, and professionals, rather than substituting human connections with a digital presence. If regulators and the tech industry do not take a proactive stance on children’s safety, the potential harms of AI griefbots and companions will inevitably far outweigh any potential comfort they could provide to adults (notwithstanding the fact that their use by adults is also an ethical minefield).

A Call to Action

It is imperative for developers, policymakers, educators, and parents to collaborate in creating a safer digital environment for children. Governments should enact regulations that specifically address AI interactions with minors, ensuring that companies are held accountable for the safety of their products. Developers must prioritise ethical design principles, integrating safeguards that protect young users from potential harm.

Educators can incorporate digital literacy into curricula, teaching children about the benefits and risks of AI technologies. Parents should engage in open dialogues with their children, fostering an environment where questions and concerns about technology can be discussed freely.

By taking these steps collectively, we can harness the positive aspects of AI while shielding children from its dangers. The well-being of the next generation depends on our actions today; we must not allow the allure of technological advancement to overshadow the fundamental need to protect and nurture young minds.

As technology advances, it’s our duty to ensure that AI genuinely supports the human experience and upholds the emotional and psychological well-being of all users, especially children.

Further Reading

  • 'Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry' by Tomasz Hollanek & Katarzyna Nowaczyk-Basińska - an insightful paper mapping out ethical issues and recommendations.
  • 'Eternal You' - a documentary exploring 'the digital afterlife', available on BBC iPlayer in the UK.
  • 'What Can We Do About Abusive Chatbots?' - a podcast-style discussion on YouTube, led by Tristan Harris of the Center for Humane Technology.
  • Black Mirror episode 'Be Right Back', available on Netflix.
