The Ethical, Social, and Political Implications of “Griefbots”

The Technology That Can Change How We Mourn

Jonathan Basom
9 min read · Apr 28, 2021

Introduction

In recent years, artificial intelligence (AI) has taken on a central role in society, assisting people both commercially and personally. By approximating human reasoning, AI allows machines and programs to stand in for human beings across an assortment of jobs and tasks. AI can do more than resemble a generic human mind, however: developers are now creating algorithms that allow machines to imitate specific individuals. Companies and developers are currently attempting to build AI chatbots of deceased individuals called “griefbots.” These bots analyze data left behind by the deceased, such as text messages, emails, and social media posts, and attempt to replicate their communication patterns, impersonating the “deceased’s cadence, tone, and idiosyncrasies.”¹ Grieving users can then interact with these bots as if they were communicating with their loved ones, potentially easing their suffering and distress.

Figure 1. Illustration of a Chatbot

In 2015, programmer Eugenia Kuyda set out to create a text-based simulation of a conversation with her best friend Roman Mazurenko, who had recently died in a car crash. She collected the text messages they had exchanged over the years, filtering out those that were too personal. Kuyda then fed the messages to a neural network, producing a chatbot that would respond to her text messages as if she were texting her friend.²
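Kuyda’s pipeline is described only at a high level in the source. As a rough illustration of the general idea, the hypothetical sketch below builds a far simpler retrieval-based bot: instead of a neural network, it answers a new message with the stored reply whose original prompt is most similar. Every message and name here is invented for illustration.

```python
# A minimal, hypothetical sketch of a retrieval-based "griefbot" --
# not Kuyda's actual neural-network approach, which the source only
# describes at a high level. Given a history of (prompt, reply) pairs
# from a person's messages, it answers a new message with the reply
# whose original prompt is most similar.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented message history: things said *to* the person, and how they
# replied. In practice this data would be filtered for privacy first.
history = [
    ("how was your day", "pretty good, spent it sketching in the park"),
    ("miss you", "miss you too, come visit soon"),
    ("what should i read next", "try bradbury, you'd love him"),
]

prompts = [p for p, _ in history]
vectorizer = TfidfVectorizer()
prompt_vectors = vectorizer.fit_transform(prompts)

def reply(message: str) -> str:
    """Return the stored reply whose prompt best matches the message."""
    query = vectorizer.transform([message])
    scores = cosine_similarity(query, prompt_vectors)[0]
    return history[scores.argmax()][1]

print(reply("i miss you so much"))  # -> "miss you too, come visit soon"
```

A production system like Kuyda’s generates new sentences rather than retrieving old ones, but the core dependence on the deceased’s message history is the same, which is what drives the privacy questions discussed below.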

Figure 2. Sample Conversation with Kuyda’s Chatbot

Several years later, in 2019, data scientist Muhammad Ahmad completed a griefbot of his own after the passing of his father. Troubled that his future children would never meet their grandfather, Ahmad collected a wide range of his father’s data, from audio and video recordings to text messages, letters, and transcripts. Drawing on his prior experience modeling human behavior, Ahmad developed a text-based program, similar to the one created by Kuyda, that emulated messages from his father.¹

Ethical Analysis

The underlying ethical question regarding griefbots is: does the privacy of the dead take precedence over the needs and desires of the living? Ethicist John Ladd argues that the most important type of responsibility is moral responsibility, which concerns the future and “the duty each one of us has to watch out for what may happen to others or to oneself.” Indeed, he goes on to say that “there are some things that everyone is responsible for”³. Technologists have a responsibility to protect individuals, living and dead, as well as society as a whole. This includes protecting privacy and keeping users safe both physically and mentally. In other words, technologists must weigh the long-term implications of griefbots for both individuals and society.

Microethics

As stated by Joseph Herkert, microethics is “concerned with individuals and the internal relations of the engineering profession”⁴. Regarding griefbots, the primary individuals at stake are the deceased and their grieving friends and family, and each case needs to be analyzed in its own context. While researchers such as Pamela Rutledge believe that griefbots can alleviate the initial distress of death by allowing users to “make contact in a way that feels meaningful”¹, others, such as Elizabeth Tolliver, Assistant Professor of Counseling at the University of Nebraska Omaha, fear that users could develop addictions. Instead of using the technology to ease the separation from a loved one, users may stay attached or even grow closer to that person⁵. Indeed, because chatbots imitate human conversation, they “encourage, even entice, customers to engage with them in a reciprocal human manner”⁶. Users of griefbots would already be in a fragile state of mind following the death of a loved one, and those who are especially unstable or in denial may be unable to distinguish reality from fantasy. Dr. Michael Grodin, a psychiatrist and Professor of Health, Law, Ethics, and Human Rights at the Boston University School of Public Health, notes that funerals exist to provide a sense of finality when somebody dies. Technology that imitates the deceased’s behavior, by contrast, could “reinforce fantasies in which the dead still exist”⁷. Developers therefore bear Ladd’s moral responsibility for the mental safety of users, which requires anticipating the full range of responses to griefbots.

The rights and privacy of the dead are just as important as the needs of the living and must also be considered before a griefbot is built. Because griefbots are designed using many pieces of personal data, some of that data could reveal previously unknown information¹. If people keep certain matters private while they are living, they likely would not want those secrets disclosed after they die; such revelations could alter their legacies and how they are remembered. According to the “Rights Approach” to ethical standards, all humans have the ability to freely choose what to do with their lives, and technologists must respect this dignity. People should therefore have the right to determine which data, if any, can be used to develop a griefbot.

Another challenge arises when building algorithms that depend on a person’s digital footprint. This data alone is not enough to create an accurate griefbot, since people share only certain parts of their lives through platforms like social media. Many griefbots would also be programmed to incorporate the person’s interactions with other online users, and the griefbot could even learn from, and be influenced by, the data provided by the mourner⁸. These variables could cause the griefbot to behave in ways that deviate from the actual person, potentially distorting memories and perceptions of who they were. Alongside the dead’s right to privacy, the “Rights Approach” also includes the mourners’ right not to be injured, and tampered memories are a form of injury.

In his book The Smart Enough City, Ben Green discusses the problem of “tech goggles”: the misperception that every problem is solvable with technology, when in fact many problems have underlying causes that are not technological at all⁹. Likewise, writer Dalvin Brown notes that

“humans are also highly complex and influenced by experiences that aren’t always shared via text messages.”⁵

Thus, the true underlying problem in this case is that humans are social creatures, and when somebody dies, the bereaved feel “a desire to return to a relationship or a connection which is irrevocably gone”⁷. No matter how advanced and accurate griefbot technology becomes, it cannot solve that underlying problem: the dead cannot be brought back.

Macroethics

In developing her griefbot, Kuyda took special care to filter particularly personal messages out of the data, and the resulting bot genuinely comforted her. But what happens when griefbots are applied on a larger scale? In December 2020, Microsoft was granted a patent for chatbots that would “reincarnate” people⁵. Herkert states that macroethics concerns the “collective, social responsibility of the engineering profession and societal decisions about technology”⁴. If large corporations such as Microsoft begin developing griefbots, engineers will need to consider societal impacts such as shifts in power and changes in how humans live their daily lives.

Figure 3. Microsoft Was Granted a Patent for Its Own Version of Griefbots in December 2020

In his article “Do Artifacts Have Politics?”, Langdon Winner argues that politics is the arrangement of power and that all technological devices embody underlying politics, whether implicitly or explicitly. Winner uses Robert Moses’s parkway designs in New York, built to restrict access for certain social classes, to show that technological designs can be shaped by motives such as control¹⁰. The widespread development of griefbots could likewise shift power. Companies would require users to disclose a great deal of personal information in order to create a convincing chatbot, and even an ordinary interaction with a griefbot would carry a power imbalance: when users talk to bots, they provide information, increasing not only the bot’s knowledge but also the stock of data the company possesses⁶. This is especially troubling if users entrust griefbots with new intimate information merely because the bot resembles a loved one, allowing organizations to collect further information for other purposes. Put simply, a company that creates a griefbot collects not only the deceased person’s data but also the users’ data, while the users gain no new knowledge themselves.

People may also live differently if they know that all of their data will be stored and used to represent them in the future⁸. For instance, they may feel constrained in how they express themselves through text messages if they are aware that every exchange will be gathered and analyzed. People could thus be influenced, or even feel forced, to live their lives in fixed ways in order to shape the impressions their data will leave.

Furthermore, creating a reasonably accurate griefbot would be expensive: standard chatbots today already cost between several hundred and several thousand dollars a month⁵. Only certain members of society could afford them, creating yet another divide between social classes. And people who cannot afford or access technology during their lifetimes may not leave a digital footprint sufficient to create a griefbot at all. With an expected 1.4 billion profiles left behind on Facebook by 2100⁸, the majority of people will have some type of digital footprint, and anyone who does not will belong to a minority. Engineers therefore need to consider how these disparities can affect the mentality of the population. Herkert states that “collective action can often offset corporate influences”⁴: if developers sense that corporations are mishandling data or causing unjust societal changes, they must intervene.

Conclusion

When deciding the future of griefbots, engineers must consider the positions of the stakeholders described above. Some users may be unable to handle griefbots, while others would view them simply as a memorial. Some people may appreciate the idea that their legacies will be immortalized as a bot, while others may liken imitative griefbots to identity theft. While griefbots can never replace deceased individuals, they can at least remind mourners of their loved ones. Society should therefore attempt to “address complex problems rather than solve artificially simple ones”⁹, but if griefbots must exist, one proposal is this: everyone must consent, while still alive, to the creation of a griefbot in their likeness. People should have the opportunity to give input on and observe their own griefbot, and if they approve, they must record that consent in a digital will. Mourners wishing to use a griefbot should first pass a mental health screening, which raises further questions, such as what such a screening should require. While griefbots cannot solve a truly complex human problem, they can keep memories alive for years beyond death.
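To make the proposal above concrete, here is a minimal, purely hypothetical sketch of how such consent and screening gates might be encoded in software. Every type, field, and function name below is invented for illustration; no real digital-will or screening system is implied.

```python
# Hypothetical sketch of the consent-and-screening gates proposed above.
# All names here are invented; no real digital-will or screening API exists.
from dataclasses import dataclass, field

@dataclass
class DigitalWill:
    person: str
    griefbot_consent: bool = False                       # explicit opt-in while alive
    approved_sources: set = field(default_factory=set)   # e.g. {"texts", "emails"}

def may_build_griefbot(will: DigitalWill, requested_sources: set) -> bool:
    """Permit creation only with consent, and only over data the person approved."""
    return will.griefbot_consent and requested_sources <= will.approved_sources

def may_use_griefbot(passed_screening: bool) -> bool:
    """Gate mourner access on the proposed mental-health screening."""
    return passed_screening

# Example: the person approved only their text messages.
will = DigitalWill("A. Person", griefbot_consent=True, approved_sources={"texts"})
print(may_build_griefbot(will, {"texts"}))            # True
print(may_build_griefbot(will, {"texts", "emails"}))  # False: emails were not approved
```

The design choice worth noting is that consent is scoped per data source rather than all-or-nothing, mirroring the argument above that people should decide which data, if any, may be used.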

[1]: Godfrey, C. (2019, April 29). The griefbot that could change how we mourn. Retrieved March 14, 2021, from https://www.thedailybeast.com/the-griefbot-that-could-change-how-we-mourn.

[2]: Newton, C. (2016, October 06). Speak, Memory — When her best friend died, she used artificial intelligence to keep talking to him. Retrieved March 15, 2021, from https://www.theverge.com/a/luka-artificial-intelligence-memorial-roman-mazurenko-bot.

[3]: Ladd, J. (1982). Collective and Individual Moral Responsibility in Engineering: Some Questions. IEEE Technology and Society Magazine, vol. 1, no. 2, pp. 3–10, June 1982, doi: 10.1109/MTAS.1982.5009685.

[4]: Herkert, J. (2004). Microethics, Macroethics, and Professional Engineering Societies. National Academy of Engineering. Emerging Technologies and Ethical Issues in Engineering: Papers from a Workshop. Washington, DC: The National Academies Press. doi: 10.17226/11083.

[5]: Brown, D. (2021, February 04). AI chat bots can bring you back from the dead, sort of. Retrieved March 15, 2021, from https://www.washingtonpost.com/technology/2021/02/04/chat-bots-reincarnation-dead/.

[6]: Murtarelli, G., Gregory, A., & Romenti, S. (2020). A conversation-based perspective for shaping ethical human–machine interactions: The particular challenge of chatbots. Journal of Business Research. Retrieved March 16, 2021, from https://www.sciencedirect.com/science/article/pii/S0148296320305944.

[7]: Renstrom, J. (2018, January 02). When you die, you’ll live on as a robot. Retrieved March 14, 2021, from https://www.thedailybeast.com/when-you-die-youll-live-on-as-a-robot.

[8]: Grandinetti, J., DeAtley, T., & Bruinsma, J. (2020). The dead speak: Big data and digitally mediated death. AoIR Selected Papers of Internet Research, 2020. Retrieved March 14, 2021, from https://doi.org/10.5210/spir.v2020i0.11122.

[9]: Green, B. (2019). The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future. The MIT Press. https://doi.org/10.7551/mitpress/11555.001.0001.

[10]: Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136. Retrieved March 14, 2021, from http://www.jstor.org/stable/20024652.
