Illustration of human brain and binary code

Artificial intelligence is bringing the dead back to ‘life’ — but should it?

UCR experts discuss the technological and ethical implications of using AI to deepfake the deceased.

Author: UCR News
August 4, 2021

What if you could talk to a digital facsimile of a deceased loved one? Would you really be talking to them? Would you even want to?

In recent years, technology has been employed to resurrect the dead, mostly in the form of departed celebrities. Carrie Fisher was digitally rendered to reprise her role as Princess Leia in the latest “Star Wars” film. Kanye West famously gifted Kim Kardashian a hologram of her late father for her birthday last year. Most recently — and controversially — artificial intelligence was used to deepfake chef Anthony Bourdain’s voice to provide narration in the documentary film “Roadrunner.”

In what seems eerily like a “Black Mirror” episode, Microsoft announced earlier this year that it had secured a patent for software that could reincarnate people as chatbots, opening the door to even wider use of AI to bring the dead back to life.

We asked our faculty experts about AI technology in use today, the future of digital reincarnation, and the ethical implications of artificial immortality.

Amit Roy-Chowdhury

Professor of electrical and computer engineering and chair of robotics


“When we learn about some very sophisticated use of AI . . . we tend to extrapolate from that situation that AI is much better than it really is.”

— Roy-Chowdhury 


Q: Do you think it will be possible to eventually create videos and chatbots that are impossible to distinguish from real people? What are the limitations of this technology, and how long do you think it will take for AI technology to become that advanced?

A: All artificial intelligence uses algorithms that need to be trained on large datasets. If you have lots of text or voice recordings from a person to train the algorithms, it’s very doable to create a chatbot that responds similarly to the real person. The challenges arise in unstructured environments, where the program has to respond to situations it hasn’t encountered before. 

For example, we’ve probably all had interactions with a customer service chatbot that didn’t go as planned. Asking a chatbot to help you change an airline ticket requires the AI to make decisions around several unique conditions. That is usually easy for a person, but a computer may find it difficult. Many of these AI systems are essentially just memorizing routines. They are not getting a semantic understanding that would allow them to generate entirely novel, yet reasonable, responses.
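As a toy illustration of that “memorizing routines” point, the sketch below (hypothetical data, in Python; not how any deployed system actually works) shows a purely retrieval-based bot: it can only return replies it has already seen, and anything off-script drops to a canned fallback.

```python
# Toy retrieval chatbot: it "memorizes" scripted exchanges and has no
# semantic understanding. All prompts and replies here are hypothetical.

def tokenize(text: str) -> set:
    """Lowercase bag of words; real systems use far richer features."""
    return set(text.lower().split())

# "Training data": the only exchanges the bot knows.
MEMORIZED = {
    "i want to change my flight": "Sure, what is your booking number?",
    "what is my baggage allowance": "Economy tickets include one checked bag.",
}

def respond(query: str) -> str:
    # Pick the memorized prompt with the largest word overlap.
    best_reply, best_score = None, 0.0
    query_words = tokenize(query)
    for prompt, reply in MEMORIZED.items():
        overlap = len(query_words & tokenize(prompt)) / max(len(query_words), 1)
        if overlap > best_score:
            best_reply, best_score = reply, overlap
    # Anything too far from the script gets a canned fallback, because
    # there is no semantic model to generate a novel answer.
    return best_reply if best_score > 0.5 else "Sorry, I didn't understand that."

print(respond("I want to change my flight"))        # scripted reply
print(respond("My connection was cancelled and I "
              "need to rebook via Denver tonight")) # canned fallback
```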

When we learn about some very sophisticated use of AI to copy a real person, such as in the documentary about Anthony Bourdain, we tend to extrapolate from that situation that AI is much better than it really is. They were only able to do that with Bourdain because there are so many recordings of him in a variety of situations. If you can record data, you can use it to train an AI, and it will behave along the parameters it has learned. But it can’t respond well to rare or entirely new occurrences. Humans understand the broader semantics and are able to produce entirely new responses and reactions, and that semantic machinery is messy and hard to replicate.

In the future, we will probably be able to design AI that responds in a human-like way to new situations, but we don’t know how long this will take. These debates are happening now in the AI community. There are some who think it will take 50-plus years, and others think we are closer. 

Vagelis Hristidis

Professor of computer science and engineering and founder of SmartBot360


“AI is still ineffective at building chatbots that can respond in a meaningful way to open-domain conversations.”

— Hristidis 


Q: Media outlets have reported on a man who recently uploaded text messages from his dead girlfriend to a chatbot program, creating naturalistic text-based conversations that let him “talk” with her again. Is this sort of thing common or easy for people to do?

A: AI has been very successful in the last few years in problems relating to changing the tone or style of images, videos, or text. For example, we are able to replace the face of a person in a picture or a video, change the words a person speaks in a video, or alter the voice in an audio recording.

AI has also been somewhat successful in modifying the words in a sentence to change its tone or style, for example, to make it more serious or funnier, or to use the vocabulary of a specific person, alive or dead.
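For a rough sense of how a model can be nudged toward “the vocabulary of a specific person,” here is a minimal sketch using the Hugging Face transformers library. The persona excerpts are invented, the base model choice (GPT-2) is an assumption made only for illustration, and real systems fine-tune on far more data rather than relying on a short prompt.

```python
# Minimal sketch: prompt an off-the-shelf generative model with a few
# example sentences so its continuation loosely imitates their tone.
# The persona excerpts below are hypothetical.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Invented excerpts standing in for a person's past messages.
persona_examples = (
    "Oh man, that little diner was an absolute revelation.\n"
    "Honestly? Best bowl of noodles of my entire life.\n"
)

# The model continues the prompt, loosely echoing the examples' style.
prompt = persona_examples + "The new taco place downtown was"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```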

Q: What are the capabilities and limitations of chatbots, and where do you see them headed in the future?

A: AI is still ineffective at building chatbots that can respond in a meaningful way to open-domain conversations. For example, voice assistants like Alexa or Siri can only help with a very limited set of tasks, such as playing music or navigating, but fail at unexpected requests such as “find a free two-hour slot before sunset this weekend in my calendar.”

A key challenge is that language is very complex, as there are countless ways to express the same meaning. Further, when a chatbot generates a response, even a single inappropriate word can completely mess up the meaning of a sentence, which is not the case with, say, images, where changing the color of a few pixels may go unnoticed by viewers.
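That asymmetry is easy to see in a toy comparison (hypothetical sentence and image, sketched in Python): deleting one word inverts what a sentence says, while changing one pixel leaves an image visually unchanged.

```python
# Toy illustration: one word can flip a sentence's meaning, while a
# one-pixel change in an image is imperceptible.
import numpy as np

sentence = ["the", "flight", "is", "not", "delayed"]
garbled = [w for w in sentence if w != "not"]  # drop a single word
print(" ".join(sentence))  # "the flight is not delayed"
print(" ".join(garbled))   # "the flight is delayed" -- meaning inverted

image = np.zeros((256, 256, 3), dtype=np.uint8)
tweaked = image.copy()
tweaked[0, 0] = 1  # nudge one pixel by one intensity level
# Fraction of values changed: about 0.0015% -- invisible to a viewer.
print(float((tweaked != image).mean()))
```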

In the future, we will see more progress in modeling the available knowledge and also in language understanding and generation. This will be facilitated by the huge amounts of training data generated by voice assistants, which give large tech companies a great research advantage over academic institutions.

Eric Schwitzgebel

Professor of philosophy


“We might come to not care very much whether grandma is human or deepfake.”

— Schwitzgebel 


Q: What sort of implications does this technology have from your point of view?

A: I am struck by the possibility of a future in which we might be able to feel more and more like our departed loved ones are “really still here” through voice and video generated to sound and look like them. Programs might be designed so that these artificial reconstructions even say the kinds of things that the person — based on past records — would have tended to say. If an artificial intelligence program gains access to large amounts of text and voice and video of the deceased, we might even be able to have “conversations” with them in which they feel almost like our familiar old friends, with the same quirks and inflections and favorite phrases.

At the same time, the pandemic has launched us into a world in which, more and more, we interact with people by remote video — or at least this is true for white-collar workers. Thus, the gap between real remote-video interactions with living people and interactions with reconstructed versions of the deceased could narrow until the difference is subtle.

Schwitzgebel expanded on this topic further in a recent post on his blog “The Splintered Mind”:

If we want, we can draw on text and image and video databases to create simulacra of the deceased — simulacra that speak similarly to how they actually spoke, employing characteristic ideas and turns of phrase, with voice and video to match. With sufficient technological advances, it might become challenging to reliably distinguish simulacra from the originals, based on text, audio, and video alone.

Now combine this thought with the first development, a future in which we mostly interact by remote video. Grandma lives in Seattle. You live in Dallas. If she were surreptitiously replaced by Deepfake Grandma, you might hardly know, especially if your interactions are short and any slips can be attributed to the confusions of age.

This is spooky enough, but I want to consider a more radical possibility — the possibility that we might come to not care very much whether grandma is human or deepfake.

Dr. Gerald Maguire

Professor of psychiatry and neuroscience


“I firmly believe in the empowerment of individual choice.”

— Maguire  


Q: Do you think artificial intelligence could be a useful tool in helping people process grief?

A: As academics, we can only speculate about the potential risks and benefits, since no one has direct clinical experience with such AI tools and we lack any empirical evidence. I firmly believe in the empowerment of individual choice. If a patient of mine were ever to ask my guidance on such a tool, I would outline these considerations, cite the hypothesized possibilities, and allow my patient to make an informed decision.
