Last week, two Buddhist groups issued statements warning followers to beware of illegal “deepfake” videos of their religious leaders – videos that use AI to make it appear as if they said or did things that they did not.
Perhaps you have seen this kind of video before: a familiar person appears on screen and, at first glance, seems real. Look closely, though, and something is off; the words do not quite match the speaker's mouth, for example. And, of course, the person is enthusiastically recommending a product, even though you know they would never do such a thing. Or maybe you have seen one of these videos and been fooled. AI is constantly learning and improving, so sooner or later, most of us will be.
Just two years after ChatGPT launched, artificial intelligence is in the hands of ordinary people, and its use has skyrocketed across personal, organizational, and corporate life. The Buddhist community is already seeing plenty of discussion and use of AI, from non-human monks created to spread the Dharma and teach mindfulness, to wise voices in the community taking time to address AI's potential dangers.
Among these dangers is that AI’s increasing ability to learn from and imitate real people means their likenesses can be used in unauthorized and unhelpful ways. Indeed, two communities, the Tergar Asia Foundation, part of Mingyur Rinpoche’s community, and Dongyu Gatsal Ling Nunnery in India, directed by Jetsunma Tenzin Palmo, say that this is exactly what has happened to their leaders.
“We are aware that many of the AI-generated videos currently being shared online appear to feature Mingyur Rinpoche speaking on topics such as life and relationships in a manner unrelated to his teachings on awareness, compassion, and wisdom,” Tergar Asia's statement read. Some of the content, the statement added, may even be contrary to Buddhist teachings and may mislead or confuse practitioners.
Dongyu Gatsal Ling Nunnery is a Himalayan nunnery for women belonging to the Drukpa Kagyu lineage of Tibetan Buddhism. In a statement titled “Regarding AI-generated/Deepfake Videos,” DGLN likewise reported that “some malicious sources have been using Jetsunma Tenzin Palmo's likeness for publicity and self-promotion.”
Both communities offer some common-sense advice for dealing with AI and deepfakes, which boils down to this:
- Check the sources you are referencing.
- If you come across misinformation, do your best to report it and encourage others to ignore it. DGLN offers examples of what to watch for: if videos show Jetsunma Tenzin Palmo speaking in a language other than English, saying things that go against Buddhist values, blatantly asking for money, or endorsing products or brands that have nothing to do with Jetsunma's own activities, these are obvious red flags.
- If you see suspected deepfake activity, please report it to the community of the teacher or person whose image is being misused.
For more about the potential promises and pitfalls of AI in Buddhist circles, read “What AI Means for Buddhism” in Lion's Roar magazine.