On June 22, Reddit co-founder Alexis Ohanian posted a childhood photo of himself with his mother. In the photo, both wear red sweaters, embracing against a mountain backdrop.
Alongside the image, Ohanian posted an AI-generated video that brought the photo to life: mother and child embrace each other as the wind ruffles their hair.
“Damn, I wasn't ready for how this would feel. We didn't have a video camera, so my mother and I don't have a video together,” Ohanian posted on X (formerly Twitter). “I dropped one of our favorite photos into Midjourney as the 'starting frame for an AI video'. This is how she held me. I've rewatched it 50 times.”

The post quickly went viral, earning well over 20 million views. Many people sympathized with Ohanian's decision to turn a precious family photo into a video, but he was also heavily criticized. Several X users accused him of creating “false” memories, arguing that seeking comfort in an AI-generated interaction could erode his ability to grieve his mother in a healthy way.
The ability to turn images into videos is not limited to tools like Midjourney. Over the past few weeks, Elon Musk's xAI has announced “Grok Imagine”, which lets users generate short videos from text or image prompts. In July, Google rolled out a “Create” mode in the Photos app that converts photos into short videos for US-based users. Several smaller platforms also offer to turn users' photos into AI videos.
AI tools have been used for many years to enhance old media through a process called AI upscaling, which sharpens blurred areas, reduces pixelation, and removes grain. Generative AI has made this process faster and easier, and advanced tools now also let users morph and manipulate images, removing objects and filling in missing spaces.
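For readers curious what this kind of upscaling looks like in practice, below is a minimal sketch using OpenCV's dnn_superres module, which enlarges a photo with a pretrained super-resolution network. This is purely illustrative and not the tooling used by any platform mentioned here; it assumes the opencv-contrib-python package is installed and that a pretrained EDSR model file has been downloaded separately, and the file names are placeholders.

```python
# Minimal AI-upscaling sketch (illustrative only).
# Assumes: pip install opencv-contrib-python, plus a pretrained
# EDSR_x4.pb model file downloaded separately (paths are placeholders).
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")   # load pretrained super-resolution weights
sr.setModel("edsr", 4)       # select the EDSR algorithm, 4x scale

image = cv2.imread("old_photo.jpg")        # a low-resolution scan
upscaled = sr.upsample(image)              # learned 4x enlargement
cv2.imwrite("old_photo_4x.jpg", upscaled)  # save the enhanced copy
```

Unlike simple resizing, a network like this hallucinates plausible detail from its training data, which is what makes the results convincing and also why the same underlying capability raises the manipulation concerns discussed below.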
These technological leaps raise legal questions, since permission is usually required before significantly editing copyrighted works. Manipulating photographs of people who are no longer alive presents further ethical challenges. Most importantly, users need to consider the impact on photography's most vulnerable subjects: children.
Rights and Safety of Children at Risk
Cybercriminals, for example, can now quickly create realistic AI videos of minors using nothing more than publicly available photos. In the past, criminals have targeted minors by generating synthetic nude images of them to extort money. One such case in the United States ended with a teenager dying by suicide; his family did not know the child was being harassed.
Kleanthi Sardeli, a data protection lawyer and AI expert working with the Vienna-based digital rights NGO Noyb, said that turning still images into video clips can be done for innocent reasons, but that the “serious implications” should also be considered.
“The lower the barrier to creating realistic content, the more you need to think about ethics, consent and context. Photos can be transformed into persuasive videos without the knowledge or consent of the person depicted, increasing the risk of deepfakes, defamation and abuse.”
She explained that under the EU's GDPR, children below the age of 16 cannot legally consent to such uses of their personal data, including images.
Experts and lawmakers are calling on AI companies to implement strong guardrails to prevent AI chatbots from generating highly pornographic media, but many chatbots readily produce sexual content. Moreover, AI companies and their executives actively promote such services: one video shared by Musk to showcase Grok Imagine's abilities depicted a fantasy-style clip of a winged woman wearing very little clothing.
Meanwhile, across the Internet, websites attract users with morphed pornographic videos featuring celebrities and invite them to digitally undress victims of their choice.
“Beyond obvious dangers such as CSAM (child sexual abuse material) and other malicious uses, even animating child photos for advertising and entertainment purposes can put a child's privacy, dignity and autonomy at risk,” Sardeli said.

Gatekeepers and guardrails
The Hindu contacted both Google and xAI about the safeguards these platforms have in place to restrict users from turning children's photos into videos, and whether content filters exist to stop photos from being turned into pornographic content or child abuse material.
A Google spokesperson said the company takes children's safety online seriously and that its photo-to-video capability can only be used with two preset prompts: “subtle movements” and “I'm feeling lucky”.
Additionally, the videos carry both invisible SynthID digital watermarks and visible watermarks, the company said.
“Our safety measures include extensive ‘red teaming’ to proactively identify and address potential issues, as well as thorough evaluations to understand how the feature might be used and to prevent misuse. We also welcome user feedback on these issues, which is used to continuously improve our safety measures and the overall experience,” the spokesperson said.

“Google Photos is a place to store your memories, and we hope users prioritize safety while using fun creative tools on photos of friends and family, including children,” the company said.
xAI did not respond to a request for comment.
In the United States, the National Center for Missing & Exploited Children (NCMEC) has emphasized that it is “deeply concerned” about how AI is being used to sexually exploit children.
“In the past two years, NCMEC's CyberTipline has received over 7,000 reports of child sexual exploitation involving GAI [generative AI], and we expect the numbers to increase as we continue to track these trends,” the organization says on its website.
Meanwhile, Sardeli pointed out that while existing EU laws provide some protections, they were not specifically designed with AI-generated content in mind. The EU's child protection rules prohibit sexually explicit material involving children, she said, but are less clear about synthetic media.
In India, the Ministry of Electronics and Information Technology (MeitY) has issued advisories requiring platforms to remove morphed content, including AI deepfakes. Additionally, platforms such as Meta, Google and X have appointed grievance officers in India to handle complaints filed by affected users.
“AI providers are beginning to build safeguards such as detection systems and content filters, but these are uneven across platforms and not always effective. The law is lagging behind the technology; in particular, there is no comprehensive global framework to address the misuse of children's likenesses with generative AI,” Sardeli said.
“We need stronger rules on consent, transparency and accountability, along with technical standards that make it harder to misuse children's photos.”
Safety Tips for Turning Photos into AI Videos
Don't share intimate or sensitive photos with generative AI platforms to turn them into videos.
As a best practice, convert photos into AI videos only if you have the informed consent of everyone in the photo, as well as of the photographer.
Do not turn other people's copyrighted or personal images into AI videos.
If a photo of you or someone you know is turned into an AI video without consent, report the content to the tech platform's grievance officer for India, whose job it is to verify that the platform complies with Indian law.
If you are a parent or caretaker, do not share photos of minors and other vulnerable people on public platforms, as these images can easily be stolen or misused by cybercriminals using AI tools.
Avoid using third-party platforms to turn your personal photos into AI videos, especially if the image features children or other vulnerable people. Instead, choose a private or on-device photo-to-video AI tool for security reasons.
Openly discuss the growing risk of AI deepfakes with children and adolescents, encourage them to confide in trustworthy adults, and to approach the police if manipulated images of themselves are used to target them.
(Those in distress or having suicidal thoughts are encouraged to seek help and counseling by calling the helpline number here.)
