On closer examination, she realized it was an AI-generated image used to illustrate sentimental posts. It was the second time she had nearly been fooled. Previously, she had mistaken a video titled "Meet a Retirement" for actual footage.
Despite working in the media and frequently encountering AI-generated content, Linh acknowledged that it is difficult to tell what is real and what is not, because the technology has advanced so quickly and become so realistic.

Experts agree. Tools like Google Veo 3, Kling AI, DALL-E 3, and Midjourney can create photos and videos with near-perfect realism.
Do Nhu Lam, director of training at the Institute of Blockchain and Artificial Intelligence (ABAII), explained that with advanced multimodal technology and sophisticated language models, these tools can synchronize visuals, audio, facial expressions, and natural movements to generate highly convincing content.

Lam acknowledged AI's potential in content creation, advertising, entertainment, and education. However, he noted that this remarkable ability to replicate reality blurs the line between the real and the fake, posing serious ethical, security, and information-governance challenges.
The posts Linh encountered had drawn nearly 300,000 interactions and over 16,000 comments, with users enthusiastically congratulating the "parents" without realizing the images were fake. A few wary users criticized others for being fooled by AI.
AI-generated videos are becoming increasingly common across Facebook groups. With the launch of Google Veo 3, video quality has improved markedly, especially in synchronizing lip movements with the voice.
Staying vigilant in the AI era

AI-generated media poses serious risks, especially for vulnerable or non-technical users. Vu Thanh Thang, chief AI officer at SCS Cybersecurity Corporation, warned that criminals are exploiting AI for fraud, biometric spoofing, and impersonation.
Thang added that companies are also targeted: AI deepfakes impersonate staff to bypass security checks, defeat facial recognition, and mimic executives to damage reputations or run scams.
Nhu Lam outlined three major risks AI poses to individuals: financial fraud, reputational damage, and misuse of personal information. For companies, he cited an incident involving Arup, in which employees at its Hong Kong branch were deceived into transferring US$25 million during a deepfake video conference.
Another serious consequence is the erosion of public trust. If people cannot distinguish the real from the fake, trust in the media and in credible sources deteriorates. Lam referenced a 2024 Reuters Institute report showing that global trust in news on digital platforms has fallen to its lowest point in a decade, driven largely by deepfakes.
"We're no longer talking about the risk of fake content. It is already a reality," Thang said. He urged the public to raise their awareness and adopt protective measures, including understanding how AI works and how to coexist with it safely.
Both experts advise users to verify content before acting on it, learn to spot manufactured media, limit the personal information they share online, and report fake or harmful content. "Knowledge and vigilance allow individuals to protect themselves and contribute to a safer digital space in the age of AI," Lam said.