Google Veo 3: A creative breakthrough or a crisis for journalism?

AI Video & Visuals


Released in May 2025 at Google's annual I/O developer conference, Google Veo 3 is the tech giant's direct challenge to Sora, the video generation model from Microsoft-backed OpenAI. Developed by Google DeepMind, this advanced model marks a major leap in generative AI, promising high-quality, realistic video creation from text or image prompts.

But in an age of misinformation and deepfakes, Veo 3's ability to create realistic video with synchronised audio raises looming questions for journalism. Yes, it opens up new creative possibilities, but it also brings serious challenges around reliability, misuse and editorial oversight.

What is Google Veo 3?

Veo 3 is promoted as a “state-of-the-art tool” offering “unparalleled realism, audio integration and creative control.” It comes at a steep price of $249.99 per month under the AI Ultra plan and is currently available in the US and 71 other countries, excluding India, the EU and the UK. Even as ethical concerns loom, Google positions Veo as a powerful resource for filmmakers, marketers and developers.

According to Google, Veo 3 can generate 4K videos with realistic physics, lifelike human representation and cinematic styles. Unlike many competitors, it also generates synchronised audio (dialogue, ambient noise, background music).

Also Read | Who gets arrested when AI breaks the law? The bot or its maker?

The model is designed to follow complex prompts precisely, capturing detailed scenes, moods and camera movements. Users can specify cinematic techniques such as drone shots and close-ups to control framing, transitions and object movement. A feature called “ingredients” lets users generate individual elements (such as characters and props) and combine them into a coherent scene. Veo can also extend a scene beyond its original frame, modify objects, and maintain visual consistency in shadows and spatial logic.

Google's website showcases Veo examples spanning marketing, social media and enterprise projects. Oscar-nominated film director Darren Aronofsky's studio, Primordial Soup, has used it to create short films. On social media, AI artists have released viral Veo clips, including satire featuring influencers at the end of the world.

Veo 3 is integrated into Flow, Google's AI filmmaking tool, which allows for intuitive prompting. Enterprise access is available via Vertex AI, while general users in supported countries can access it through Google's Gemini chatbot.

A dilemma for journalism

Veo's features raise alarms about potential misuse. It can enable deepfakes and false narratives, further eroding trust in online content. There are also broader concerns about the economic impact on creators, legal liability, and the need for stronger regulation.

The risk is not theoretical. A June 2025 TIME article titled “Google's Veo 3 Can Make Deepfakes of Riots, Election Fraud, and Conflict” showed the tool being used to generate realistic footage of fabricated events, such as mobs torching temples and election officials tampering with votes, paired with false captions designed to stoke anxiety. Such videos can spread quickly and have real-world consequences.

A screen grab from a video depicting election fraud, generated by TIME using Veo 3. Realistic footage of fabricated events, combined with false captions designed to stoke anxiety, can spread rapidly with real-world consequences. | Photo Credit: By Special Arrangement

Cybersecurity threats, such as spoofing executives to steal data, are plausible alongside looming copyright issues. TIME reported that Veo may have been trained on copyrighted material, potentially exposing Google to lawsuits. Meanwhile, Reddit forums cite personal harms, such as a student jailed after AI-generated images were wrongly attributed to him.

There is also a threat to livelihoods. AI-generated content is replacing human creators, especially YouTubers and freelance editors, accelerating what some call the “dead internet.”

To mitigate risk, Google says all Veo content carries an invisible SynthID watermark, with a visible watermark on most videos (though it can be cropped or altered). A SynthID detection tool is still in testing. And while harmful or misleading prompts are blocked, troubling content still slips through, highlighting the limits of guardrails.

What should the newsroom do?

Despite the risks, Veo presents engaging opportunities for journalism, particularly for visualising data, producing explanatory videos, re-enacting historical events, or illustrating undocumented narratives. It can help small newsrooms produce professional-quality videos quickly and affordably, even for breaking news.

Used responsibly, Veo can improve storytelling: for example, turning disaster eyewitness accounts into visual narratives, or converting dry data into cinematic sequences. For digital-first outlets in particular, it makes prototyping ideas more feasible before committing to full production.

However, Veo's strengths are also its dangers. The ability to create compelling footage of events that never happened can destabilise the information ecosystem. When deepfakes flood the news cycle, even genuine footage can be dismissed as unreliable. Visible watermarks are easily removed, and Google's SynthID detector remains limited in scope, leaving room for malicious actors to operate undetected.

To maintain public trust, newsrooms must clearly disclose when content is AI-generated. But the temptation to pass off fabricated visuals as real is strong, especially in competitive, high-pressure news environments. Moreover, AI output reflects its training data, allowing bias to creep in and demanding rigorous editorial scrutiny.

There is also a human cost. Veo's automation could eliminate roles for video editors, animators and field videographers, especially in resource-strapped newsrooms. Journalists may need to learn prompt engineering and AI verification just to stay afloat.

Also Read | AI is changing work, privacy and power. What comes next?

The legal landscape is also vague. Accountability is unclear when an outlet publishes an AI-generated video that causes harm. Ownership of Veo-generated content is likewise opaque, raising potential copyright disputes.

And there is the burden of verification. Fact-checkers face a flood of synthetic content, while reporters may find genuine footage treated with suspicion. As the Pew Research Center reported in 2024, three in five American adults already felt uneasy about AI in the newsroom.

A critical juncture

As Veo and similar tools become cheaper and more widely available, their impact on journalism will deepen. The challenge is not merely to resist the tide, but to adapt: ethically, strategically and urgently.

Experts say newsrooms should invest in training, transparency and detection tools to reap AI's creative rewards while protecting credibility. Innovation and trust must evolve together. If journalism is to get through this next phase of disruption, they say, it must do so with eyes wide open.

(Research by Abhinav Chakraborty)


