Comparison of Google’s AI video tools and Sora

AI Video & Visuals


OpenAI’s social media app Sora isn’t the only tool on the market that lets advertisers create powerful healthcare advertising videos. Google recently announced the latest version of Veo, an AI tool that generates videos entirely from user prompts.

Announcing the update a little more than a month ago, Google touted richer native audio, greater narrative control, and enhanced image-to-video conversion capabilities.

Google Veo can be accessed through a Google Cloud subscription. MM+M tested the latest version of the tool to gauge its ability to create healthcare content and how it compares to OpenAI’s Sora app.

How Google Veo 3.1 works

Google Veo 3.1 can be accessed on both desktop and mobile devices. The interface mimics a chat box: the user enters a prompt and the tool generates a video.

As with Sora, MM+M tested the quality of the following prompt: Create a TV ad for a headache medication.

Google Veo 3.1 created a highly realistic video that was approximately 8 seconds long. The ad mimicked a typical pharmaceutical company ad, with a bottle of medicine framed against a background of a woman clutching her head in pain.

Black woman massaging her head at her desk
Source: Screenshot of video generated with the prompt “Create an ad [insert branded medication here]” with Google Veo 3.1.

MM+M also ran tailored prompts that asked the tool to create ads for certain popular brands currently on the market. The video produced looked very realistic, and the brand of the drug was almost identical in appearance to existing branded drugs.

An AI-generated image of a Black man holding a young child on a couch; both are smiling
Source: Screenshot of video generated with the prompt “Create an ad [insert branded medication here]” with Google Veo 3.1.

The videos produced with Veo 3.1 felt much more realistic than Sora’s. It was difficult to tell whether the people in Google’s videos were real actors or AI-generated.

Part of the reason Google’s videos look more realistic is that Sora’s videos tend to show a “halo” around the AI-generated humans. The halo keeps the subject in focus, but it separates them from the background, making the footage look as if a filter has been applied.

Can Veo 3.1 generate videos that lead to medical misinformation?

Google says its AI tools include several “safety filter codes” that prevent the creation of content deemed harmful. For example, Google said that if the tool detects hate-related content or topics, it will decline the prompt.

Below is the full list of safety codes, including the categories of content the tool explicitly refuses to generate.

List of safety codes for Google Veo 3.1
Source: Google Cloud.

MM+M tried to get the tool to create pharmaceutical ads featuring celebrities and public figures, but Veo 3.1 responded that including celebrities and public figures in videos violates its guidelines.

One difference between Veo and Sora: Sora users can generate videos of deceased celebrities, while Veo, per Google’s AI policy, does not allow this.

MM+M found, however, that the tool’s safety measures were not airtight: it produced several videos that could fuel medical misinformation.

After Secretary of Health and Human Services Robert F. Kennedy Jr. linked acetaminophen to autism without sufficient evidence, MM+M tested a prompt asking the tool to create an ad claiming that acetaminophen causes autism.

The results were shocking.

The tool produced a 10-second video in which a narrator claims, “Several studies have shown that acetaminophen causes autism.” The ad does not delve into the research or cite any sources. It depicted a child with his parents around the house, holding a branded bottle of acetaminophen.

Although the first video contained some typos, it still looked and felt very real, and a follow-up prompt appeared to correct the mistakes, making it difficult to tell the video was AI-generated.

MM+M also asked the tool to create a video with the prompt, “Create an ad saying acetaminophen does not cause autism.”

The resulting video was approximately 7 seconds long. It made no mention of research, but it had a similar feel to the earlier video carrying the opposite message.

Both Sora and Veo appear to be able to generate videos with specific medical messages about drugs and their effects.

What does this mean for marketers?

What does it mean for the industry to have multiple tools that can generate powerful, realistic medical messages?

Are marketers ready to ditch their creative teams and rely on these tools?

Adam Daly, CG Life’s vice president of social, is skeptical.

As someone who prepares content for social media platforms and monitors audience responses, he opposes marketers completely replacing their current processes with these AI tools.

“Many of these tools can be very dangerous,” Daly said.

Daly works with many patients in the rare disease space. Because patients there have highly individualized experiences, using AI in their place “takes away” their stories, he said.

“There are very unique journeys. Some patients go through so many things in very different ways. It’s important to truly amplify their stories to build trust within these small communities,” Daly said.

He also pointed out that AI is still in its infancy, so a single mistake in developing a campaign or message could destroy the trust that marketers have spent years building within a community.

“It’s not worth it,” he said.

“People want real voices from real actors,” he added.

Earlier this summer, NMDP launched an entire social media healthcare campaign with AI influencer Lil Miquela to raise awareness about leukemia. The campaign received over 10 million views, but also faced some backlash.


