A screenshot of an AI video created through Luma.
Photograph: Luma
A groundbreaking AI tool that can create fake videos has experts worried about a new generation of misinformation.
San Francisco company Luma has released the “Dream Machine,” which allows anyone to generate realistic video clips from short prompts and images.
James Leach, who lives in central Auckland, found the results surprisingly convincing.
“If they hadn't told me it was fake, I wouldn't have thought so,” he said.
“I worry about what people will be fooled by on social media. You only need to look at events like Trump and Brexit to see how many people are influenced by what they believe to be true on social media. The existence of [Dream Machine] will only make things worse.”
Another resident, Poppy Jones, said it was easy to be fooled if you didn't know what to look out for.
“There are very clear indications [that it’s AI], but I feel like a lot of people are easily fooled, especially if they don't know how to spot an AI video,” she said.
“My mom was like, 'Wow, check out this video of this cool thing,' and I was like, 'Look [closer], it's clearly AI.'”
But technology is advancing rapidly, making it harder to spot artificial videos, she said.
“The problem is [the AI] is getting better and better, and politically anything is possible. Christopher Luxon's face is very much out there, and there are videos of him speaking and acting,” she said.
“With AI you can make him do anything. That's pretty dangerous for a politician.”
OpenAI unveiled a similar generative model last year, but Luma's Dream Machine is the first of its kind to be made publicly available.
Victoria University computer science lecturer Andrew Lensen said it was a big step.
“This is a really cool technology and it's quite remarkable because it's the first time that it's been made available for free,” he said.
“We looked at OpenAI's latest Sora model, but they held back from releasing it to the public due to concerns, so it's really interesting to see this model from Luma, and it raises some challenges.”
He said the technology has come a long way in recent years.
“Not so long ago we were seeing really strange-looking videos, really unrealistic scenes, and now it's becoming harder and harder to spot AI-generated stuff,” Dr Lensen said.
But he feared it might be dangerous.
“Disinformation and misinformation… We're already seeing echo chambers and fake news, especially on social media, and a kind of attack on information. [It brings] a big challenge in terms of what we trust as a society,” he said.
“A lot of things are in question.”
Byron Clark, an author and disinformation researcher, said bad actors could use Luma's AI in a variety of ways.
“People can generate videos of politicians and say, 'Here's a video that was discovered years ago,' but it could be completely false,” he said.
“We could potentially generate video from a battlefield, images from a natural disaster… anything can happen.”
As misinformation continues to spread on social media, Clark said it is becoming harder to know what to believe.
“The more AI-generated videos there are, the harder it will be to distinguish what is true and what is false,” he said.
“So I think another risk is that real photos and videos will be accused of being generated by AI, which is another form of disinformation.”
Dr Lensen said that while there are some advantages to this groundbreaking technology, he believes the disadvantages outweigh the benefits.
“Some people are talking about bringing historical figures back to life; potential medical applications for things like visualizing cancer,” he explained.
“But I think a lot of these positive applications are a little further down the line, whereas the negative consequences are much more obvious and immediate.”
He said it was vital New Zealanders maintained a healthy degree of scepticism online.
“Being skeptical of everything you read and everything you watch is an increasingly important skill and will become even more important in the future.”
An example of a short AI video created through Luma: