A breakthrough AI tool that can make fake videos has experts worried about a new generation of misinformation.
San Francisco company Luma has released its "Dream Machine" that lets anyone generate realistic video clips from a short prompt or image.
Central Auckland resident James Leech thought the results were surprisingly convincing.
"If you hadn't told me it was fake, I wouldn't have thought so," he said.
"I do worry about what people will fall for on social media. If you look at things that have happened like Trump and Brexit, the amount of people who were influenced by what they believed to be the truth on social media... if things like [Dream Machine] are out there it's just going to make it worse."
Another resident, Poppy Jones, said it was easy to be fooled if you didn't know what to look for.
"There's very clear indications [that it's AI] but I feel like a lot of people would easily fall for them, especially if they're not tuned into how to identify AI videos," she said.
"My mother's been like 'wow look at this video of this cool thing' and I'm like 'that's so clearly AI if you look at it [closer]'."
But she said the technology was improving rapidly and artificial videos were becoming harder to identify.
"The issue is [the AI] keeps getting better, politically you could do anything... Chris Luxon's face is incredibly out there in the public, there's videos of him speaking and doing things," she said.
"You could make him do anything with AI, and that's pretty dangerous for a political figure."
Although OpenAI announced a similar generative model last year, Luma's Dream Machine is the first to be released publicly.
Victoria University computer science lecturer Andrew Lensen said it was a massive step.
"It's really impressive technology, and it's quite notable because it's the first one to be made freely available," he said.
"We saw OpenAI's latest Sora model, they held it back from public access because they were concerned about it. So seeing this one from Luma is quite interesting and raises some challenges."
He said the technology had come a long way in recent years.
"It wasn't so long ago we were looking at really weird-looking videos, really unrealistic scenes, whereas now it's increasingly challenging to spot something that's AI generated," Dr Lensen said.
But he feared it could be dangerous.
"Disinformation or misinformation... especially on social media where we already see echo chambers and a lot of fake news, that sort of attack on information [brings] massive challenges in what we trust as a society," he said.
"It brings a lot of things into question."
Author and disinformation researcher Byron Clark said bad actors could use Luma's AI in all sorts of ways.
"People can generate videos of politicians and say 'this is a video just discovered from years ago' and it could be completely false," he said.
"They could generate videos from warzones, images of natural disasters... all sorts of things could happen."
As misinformation continued to dominate social media, Clark said it had become more difficult to know what to trust.
"The more of these AI-generated videos we have out there the harder it's going to be to tell what is real and what is false," he said.
"So I think another risk is we're going to have real photos and videos being accused of being AI-generated, which is another form of disinformation."
Dr Lensen said there were some benefits to the groundbreaking technology, but he thought the cons outweighed the pros.
"Some people talk about things like historical reanimations... potential medical applications around visualising cancers and so on," he explained.
"But a lot of these positive applications, I think, are a bit further off whereas the negative consequences are a lot more obvious and immediate."
He said it was vital for New Zealanders to have a healthy amount of scepticism online.
"Being sceptical of everything you read and everything you see is an increasingly important skill, and that's going to become even more important."