The good, bad and ugly of AI for news

From Mediawatch, 9:10 am today
Art Min demonstrates TrueMedia's AI-powered tool for detecting AI 'deepfakes.' Photo: RNZ Mediawatch

The New Zealand Herald copped criticism for using AI to create editorials recently, but it still wants journalists to keep using it to make their work better and more efficient. The chair of the ABC has also urged staff there to embrace AI for news. Mediawatch hears how AI has been deployed for news elsewhere in the Asia Pacific - and how AI-created fake content is threatening to undermine the news.              

In June, Dr Michelle Dickinson - aka Nanogirl - put the artificial intelligence application ChatGPT-4o to the test by asking it how many times the letter ‘r’ occurs in ‘strawberry’.

Even after the unimpressed Dr Dickinson identified all three, ChatGPT still insisted she was wrong. 

“What I love is how confident it sounds. And I'm sure that if this is a more complex problem, I might actually doubt myself,” she said. 
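
For the record, the count that stumped the chatbot is trivial for conventional code - a one-line check in Python:

```python
# The tally ChatGPT fumbled: 'strawberry' contains three 'r's.
print("strawberry".count("r"))  # prints 3
```

Large language models process text as chunks called tokens rather than as individual letters, which is one reason such simple counting tasks can trip them up.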

After the Herald’s AI-infused editorials came to light last month, its current editor told journalists they should look for new ways to use AI for greater efficiency. 

The day before, the ABC put its AI policy out in the open - including for journalism - at a live public forum called Futurecast.

Angela Stengel, ABC’s Head of Digital Content & Innovation (L) and Kim Williams, chair of the ABC speaking at Futurecast 2024. Photo: screenshot from livestream

“Bury yourself in the technology and apply the technology - and understand the way in which it can be applied,” the ABC's chair Kim Williams said loftily. 

“In a simple application in journalism . . . you can use the technology to create a masterpiece of reductive presentation of all of the essential elements in a piece of information that people want to know about, but they don't want to spend an hour reading,” he said. 

“Before the end of the current year, checking will be done by AI  . . . by a separate set of programs, of material that is actually originated by AI,” said Williams - a veteran of commercial media, entertainment and news companies - describing plans at the news agency Reuters.

“Pretty scary when you think about it,” he added as an afterthought. 

Asked if AI was “a friend or a foe” for Australia's media, the chief technology officer of News Corp, Julian Delaney, told Futurecast that disclosing the use of AI was vital for public trust.

“That, in a strange way, provides an incredible opportunity for a publisher to shine with trusted content that is of value. There might be a time where a news site is stamped: ‘No AI.’ Or their news site might say: ‘Completely done by AI.’ I don't know  . . . but I do think it's a friend,” he said.

Where is all this heading? 

Newsrooms in New Zealand are already using AI to cut stories down to size and change the grammar and the vocabulary.  

Elsewhere in the Asia Pacific, some newsrooms are already using AI in ways no one is here - yet. 

Some are also using it to fight back against the fake stuff undermining news and journalism. 

“My Hong Kong newsroom bans generative AI internally. We bar AI training bots from scraping us because I worry about hallucinations, plagiarism and its lack of attribution,” Tom Grundy from the independent online outlet Hong Kong Free Press said at The Future of Facts, a recent international media conference in the Philippines attended by several New Zealand journalists, including Mediawatch.

“There's no remuneration for gobbling up our archive, and it can't recognise bad-faith content and propaganda. Mostly, I worry about accuracy - and not the big, obvious, stupid stuff but small, nuanced errors that will get echoed through multiple generations of AI unnoticed.

“Is it not advisable for newsrooms to hold out on using AI  . . . given the risk people may mistrust AI and return to and value news that remained human-powered?” he asked.

Good question. 

But former Google News Lab boss Irene Jay Liu said it's a bit late for that, because the online tech everyone uses is AI-powered as well. 

“If you are still allowing indexing for Google . . . you are allowing Google to use your content for their generative AI.

"AI overviews - formerly known as ‘search generative experience’ - is at the top in search. You cannot block it unless you block indexing and every newsroom should know this,” she said. 

Charlie Beckett is the director of the JournalismAI project at the London School of Economics, which helps newsrooms around the world adopt AI safely. But to what end? 

“We're going through a big election year and I'm seeing some brilliant uses of generative AI to monitor what politicians are saying  . . . during that election process. I see this as a way to free up resources so that you can do more ‘human’ journalism that's going to stand out in a world where a lot of routine content will be created by AI,” he said. 

“Above all, it's (about) getting out there and reporting in the real world, because too many journalists are glued to their screens and social media, and they don't get out and witness things for themselves. And those are the things that AI can't do.” 

The Conversation is a free service founded by universities to bring the wisdom of academics to a wider audience online. 

Its head of audience insights, Khalil Cassamilly, based in Mauritius, uses AI to rejig The Conversation’s content, saying readers have found it hard to reach in the past.

“Lots of people find a lot of value in reading fairly long articles, but increasingly other people are finding value in getting the same information in different formats.” 

“That could be Smart Brevity where we give people the information very quickly with different types of video and audio. Really, the AI is just there to create that based on the journalism that we'd already produced,” he told Mediawatch.

He cited Indonesia’s election in June. 

“One of the things we kept hearing from younger audiences in Indonesia was the news coverage was not really targeted at them. (They) want more context so by us using AI to repurpose journalism via formats that would appeal to that younger audience, they actually got the information.” 

“The output from the AI is fact-checked and goes through the same editorial process. We have full control.”

Cassamilly even provocatively told the conference that news media aren't always good enough at supplying high-quality information - and that AI can amplify that.

“As an industry, we can do much, much better. I think we've done some really bad stuff, to be perfectly frank, in the way, for example, we cover elections.

"It's not a surprise when we talk to people outside of the industry, that more often than not they have a negative view of journalists. That's quite sad, but it's definitely coming from some point of truth somewhere,” he said. 

“The difference between the news organisations and every other content producer out there is the burden on us is much higher. We should do better. We should do it more.”

Rappler logo Photo: supplied

Like The Conversation, the leading online-only news outlet in the Philippines - Rappler - uses AI to reformat and summarise for younger people. 

Rappler’s reporting of the excesses of the former government of Rodrigo Duterte made it - and its now famous founder Maria Ressa - targets of sanctions, threats and harassment. 

Ressa won the Nobel Peace Prize in 2021 for standing up to this, and Rappler used technology to expose the harassment and to fight back.

AI is at the heart of its Politics Knowledge Graph, which maps connections in Philippine society and politics - including thousands of candidates in the 2022 Philippine elections.

“We used the data that we've collected over the years, and we used AI to help us generate around 50,000 profiles,” Rappler's head of data and innovation, Don Kevin Hapal, told Mediawatch.

“We didn't have the capacity to write all the profiles for all 50,000 with our manpower.”

Can he be sure there are not important errors in them?

“We didn't just use ChatGPT to generate profiles for everyone and then serve it up directly to the audience. We used its capacity to analyse large sets of data. It hallucinates when, for example, you make it follow a template and there are missing fields. That's where it makes things up.

“A significant amount of time was spent on human reviews . . . and spot checks to make sure it's following the templates that you've set up. And of course it is disclosed to the audience that we use ChatGPT and there’s a feedback mechanism so they can contact us if they do spot anything wrong,” Hapal told Mediawatch.
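
Rappler has not published its pipeline, but the guard-rails Hapal describes can be sketched in a few lines of Python - the field names and template here are hypothetical:

```python
import random

# Hypothetical sketch of the safeguards Hapal describes - not
# Rappler's actual code. The model is only prompted from complete
# records, because missing template fields are exactly where it
# "makes things up", and a random sample goes to human reviewers.

REQUIRED = ("name", "office", "region", "facts")
TEMPLATE = ("Write a short, neutral profile of {name}, a candidate "
            "for {office} in {region}, using only these facts: {facts}")

def build_prompt(record: dict) -> str | None:
    # Refuse to generate when any required field is missing or empty;
    # incomplete records are flagged for manual handling instead.
    if any(not record.get(field) for field in REQUIRED):
        return None
    return TEMPLATE.format(**record)

def spot_check(profiles: list, rate: float = 0.05) -> list:
    # Pull a random slice of generated profiles for human review.
    k = max(1, int(len(profiles) * rate))
    return random.sample(profiles, k)
```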

Irene Jay Liu, Regional Director, Asia & the Pacific at the International Fund for Public Interest Media (L) and Don Kevin Hapal, Head of Data and Innovation at Rappler. Photo: RNZ Mediawatch

“We don't use AI to replace what (journalists) are really good at doing. We're just using AI for the things that we wouldn't have done or wouldn't have been able to do - or that our people didn't want to do. That's the low-hanging fruit,” he said. 

“They like to be able to see their news within the platforms that they use - watching newscasts on TikTok, on Facebook. They have a particular preference for short-form content, but that's a very, very boring task to ask journalists to do - and we didn't want to make them feel like we're trying to turn them into influencers or content creators.” 

“There's a lot of pressure for newsrooms to use AI just for the sake of it. I don't think that's a good starting point. I think they should do their own audience research and take a look at whether or not a specific AI tool could provide a solution,” Hapal said.

The bad stuff by bad people 

Rappler has adopted AI early and heavily in the Philippines - and so have the makers of malicious and misleading stuff.

To show just how simple it is, AI expert Art Min made deepfake images of his fellow panellists at the East West Center conference within seconds.

Fake content like that looks convincing enough to be watched and shared just as quickly, he said.

Impulsive people might act on what they've seen long before any critical thinking or debunking takes place. 

Think of the angry rioters in the UK recently, shouting slogans and online hashtags based on false claims from influencers who knew the claims were fake. 

Or a fake video of Kamala Harris which Elon Musk circulated to his 200 million followers on his own platform.

Min’s Seattle-based TrueMedia.org uses the same AI technology to find and flag fake images. 

In his address to the Future of Facts conference in Manila, the secretary of foreign affairs for the Philippines, Enrique Manalo, cited “myriad attempts at misinformation and the peddling of false narratives” inflaming the Philippines’ territorial dispute with China.  

The two nations’ vessels have clashed in the West Philippine Sea - aka the South China Sea - recently, raising fears that this could boil over into armed conflict. 

That was not helped by a faked recording that went viral recently, purporting to be Philippines President Ferdinand Marcos Jr telling his military personnel to fight back.

Journalists have been deepfaked there, too.

Well-known Philippines TV news broadcaster Ruth Cabal raised the alarm earlier this year when she and her newscast were impersonated in an AI-driven scam.

“That's when you realise how serious it could be, and that you're helpless. People should trust us with information, but people who are not really tech-savvy or not informed about AI are more vulnerable,” she told Mediawatch.

Media in the lead? 

Dominic Ligot, CEO of CirroLytix and a member of the international expert panel on AI safety.

“Journalists . . . should certainly be on top of it. We should be the ones spearheading the use of it as much as we can, but cognisant of the flaws and limitations of these tools,” Dominic Ligot, CEO of Manila-based data company CirroLytix, told Mediawatch.

“They are not databases. They're only meant to write human-looking sentences and are trained on relatively outdated information. It should never really be used as a source of truth,” said Ligot, who was also on the international panel of experts formed for last year's AI Safety Summit held at Bletchley Park in the UK. 

He told the East West Center media conference far too few people were working on the safety of AI.

“Cars had no seat belts for decades, and [they] were seen as an unnecessary cost. Volvo's innovation dropped fatalities  . . . and eventually became mandatory. There could be a similar move on AI very soon. Those things need to be highlighted and journalists are the best placed to put them up. People will realise we need to find a way of imposing some seat belts on these tools.

“When you combine that with social media . . . the algorithms were designed to segment populations as a marketing tool. That's a problem social media hasn't cracked - and now you have an automated way of producing all of that information.”

“The public has been led by OpenAI and other companies to use it like Google search . . . like the ultimate information-gathering tool. We need to balance that - and journalists should report how these tools don't seem to work,” he said. 

“I'm not saying the technology won't improve. I think eventually it has to, but the way they work today just isn't what we think is possible.”

If a newsroom’s editor knows ChatGPT can't count the ‘r’s in ‘strawberry’ and refuses to use it at all, is that a mistake?

“Don't use it as a source of information or facts. But you can certainly use it to format the article, to check grammar or rephrase the article. You can ask the chatbot to check whether your article can be misunderstood. You do get interesting insights. AI is perfectly suited for that,” Ligot told Mediawatch.

“But astute politicians know how to take advantage of a kind of ‘digitally brainwashed’ culture. That's something journalists should be looking at and also hold technology companies to account.” 

“Other parties will be too busy talking about one thing or the other. Journalism is all about giving that balanced view,” he told Mediawatch.

Earlier this month Reuters reported OpenAI is working on a new approach to “advanced reasoning capabilities”. 

The code name for this previously top-secret program, according to Reuters’ source? Strawberry.

Mediawatch attended the East West Center's 2024 ‘The Future of Facts' conference in Manila with the assistance of the NZ Asia Foundation.