99 percent of terrorist material removed automatically, Meta says

9:15 am on 25 June 2024
France, Paris, 9 November 2022: Facebook's parent company Meta. Founder Mark Zuckerberg announced on Wednesday 9 November that he was laying off 11,000 employees.

Photo: Serge Tenani / Hans Lucas via AFP

Social media giant Meta says it now picks up more than 99 percent of terrorist material posted on its platforms.

The importance of limiting the spread of harmful online content was highlighted in the aftermath of the Christchurch mosque shootings in 2019, which were live-streamed.

Meta owns Facebook, Instagram, WhatsApp and Messenger.

Its public policy manager of content, Manu Gummi, told the East-West Center international media conference in Manila that automated systems and human moderators can quickly flag content that may be terrorist-related.

"We remove more than 99 percent of this via automation, because our systems over a period of many years have gotten so good and accurate at removing that content."

Meta aimed to reduce, but not always remove, other harmful content such as misinformation.

"We'd lower their distribution to make sure that fewer people are seeing it and it's not widely being shared or going viral. And we inform - we give people context so that they can decide what they can read and trust to share.

"Without just taking the content out, we actually tell them that this is false so that they can identify that information, they can inform themselves, and they can stay educated about what kind of misinformation you're seeing out there."

The conference, The Future of Facts, is looking at the threat of artificial intelligence, disinformation and falling levels of trust in media.

Meta was working out how to alert users that content was produced by AI, Gummi said.

"We rely on people self-disclosing and making sure that they tell us that this is something that's generated with AI, or we are also working on developing industry-wide standards to identify AI-generated content and making sure people disclose that themselves."

Meta is shutting down its CrowdTangle analytics tool in August, but has faced criticism for the decision because of fears it will reduce transparency and the ability to track viral content, misinformation and disinformation.

"Deprecation of Crowd Tangle is not necessarily meaning that we are not going to have tools that will disrupt manipulating networks and and so on," Gummi said. "It's just that it's been a business decision about where Meta invests its resources.

"And certain resources where we've not been able to reap the benefits, or where the users have not shown interest in, we have had to make that business decision to deprecate those.

"But that does not mean that we don't have other resources. We do have other tools to disrupt this kind of behaviour that we are developing, or are also already in progress."

Gill Bonnett travelled to the Philippines with assistance from the Asia New Zealand Foundation
