Tech giants TikTok and Meta have cautioned against a proposed social media ban for under-16s at a parliamentary inquiry into social media harm.
ACT MP Parmjeet Parmar formally requested an inquiry by Parliament's Education and Workforce Committee into social media harm, after ACT opposed the government's plan to ban children under 16 from social media.
National is instead submitting a member's bill to legislate a ban.
The "Inquiry into the harm young New Zealanders encounter online, and the roles that government, business, and society should play in addressing those harms" heard from submitters on Monday.
While the day-long session canvassed broader concerns about social media harm, the proposed ban was a dominant topic of discussion.
TikTok's public policy lead for Australia and New Zealand, Ella Woods-Joyce, told the committee a blunt ban would not work, and could push young people to spaces without the safety teams and tools that larger platforms have.
She said TikTok had a minimum age requirement of 13, and users under 16 also had private accounts by default, which meant their content could not appear in the general 'For You' feed.
About 280,000 New Zealand accounts suspected to belong to users under 13 had been removed.
National MP Carl Bates challenged her on that, saying he had visited primary schools in his electorate, and students told him they used TikTok.
"I just want to be very clear that that work is ongoing. We are always investing to improve that process to make sure that we keep getting better and better at doing that," she replied.
RNZ recently reported on TikTok's algorithms by setting up four teenage accounts on four phones with factory settings.
Woods-Joyce said the report showed TikTok's systems were working well.
"They looked for, but could not find, overtly harmful content on our platform. In fact, when they were searching for it, not only did they not find it, they were re-directed to services that offered support."
She acknowledged there was "always more work to be done" on safety.
Meta regional director of policy Mia Garlick told the committee that Meta wanted young people to have positive, age-appropriate, and safe experiences online.
"It's essential to our business, in many aspects, because people will only continue to use our services if they feel welcome and safe."
Meta had created teen accounts, which placed under-16s in private accounts by default and included restrictions on content and time of use.
"Rather than locking teens out of digital platforms entirely, which can push them towards less regulated spaces, there's an opportunity for New Zealand to create a more balanced, effective, and enforceable solution," Garlick said.
Some submitters questioned the effectiveness of the proposed social media ban, while others called for stronger regulation of social media platforms and advertising.
Right-leaning British think-tank the Institute of Economic Affairs said the Online Safety Act, which started taking effect in the UK in July, had led to several unintended consequences.
Public policy fellow Matthew Lesh said it had led to a 1400 percent increase in VPN usage.
"We need to be more creative than these very top-heavy solutions, when it comes to dealing with the challenges we face online. I think efforts to address online harm should focus on empowering and resourcing law enforcement to tackle criminal behaviour, particularly using existing laws, rather than creating sweeping new regulatory regimes."
Lesh said digital literacy and effective safety tools were a better approach, letting parents and children make their own decisions online.
The UK Free Speech Union, which is separate from but affiliated with the NZ Free Speech Union, said a ban would be "onerous" and "quite illiberal".
"Online safety will never be universal because the Americans are not going to go for it. So anyone with a VPN getting into America is going to be able to step around this," said its chief legal counsel Bryn Harris.
The alternative, Harris said, was to "firewall the entire country", as China does.
The annual Ipsos Education Monitor has found 72 percent of people support a social media ban for children under 14, a step short of what National is proposing.
Colm Gannon, from the International Centre for Missing and Exploited Children Australia, said age assurance was achievable, pointing to tools like RealMe, and noting that some platforms and websites overseas had started restricting content when someone tried to access it through a VPN.
But he said the internet itself needed better regulation, with MBIE, Netsafe, and the Harmful Digital Communications Act representing a "fragmented" and "disjointed" approach.
"We can regulate the internet, we can regulate the safety of children. We just need to actually bring that regulation in line with international standards.
"It's not hard, because what happens is you push the liability to the service providers. The social media service providers have to be held accountable."
Gannon said "time was ticking" for the top technology companies, and they had "failed" to protect children.
Auckland University population health professor Antonia Lyons had led research into the impact of social media on young people, which found that while young people could experience it as a positive space, they were constantly exposed to "unhealthy commodity marketing".
Thirty-five percent of survey respondents reported seeing vape marketing, with exposure reported more often by 14 to 17 year olds than by 18 to 20 year olds.
"We know that it influences behaviour. We know that it influences the age at which young people start drinking, how much they drink, when they do drink. The same with vaping. So we have that evidence. What do we do about it? And that's something that all governments are around the world are grappling with," Lyons said.
"These platforms should be much more transparent. We should be able to know what marketing is happening and how the algorithms are working to go into the feeds of young people, who are most vulnerable."
In his submission, Privacy Commissioner Michael Webster said requiring users to take steps to prove their age, and the collection and use of that personal information, raised privacy concerns.
Webster said age assurance could be approached in different ways, with different technologies.
Age verification relied on a trusted document like a passport; age estimation relied on technologies that estimated a user's age but had high error rates; and age inference relied on data mining.
"Overall, my view is that those privacy issues need to be fully understood and worked through in a transparent and consultative way before any such proposal is advanced further," he said.
"No one wants social media to become a place dominated by violence, by hate, by despair, but we cannot rely on hoping that that won't happen. Hope is simply not a viable option."
Webster called for appropriately targeted regulation, in conjunction with education, and said it may pay to wait and see what works overseas first.
Australia's ban on social media for under-16s is due to take effect in December.