Former Facebook Australia and New Zealand CEO Stephen Scheeler on why the site should be banned for children

12:13 pm on 16 April 2025

A former Facebook executive says social media should be banned for kids - because their brains aren't developed enough to deal with the site's "frightening" algorithms.

Stephen Scheeler, once the figurehead of Facebook in Australia and New Zealand, said on this week's episode of 30 with Guyon Espiner that he thought anyone under 16 was too young for social media.

"This is mentally challenging. And it's frightening. A human brain doesn't evolve enough and develop enough up until at least that age, that it can handle some of the challenges that the algorithms present," Scheeler said.

"As a parent myself, if you're a parent you'll know the challenges of social media with your kids. And leaving it to parents alone to figure it out, I just don't think it's fair."

Australia recently moved to ban children under 16 from using social media, after its parliament approved the world's strictest laws. The ban is yet to take effect, and how it will work is still unclear.

'Facebook failed'

Scheeler's role at Facebook was championing the platform to the Australasian market. He joined in the early 2010s, when it was still relatively new. Since then, the company has been accused of polarising modern society with its algorithms, amid multiple other scandals.

"It hasn't worked out quite as we planned," Scheeler said.

"I feel like I should have known more. Could I have done something? I was just a cog in the machine, but I can admit now I didn't realise, I didn't know, and I should have had more awareness of the bad things."

But Scheeler said that at the time, no one focused on the downsides of Facebook.

"I almost cringe now, how I was speaking about the way social media was going to change the world for the better. We would just gloss over anything that was potentially negative."

Some of the most egregious aspects of Facebook's track record on user well-being relate to teenagers - particularly the revelations that the company downplayed internal research showing Instagram was toxic for teenage girls. Scheeler said the reasons why are obvious.

"We're naive to think that Facebook, or any company, works against its own best interest," he said. In that respect, Scheeler added, "Facebook failed. I don't think the moral line was firmly drawn enough inside the company, and it was because the profit motive… overrode everything."

A 2017 internal memo revealed that Facebook actively offered advertisers the ability to target teenage users showing signs of low self-esteem with beauty products.

That was, according to Scheeler, who resigned from his role that same year, "unacceptable". But, he said, it again comes down to profit.

"Social media companies don't have an interest in you spending less time on their platform. They have an interest in you spending more time, whatever age you are."

Former CEO of Facebook Australia and New Zealand, Stephen Scheeler sits down with Guyon Espiner for an interview as part of '30 with Guyon Espiner'. Photo: RNZ / Cole Eastham-Farrelly

'I don't think Zuckerberg has the moral fibre to do and say the right things'

Scheeler gave a frank assessment of Facebook CEO Mark Zuckerberg, and of the company's controversial decision to pull back on content moderation in light of Donald Trump's return to the US presidency.

"My observation of Mark is, he's not a bad actor, he's not a mean-spirited person," he said. "But I don't think he has the moral fibre ... to do and say the right things. And I think to do and say the right thing at this moment was to not cave in to Trump."

"With the kind of power and influence that Mark has comes great responsibility. And I think at this moment, I feel like he's failed the test."

'There's a battle for your attention going on'

Scheeler said AI systems now dominate online life - systems he helped bring into the world.

"There's a battle for your attention going on," he said. "Facebook's one of those battling, and I think one of the problems we've got at the moment is we have AI ... ruling your attention in a way that you really don't understand or control."

That kind of design, he said, "is very addictive in its nature ... the question is, are you in control of your attention?"

The effects have the potential to be catastrophic - such as the United Nations' finding that Facebook played a "determining role" in the violence against Rohingya Muslims in Myanmar.

"I couldn't agree more," he said. "We can't even agree on facts ... and sometimes those facts can be used in different ways ... to inflame people, to think certain things."

Beyond describing how personalised, AI-led content curation can directly cause real-world harm, Scheeler admitted there is no practical way to escape the algorithm's influence on what you read, watch and hear.

"All these systems can be kind of gamed or influenced in different ways. But I think it's very hard for individuals, to be honest with you, to figure that out and to do it sustainably. If you're using the internet for what you really want to use it for, which is live your life, doing all that stuff ... is just not going to be sustainable."

'Part of my penance'

Scheeler's new venture, Omniscient, is his attempt at redemption - a company using artificial intelligence to decode the brain and revolutionise the treatment of mental illness.

"As part of my penance for what I built for Facebook, what I want to do in my life is to build AI for human good," he said.

Omniscient is already in the market. "We've already got regulatory clearances in the US and Australia ... we focus on using AI to decode your brain. And things like mental illness ... essentially, just faults in your circuitry. We build tools that can find those faults."

But even here, he admitted, the same forces that shaped social media loom large. The risks of AI are enormous - and poorly understood.

Asked about the growing concern among experts that AI could eventually lead to human extinction - the so-called "P-Doom" [probability of doom] measure - Scheeler didn't dismiss it, but remains cautiously optimistic.

"You can take two of the godfathers [of AI]. One will say this is existentially threatening, and the other will say, there's nothing to worry about."

But he sees a moral imperative in trying to do it better this time. "We have an ethical mission for our company," he said. "We get ethical advisory from outside parties ... we're building into the medical system, which already is covered by ethics."
