Māori and Pasifika young people described racism, stereotyping and harmful narratives amplified by algorithms. Neurodiverse children were often more vulnerable to emotional dysregulation and struggled to disengage from platforms engineered for continuous engagement.
Young people already experiencing anxiety, bullying or instability at home told the committee of MPs that social media often intensified their distress, turning everyday worries into crises. While the committee has not yet drawn any final conclusions, the evidence it received makes clear that online harm frequently compounds existing inequities rather than falling evenly across all young people.
The committee also examined the role that platform design plays in magnifying harm. Infinite scroll, recommendation algorithms, appearance filters and notification loops were repeatedly cited as features that pull young people into patterns of comparison, compulsion and self-doubt. These design choices, the committee noted, are not neutral. They shape behaviour and emotional responses in ways that young people themselves cannot reasonably be expected to navigate alone.
In this context, online harm is not simply the result of “bad choices” or insufficient parental supervision; it is a predictable outcome of environments designed without children’s developmental needs in mind.
As a result, one National MP on the committee publicly described online harm as a public health issue, arguing that the scale and nature of what the inquiry heard now demand co-ordinated action from the Government, industry and society, not just more resilient families.
The report makes clear that New Zealand’s regulatory system is fragmented and outdated. No single agency is responsible for overseeing online safety, and there is no mechanism to govern platform design, algorithms or age enforcement. Many parents and teachers told the committee they were not only overwhelmed but also operating without any consistent national guidance or structural support. Schools described pastoral-care systems strained by online conflict and sleep-deprived students, while parents said they felt abandoned by a system that expects them to manage forces far beyond their control.
In the committee’s words, collective action is overdue. This is the backdrop against which Australia has acted, and what many of us have been warning about for years.
Australia’s decisive moment
This week, Australia moved decisively to ban social media accounts for under-16s. Once you understand the scale of harm, the developmental vulnerability of teenagers and the real stories unfolding in homes and schools, the move begins to feel like the next logical step in a long line of public health protections. With the report’s findings laid bare, those who say age restriction is abrupt or heavy-handed are shown to be completely out of touch with the reality of the problem.
For many Australian parents, teachers and community leaders, this step reflects a long-needed acknowledgment that the digital environment has moved beyond what families, communities and schools can reasonably manage on their own. And the reaction across Australia has matched the significance of the moment: relief, gratitude and a collective exhale from parents who have felt outmatched for years.
Against that backdrop, New Zealand’s position becomes sharper. When you place Australia’s action alongside the evidence emerging from our own inquiry and public-health research, and when you listen to what teenagers are saying, the picture shifts. What once seemed bold now aligns with how we have historically approached other societal risks, from smoke-free restaurants to raising the driving age and restricting tobacco.
What young people are telling us
The research of the Public Health Communication Centre (PHCC) makes the situation even clearer. Its survey of more than 500 New Zealand teenagers aged 13-17 found that almost every young person used social media, typically beginning between ages 10 and 13. Daily usage was high: while the average was around 2.5 hours, nearly one in three teens reported spending five hours or more a day on platforms designed to keep them engaged.
One in five met clinical criteria for “problematic use”, a pattern that closely resembles behavioural addiction. Almost half said social media significantly disrupted their schoolwork, sleep, family time and friendships. Many described feeling trapped, anxious, pressured to perform, constantly comparing themselves and feeling somewhat invisible when they did not post. Almost half said they started too young; remarkably, nearly four in 10 said they wished social media had never been invented.
These findings tell a story that does not match the common narrative often amplified by Big Tech of social media as harmless fun. Instead, they reveal a generation struggling to navigate a digital world they were thrown into long before their brains were developmentally ready to cope.
This isn’t just social disruption; it’s developmental risk. The medical, neurological and psychological evidence presented to the inquiry showed that adolescence is a time of intense sensitivity. The prefrontal cortex, responsible for self-control, emotional regulation and long-term planning, is still maturing well into early adulthood, while the part of the brain tuned to reward and social belonging is hyperactive.
In such a context, handing children smartphones and unrestricted access to social media is akin to giving them access to powerful stimulants at a vulnerable moment. The result is predictable: anxiety, poor sleep, low self-esteem, addictive behavioural patterns and impaired concentration.
I was reminded of this again just last week. Over dinner, a friend’s 16-year-old daughter told me without hesitation that she supports Australia’s new minimum age and wished it had been in place years earlier in New Zealand. By 12, many of her peers were already deeply enmeshed in the social media world, constantly posting, comparing, worrying whether their next like or message would make them belong.
She described it as exhausting. “It would’ve been easier if none of us had it,” she said. Her words captured the core truth: no teenager should be expected to resist alone when the system is designed against them. Boundaries only work when they apply to all.
The first wave of Australian reactions
As the ban took effect this week, the atmosphere in many Australian homes changed overnight. Families who have long struggled to enforce screen-time rules described a new sense of relief. One mother told a national broadcaster she finally felt “backed”, that someone, somewhere, had recognised the burden her children were under.
Principals and teachers spoke of hope that classrooms might feel calmer again, that genuine downtime might return and that homework might no longer be derailed by late-night doom-scrolling. Mental health advocates called it a “reset moment”, a rare instance of public policy catching up with psychological and social reality.
Not every reaction was celebratory. Some teenagers, especially those approaching 16, voiced uncertainty or resentment. They worried about losing social connections or being treated like children. Some parents expressed concern about VPNs, shared devices or unintended consequences.
But still, the dominant tone from those closest to the issue, parents, teachers, mental health workers and many teens, has been one of tentative hope: a belief that maybe the pressure would ease, maybe peer comparison would slow, maybe youthful sleep cycles would recover.
New Zealand steps into the debate
Earlier this week, on the Duncan Garner: Editor in Chief podcast, Education Minister Erica Stanford confirmed that the coalition Government intends to legislate a similar social media ban for under-16s before the next election. This commitment has been echoed at the highest level of Government. On the One Young Mind podcast, Prime Minister Christopher Luxon reiterated his determination to act on youth online safety, saying he would deliver meaningful change “or die trying”.
That kind of public commitment matters. It shows the Government recognises the urgency. It acknowledges that the issue is not simply a matter of personal parenting or private choice, but one of public wellbeing and childhood safety. It brings New Zealand into the same moment Australia is now living, the moment where the cost of allowing social media to run unchecked becomes too high to ignore.
What Australia has actually done
Understanding Australia’s approach is essential, not least because it has been mischaracterised by some. What makes its policy a useful model is that it is nuanced, not heavy-handed. The law targets platforms whose main purpose is public social interaction: TikTok, Instagram, YouTube, Snapchat, Reddit and the like. It does not attempt to ban all digital communication. Messaging apps and multiplayer games sit outside its scope.
To assess age, the law uses “successive validation”: behavioural signals, anonymised age-estimation technology, liveness checks and optional ID verification. Crucially, verification data cannot be used for advertising or profiling, and oversight rests with the Privacy Commissioner. There is no compulsory digital identity. Digital ID is simply one of several tools available, not a requirement and not the foundation of the system. Making that explicit prevents misinterpretation and ensures the policy cannot be reframed as a broader identity project.
Like all regulations, it is not a perfect solution; no law ever is. But it is thoughtful and proportionate, balancing privacy, practicality and safety, and it will no doubt evolve as the environment changes.
Some critics say regulation is pointless, that “we can’t legislate our way out of this”. That logic doesn’t hold. We don’t abandon road rules because accidents still happen. We don’t scrap alcohol laws because misuse continues. Legislation is a public health tool we use to reduce harm, set norms and draw boundaries around what society considers acceptable. Social media regulation belongs in the same category.
And while social media undeniably brings creativity, connection and community, we must remember this simple truth: Australia has introduced a minimum age, not a permanent ban. Any positive aspects can wait until young people are developmentally ready to navigate them safely. We already accept this principle for driving, drinking, gambling and a range of other age-restricted activities. This is no different.
The road ahead
Australia’s move is part of a wider shift to build a safer digital environment: default blurring of pornography and violent content, restrictions on harmful AI material for minors, mandatory redirection to mental health support for suicide-related searches, and emergency powers to block violent or extremist content. It marks a systemic, rather than symbolic, shift.
New Zealand has now committed to acting. The select committee’s evidence shows clearly why that commitment is needed. Our current systems are fragmented, voluntary and under-resourced, and they are not fit for purpose. Parents cannot be expected to protect their kids from platforms engineered to exploit vulnerability. Schools cannot continue to shoulder the fallout alone. And young people themselves have said they started too young and wished adults had intervened sooner.
The question for New Zealand is no longer whether we will act, because that commitment has been made. The real question is how quickly we turn words into action, and whether we move at a pace that genuinely protects our young people. Australian teens are now safer than their Kiwi cousins, and the evidence here is impossible to ignore.