Internet regulation: What model is New Zealand really choosing? – David Harvey

February 15, 2026

New Zealand, of course, has not pursued anything so crude. Nor is the internet here unregulated. The Classification Office, working with the Department of Internal Affairs, already monitors objectionable material and has powers to block certain content. In January 2026 it released a report entitled Online Exposure: Experiences of Extreme or Illegal Content in Aotearoa, based on a survey of adults’ encounters with extreme or potentially illegal online material. One of its key findings was that harmful content is under-reported, prompting the office to publish clearer guidance on how and where citizens can report it.

Yet being “upsetting” is not the legal threshold. Under the Harmful Digital Communications Act 2015 (HDCA), harm is defined as “serious emotional distress”. The Act represents New Zealand’s existing regulatory posture: reactive, targeted and relatively restrained. It provides civil remedies for victims and criminal penalties where material is posted with intent to cause harm. Amendments have already brought intimate visual images within its scope, and a further proposal would extend those protections to sexualised deepfakes generated by AI.

Despite this, regulation has continued to expand by other means. In 2018, Internal Affairs launched a broad review of online regulation, followed in 2021 by the Safer Online Services and Media Platforms review.

The latter proposed a sweeping new regulatory architecture, including a single national regulator overseeing platforms, content moderation systems and compliance obligations. Even at the discussion-paper stage, the proposals raised serious concerns: they were complex, invasive, and rested on contestable assumptions about harm, capability and enforcement.

That process ultimately stalled, and in 2024 the review was formally terminated by the Minister of Internal Affairs. But the regulatory impulse did not disappear. Advocacy groups soon shifted the focus to children and social media. A campaign to restrict under-16 access gained momentum, culminating in a private member’s bill, political endorsement at the highest level, and a parliamentary inquiry into “the harm young New Zealanders encounter online”.

An interim report released in December 2025 pointed clearly toward what might be described as “Safer Online Services Lite”, recommending restrictions on under-16 access and the creation of a national online safety regulator. Ministers have since confirmed that Cabinet decisions have already been taken and announcements are imminent.

New Zealand is not alone in this turn. Australia, France and Spain have endorsed teenage social media bans or restrictions. The United Kingdom’s Online Safety Act goes further still, imposing obligations so expansive that police have reportedly visited private homes over allegedly unlawful online posts.

Advocates here argue that New Zealand can “piggyback” on overseas reforms while adapting them to local conditions. Others suggest simply updating the HDCA to deal with emerging harms, supplemented by culturally tailored support mechanisms.

Meanwhile, existing regulators are also pushing at the boundaries of their jurisdiction. The Broadcasting Standards Authority has asserted oversight of online content that resembles broadcasting, with legal advice suggesting that exemptions for on-demand services should be narrowly read. A current case involving The Platform and Reality Check Radio may significantly expand the reach of an already powerful regulator.

All of this risks obscuring the real issue. The debate is not ultimately about whether teenagers should have restricted access to social media, or whether online harm exists. It is about which regulatory model New Zealand is prepared to adopt – and what that choice means for speech, power and accountability.

At a high level, three models are available.

The first is prior restraint. This approach prevents content from being published at all, through upload filters, pre-publication moderation or legal blocking mechanisms. Elements of this model appear in the UK’s Online Safety framework, where platforms are required to prevent certain categories of content from ever appearing online. Copyright filters on platforms like YouTube operate in a similar way.

The problem is well known: prior restraint suppresses speech before illegality or harm is established, often through automated systems that over-block lawful expression. The chilling effects are real and measurable, leading users to self-censor and narrowing the range of public debate.

The second is a code-compliance or hybrid restraint model. Here, regulators mandate or approve codes of practice that platforms must follow. The Broadcasting Standards Authority and New Zealand Media Council already operate this way. The abandoned Safer Online Services proposals would have extended this model to digital platforms, with compulsory compliance and penalties for failure.

While this approach avoids direct censorship of individual posts, it embeds value judgments into technical systems that are opaque, rigid and difficult to contest. Over time, such systems tend to lock in assumptions, blur the line between private moderation and state enforcement, and create an illusion of robust governance without genuine transparency.

The third model is reactive and harm-based. This is the approach New Zealand has largely taken so far. Content is regulated after publication, in response to complaints or demonstrable harm. The HDCA sits squarely within this model, as do notice-and-takedown regimes in the EU and post-event moderation practices adopted by platforms after Christchurch.

The strength of this model lies in its restraint. It treats prior restraint as exceptional, ties intervention to clearly defined harms, and preserves space for lawful but controversial speech. It also keeps core determinations of illegality within public institutions rather than private algorithms.

Each model carries trade-offs. But the choice matters. Once a regulatory architecture is built – especially one embedded in code – it is difficult to dismantle. As New Zealand edges toward new online safety measures, it should be clear-eyed about what it is adopting.

The real danger is not under-16s on social media. It is drifting, almost by default, into a system of pervasive prior restraint without ever having an honest debate about whether that is the internet – or the democracy – we want.
