The way we use the internet in Australia is changing. Soon, it won’t just be social media platforms asking to verify your age. Come December, age verification requirements will also extend to search engines – with significant ramifications.
That means you may need to scan your face or do an identity check to use a search engine as a logged-in user. And it’s unlikely to stop there: the eSafety commissioner is considering rules for mandatory age checks across the entire internet landscape.
Whether or not you support the idea of age-gating the internet, this is a huge, unprecedented change. These are not small decisions; they will impact everyone who uses the internet in Australia – not just people under 16. There are implications for privacy, digital inclusion, access to information and online participation that go beyond the controversial teen social media ban. All of this warrants meaningful public debate.
If this is the first time you’re hearing about it, you’re not alone. Despite the significance of the changes, these latest rules are the result of industry codes, which work differently from regular legislation. These codes don’t go through parliament. Instead, they’re developed by the tech industry and registered by the eSafety commissioner in a process called co-regulation. On one hand, this can be good: it allows for more flexibility and technology-specific detail than legislation typically can. On the other, it creates a risk of industry co-option and, by bypassing the parliamentary process, hands an enormous amount of power to an unelected official (in this case, the eSafety commissioner).
Greens senator David Shoebridge has called the implications of age verification for search engines “staggering” and noted that “these proposals don’t have to go through an elected parliament and we can’t vote them down no matter how significant concerns are. That combined with lack of public input is a serious issue.”
The age verification policy development process has been littered with blunders that make a mockery of meaningful consultation and evidence-based policy development. It is particularly striking that these codes were drafted before the completion of the government’s $6.5m trial into the efficacy of age assurance. Later, the trial’s preliminary findings conceded the technology is not guaranteed to be effective, and noted “concerning evidence” that some technology providers were seeking to collect too much personal information.
While a government-commissioned survey on the teen social media ban found overwhelming support in theory, it also found most people have no idea what that means in practice, with many uncomfortable with the methods it might entail – such as biometric face scanning or handing over credit card details. And while there was much fanfare around the social media ban, it’s not clear there is a social licence to extend this approach to search engines and beyond. It seems many people may be unpleasantly surprised.
Importantly, it’s not just about verifying age, but what happens after that. Even if a person’s age is accurately verified (not a given – research shows the process is fraught with problems), there is then the challenge of identifying and filtering out content. The intention is to limit young people’s access to pornography, high-impact violence and other inappropriate but not illegal material. It may seem like a simple task – you probably have your own gut sense of what should be filtered out. But automating content moderation at scale is notoriously complex, both technologically (how do you avoid capturing too much or too little?) and politically (who gets to decide what is or is not appropriate?).
Some digital media scholars have called the idea of using tech to restrict online content by age “problematic”. In particular, they highlight how automated content moderation often incorrectly restricts sex education, sexual health information, harm reduction and health promotion. There are also challenges for digital inclusion: for some, digital identification can be a major barrier to online participation. Digital rights advocate and technologist Kathryn Gledhill-Tucker highlights that search engines are not an “optional luxury” but have “become basic services in a digital world”. Taken together, there are serious questions about the impact these codes will have on people’s right to access information – questions that ideally would be addressed through public scrutiny.
None of this is to say the Australian government should do nothing. Nor is it to defend the behaviour of tech giants. Governments can and should intervene to challenge their power and force them to clean up the swamp of online harms.
But age verification isn’t the only option on the table. Academics, advocates and digital policy experts have suggested a range of other approaches to enhance online safety. Gledhill-Tucker notes a “profound disregard for human rights advocates, who have called for meaningful legislation to temper the power of large technology companies for years”. For example, Australia could move away from the current “content-first” approach, which becomes a game of whack-a-mole of removal and restriction, towards a “systems-first” paradigm, which prioritises challenging the underlying business models to create systemic change.
Australians are going to have differing opinions about how best to minimise harms on the internet, but we should at least all have the opportunity to participate in meaningful public debate about such significant changes to our online lives. In this case, it seems the horse has already bolted.