
On December 10, 2025, Australia’s social media age-restriction law officially came into effect, restricting young people under the age of 16 from accessing 10 popular platforms: TikTok, Instagram, Facebook, Threads, X, Snapchat, Twitch, Kick, Reddit and YouTube.
The restrictions, passed by the Senate in November 2024, aim to protect young Australians from the pressures and risks users can be exposed to while logged in to social media accounts: design features that encourage them to spend more time on screens while serving up content that can harm their health and wellbeing.
While many have welcomed the ban, some experts caution that it remains to be seen how effective it will be, and whether other countries will adopt it as a blueprint.
One of them is Professor Daswin De Silva, Professor of AI and Analytics and Director of AI Strategy at La Trobe University.
“The primary limitation is the absence of a regulating/governing body and the onus on social media companies to set up age-assurance methods to satisfy the law,” Professor De Silva said.
“Despite the government’s comparisons to seatbelt and alcohol laws, this absence of a regulating/governing body can make the law ineffective as social media companies could get away with the bare minimum of deactivating known underage accounts, and not doing much else.”
Professor De Silva pointed out that the European Union, Malaysia and several U.S. states are closely following the Australian law.
“In the UK, the Online Safety Act was updated in July 2025 to include child safety,” he said. “Without banning children, this law requires platforms to prevent children from accessing harmful and illegal content, another setting to watch closely.”
Protecting kids or exposing data?
In August 2025, the Federal Government released a ‘landmark study’ into age-assurance technologies. The report identified three types of methods: verification, estimation and inference. All three require additional data collection from the consumer end, in the form of formal ID checks, biometrics and behavioural analytics, respectively.
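To make that distinction concrete, the toy Python sketch below routes a sign-up through the three method types. Every name in it (the `AgeSignal` structure, `assure_age` and the three checker stubs) is a hypothetical illustration, not anything taken from the report or from any platform’s actual systems:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeSignal:
    """Hypothetical container for the extra data each method collects."""
    government_id: Optional[str] = None   # verification: formal ID check
    face_image: Optional[bytes] = None    # estimation: biometric analysis
    activity_log: Optional[list] = None   # inference: behavioural analytics

# Placeholder checkers; a real system would call external services or models.
def check_id_document(doc: str) -> int:
    return 18  # pretend the ID proves the holder is 18

def estimate_age_from_face(image: bytes) -> int:
    return 15  # pretend a biometric model estimated 15

def infer_age_from_behaviour(log: list) -> int:
    return 14  # pretend behavioural analytics inferred 14

def assure_age(signal: AgeSignal, minimum_age: int = 16) -> bool:
    """Dispatch to whichever method the collected data supports."""
    if signal.government_id is not None:
        return check_id_document(signal.government_id) >= minimum_age
    if signal.face_image is not None:
        return estimate_age_from_face(signal.face_image) >= minimum_age
    if signal.activity_log is not None:
        return infer_age_from_behaviour(signal.activity_log) >= minimum_age
    return False  # no signal collected: deny access by default

print(assure_age(AgeSignal(government_id="DL-1234")))  # True under the stubs
```

Even in this toy, every branch works only because the platform has collected data (an ID, a face image, an activity log) that it would not otherwise hold, which is exactly the privacy trade-off discussed next.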
Professor De Silva said it’s no secret social media companies rely on user data to fuel revenue through targeted ads.
“Having a law that requires such companies to collect more identifiable data which then becomes proprietary is certain to increase risks to privacy and data security,” he said.
“The ban is also referred to as a ‘delay’, as kids can reactivate their accounts after they turn 16, at which point they could be met with perfectly targeted advertisements drawing on the high-quality data already shared to enforce the ban. Third-party data brokers and data breaches by cybercriminals are further risk factors of this ban.”
Can AI clean up the Internet better than bans?
With some AI tools improving in sophistication and accuracy, this technology has the potential to play a valuable supporting role in identifying harmful or illegal content, rapidly scanning vast volumes of material and flagging potential risks.
While AI is not a replacement for trained human judgment, some believe it could act as an early-warning system, helping experts prioritise cases, improve response times and strengthen overall online safety.
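As a concrete illustration of that early-warning role, here is a minimal Python sketch in which an automated scorer only flags and ranks content for human moderators rather than removing anything itself. The keyword heuristic stands in for a real trained model, and all names (`toy_risk_score`, `triage`, the threshold) are hypothetical assumptions:

```python
import heapq

def toy_risk_score(text: str) -> float:
    """Stand-in for an ML classifier: a crude keyword heuristic.

    A real early-warning system would call a trained model here; the
    heuristic just keeps the sketch self-contained and runnable.
    """
    flagged_terms = ("scam", "self-harm", "graphic violence")
    hits = sum(term in text.lower() for term in flagged_terms)
    return hits / len(flagged_terms)

def triage(posts: list[str], review_threshold: float = 0.3) -> list[tuple[float, str]]:
    """Queue risky posts for human review, highest risk first.

    The AI never deletes content: it only prioritises the moderators'
    queue, matching the 'adjunct to human experts' role described here.
    """
    heap: list[tuple[float, str]] = []
    for post in posts:
        score = toy_risk_score(post)
        if score >= review_threshold:
            heapq.heappush(heap, (-score, post))  # negate so riskiest pops first
    return [(-s, p) for s, p in (heapq.heappop(heap) for _ in range(len(heap)))]

posts = ["lovely holiday photos", "this scam promises easy money", "weather update"]
for score, post in triage(posts):
    print(f"{score:.2f}  {post}")  # moderators review the flagged post first
```

The design choice worth noting is that the model’s output feeds a review queue, not a takedown action, which is the supporting role described above.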
However, as Professor De Silva notes, AI still comes with serious caveats.
“Given the non-deterministic nature of AI, current evidence of AI hallucinations and undiscovered risks of AI such as psychosis and self-harm, it would be premature to expect AI-driven moderation to be more effective than the impactful human-led methods of content moderation, parental supervision and digital literacy,” he said.
“AI could serve as an adjunct to support human experts and operators to determine illegal and harmful content.”