OpenAI says it has begun deploying an age prediction model to determine whether ChatGPT users are old enough to view “sensitive or potentially harmful content.”

Chatbots from OpenAI and its rivals have been linked to a series of suicides, sparking litigation and a congressional hearing. AI outfits therefore have excellent reasons to make the safety of their services more than a talking point, both for minors and the adult public.

Hence we have OpenAI’s Teen Safety Blueprint, introduced in November 2025, and its Under-18 Principles for Model Behavior, which debuted the following month.

OpenAI is under pressure to turn a profit, knows its plan to serve ads needs to observe rules about marketing to minors, and has erotica in the ChatGPT pipeline. That all adds up to a need to partition its audience and avoid exposing minors to damaging material.

Part of OpenAI’s plan has been to develop an age prediction system so that ChatGPT can automatically present an age-appropriate experience, at least among minors whose parents haven’t steered them away from engaging with chatbots.

Many young people interact with these models. During a September 16, 2025 Senate subcommittee hearing, "Examining the Harm of AI Chatbots," Mitch Prinstein, chief of psychology strategy and integration at the American Psychological Association, offered written testimony to the effect that over half of all US adolescents over the age of 13 now use generative AI. For those under 13, usage is estimated to be between 10 and 20 percent.

Prinstein thinks that should not be the case. “AI systems designed for adults are fundamentally inappropriate for youth and require specific, developmentally informed safeguards,” he said.

OpenAI has therefore been working on an automated age prediction system, which the company described last September. “This isn’t easy to get right, and even the most advanced systems will sometimes struggle to predict age,” the biz said at the time.

Age prediction or inference is distinct from age verification (checking government documents) and age estimation (using biometric signals like facial analysis). It relies on identifying facts about an individual and drawing a conclusion from those facts. For OpenAI's purposes, this may involve looking at the topics discussed during ChatGPT sessions and other account-level factors, such as typical usage hours.

On Tuesday, the company offered a progress report in which it outlined how ChatGPT is using the company’s age prediction model to determine whether an account belongs to someone under the age of 18.

“The model looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age,” OpenAI explained, adding that the global rollout of the prediction-bot will reach the EU in a few weeks.
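OpenAI hasn't published how its model weighs those signals, but the general approach it describes — combining weak behavioral and account-level signals into a single under-18 judgment — can be sketched in miniature. Everything below (the signal names, weights, and threshold) is invented for illustration, not drawn from OpenAI's system:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    stated_age: int               # age the user entered at signup
    account_age_days: int         # how long the account has existed
    late_night_ratio: float       # fraction of activity between 10pm and 6am
    weekday_daytime_ratio: float  # fraction of activity during school hours

def likely_under_18(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Combine weak signals into one score; treat >= threshold as under-18."""
    score = 0.0
    if s.stated_age < 18:
        score += 0.6  # self-reported age: strong but easily spoofed
    if s.account_age_days < 90:
        score += 0.1  # newer accounts skew younger in this toy model
    if s.weekday_daytime_ratio < 0.2:
        score += 0.2  # little weekday-daytime use suggests school attendance
    if s.late_night_ratio > 0.4:
        score += 0.1
    return score >= threshold

teen = AccountSignals(stated_age=16, account_age_days=30,
                      late_night_ratio=0.5, weekday_daytime_ratio=0.1)
adult = AccountSignals(stated_age=35, account_age_days=900,
                       late_night_ratio=0.1, weekday_daytime_ratio=0.6)
print(likely_under_18(teen))   # True
print(likely_under_18(adult))  # False
```

A real system would presumably learn such weights from labeled data rather than hand-tune them, but the toy version makes the trade-off concrete: every signal here is weak on its own, which is why, as OpenAI concedes below, misclassification is inevitable.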

For users the model deems under 18, OpenAI will activate additional safety settings. The company claims those settings will reduce exposure to graphic violence and gore, viral challenges designed to elicit harmful behavior, sexual, romantic, or violent role play, depictions of self-harm, and content that promotes extreme beauty standards, unhealthy dieting, or body shaming.

“No system is perfect,” OpenAI acknowledges in its help documentation. “Sometimes we may get it wrong. If you are 18 or older and you were put into the under-18 experience by mistake, you can verify your age.”

Doing so requires ChatGPT users to engage with Persona, a third party identity and age-checking company, either by sending a live selfie or uploading a photo of a government-issued ID. Those who don't want to be subject to OpenAI's age prediction system may also choose to verify their age through Persona, which claims it does not share or sell personal data collected for age assurance.

OpenAI is following a path already trodden by tech companies in Australia, which have had to adopt age-checking tech to comply with rules that disallow social media usage for those under 16.

Prior to the implementation of that law, Australia's Age Assurance Technology Trial (AATT) came to a broadly positive conclusion [PDF] about age check tech: age verification can be done, despite challenges, with an average accuracy of 97.05 percent, though it works less well "for older adults, non-Caucasian users and female-presenting individuals near policy thresholds."

When the Australian Broadcasting Corporation reported on the preliminary findings of the AATT in June last year, it found that age verification systems guessed people's ages to within 18 months only 85 percent of the time.

Advocacy organizations remain skeptical. Mozilla last month said, “While many technologies exist to verify, estimate, or infer users’ ages, fundamental tensions around effectiveness, accessibility, privacy, and security have not been resolved.”

Alexis Hancock, director of engineering at the Electronic Frontier Foundation, told The Register in an email, "We encourage the safety features promoted to be available to everyone using chat LLMs such as ChatGPT. However, OpenAI is taking the moment to further train an age prediction model, where a false prediction will fall on the user to give private information to further verify their age to another company."

Hancock said that factors like account age and usage patterns may be less reliable given that OpenAI has only been offering ChatGPT for four years. “However, the model itself is not obligated to be correct, nor can the decisions be challenged,” she said.

The focus on enforcing age gates rather than on accurate age verification, she said, is a pattern developing across other age checking systems as well.

The Computer & Communications Industry Association, which represents tech giants like Amazon, Apple, and Google, also isn’t thrilled with the possibility that age verification may become a requirement within app stores. The age checking tech, the group said last October, is “unworkable in practice.”

But as long as ChatGPT can deliver sexy banter, and do so alongside ads, OpenAI needs to try to make age prediction tech work. ®