OpenAI has launched a new age prediction system for ChatGPT that estimates user maturity through behavioural analysis. The technology arrives ahead of a planned adult mode, marking a significant shift in the platform’s safety architecture.

By assessing user interactions rather than relying on self-reported birthdates, the system aims to automate the categorisation of adult and minor accounts. This transition is a prerequisite for the introduction of a less restricted adult mode intended for mature audiences in early 2026.

Age detection

OpenAI has begun a global rollout of an age prediction model designed to identify users under the age of 18. This system does not rely on traditional age-gate pop-ups or self-reported birthdates. Instead, it analyses account-level signals and behavioural data to estimate a user’s age.

The company confirmed that the technology is now being deployed across consumer plans, with a specific focus on protecting younger audiences from sensitive material.

This move marks a shift in how the platform manages user identity. The model evaluates several factors, including the length of time an account has been active, typical times of day when a user is online, and broader usage patterns over time. By looking at how a person interacts with the software, the algorithm attempts to classify them as either a minor or an adult. When the system identifies a user as likely being under 18, it automatically activates a set of enhanced safety settings.

Content restrictions

The activation of these safeguards leads to stricter filtering across a range of categories. Accounts flagged as belonging to minors will see limited exposure to depictions of graphic violence, self-harm, and extreme beauty standards. The system is also designed to block romantic or sexual role play and viral challenges that may encourage risky physical behaviour.

OpenAI stated that the implementation of these rules is rooted in research regarding adolescent psychology and risk perception. The goal is to provide a more restricted experience for teenagers while allowing for greater flexibility for older users.

If the prediction system is unable to determine an age with high confidence, the software defaults to the more restrictive safety settings. This cautious approach is intended to mitigate the risk of accidental exposure to harmful content.
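The decision logic described above can be sketched in a few lines. This is a minimal illustration, not OpenAI's actual implementation: the labels, the confidence threshold, and the policy names are all assumptions made for the example.

```python
# Hypothetical sketch of the confidence-based fallback described above.
# The threshold value and field names are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class AgePrediction:
    label: str         # "adult" or "minor"
    confidence: float  # model confidence, 0.0 to 1.0


CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off for acting on a prediction


def select_content_policy(prediction: AgePrediction) -> str:
    """Return the content policy to apply to an account.

    Only a high-confidence adult prediction lifts the restrictions;
    a minor prediction, or any low-confidence result, defaults to the
    restrictive teen settings, mirroring the cautious approach above.
    """
    if prediction.label == "adult" and prediction.confidence >= CONFIDENCE_THRESHOLD:
        return "standard"
    return "restricted"


print(select_content_policy(AgePrediction("adult", 0.95)))  # standard
print(select_content_policy(AgePrediction("adult", 0.60)))  # restricted
print(select_content_policy(AgePrediction("minor", 0.80)))  # restricted
```

The key design choice, as the article notes, is that uncertainty is treated the same as a minor classification: the system fails closed rather than open.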

Identity verification

Users who are incorrectly identified as minors by the algorithm have a path to restore standard access. OpenAI has partnered with Persona, a third-party identity verification service, to manage this process. To override the automated classification, users must provide a live selfie or a photograph of a government-issued identification document. This data is handled by Persona rather than OpenAI.

The verification process matches the user’s face against their provided ID to confirm their date of birth.

According to the service provider, these identification materials are typically deleted within seven days of the check. Once an adult is verified, the restrictive safety filters are removed from their account. Users may also choose to verify their age proactively to prevent the age prediction model from running on their account in the first place.

Adult mode

The development of this detection system is a prerequisite for a significant change in ChatGPT’s content policy. OpenAI executive Fidji Simo confirmed that the company intends to debut an adult mode in the first quarter of 2026.

This feature will allow verified adults to engage with content that was previously restricted, including more frank discussions of human sexuality, relationships, and medical topics.

This upcoming mode is expected to be an opt-in experience rather than a default setting. It is aimed at creative professionals and personal users who have complained that existing guardrails are too restrictive for complex writing or research.

By moving away from a universal content policy, the company aims to treat adults as mature users capable of managing their own interactions. However, the rollout of this mode is contingent on the age prediction system proving to be sufficiently accurate.

Safety concerns

The shift toward allowing mature content has drawn scrutiny from regulators and safety advocates. OpenAI is currently facing investigations by the Federal Trade Commission and several lawsuits regarding the impact of its technology on the mental health of younger users.

Critics argue that relying on behavioural signals for age estimation could lead to frequent misidentifications or privacy issues.

There are also concerns regarding the potential for users to bypass these new safeguards. While the age prediction model is designed to be more robust than an honour-system birthdate check, experts suggest that determined users may still find ways to mask their age-coded behaviour. Furthermore, the storage of government IDs by third-party vendors presents an ongoing data security risk, highlighted by previous breaches at other technology firms.

Regulatory compliance

The timing of these updates coincides with a global trend of stricter online age verification laws. Governments in several jurisdictions, including the European Union and Australia, are drafting or enforcing rules that require digital platforms to verify the age of their users more rigorously. OpenAI plans to expand its age prediction system to the European Union in the coming weeks to meet these regional requirements.

By implementing these tools now, the company is positioning itself to comply with emerging legislation while simultaneously creating a new revenue opportunity.

A less restricted model could drive higher engagement among premium subscribers who seek an uncensored experience. The success of this strategy will depend on whether the company can balance its commercial ambitions with its legal and ethical obligations to protect minors from adult-oriented content.