{"id":382032,"date":"2026-01-21T10:26:07","date_gmt":"2026-01-21T10:26:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/382032\/"},"modified":"2026-01-21T10:26:07","modified_gmt":"2026-01-21T10:26:07","slug":"openai-will-try-to-guess-your-age-before-chatgpt-gets-spicy-the-register","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/382032\/","title":{"rendered":"OpenAI will try to guess your age before ChatGPT gets spicy \u2022 The Register"},"content":{"rendered":"<p>OpenAI says it has begun deploying an age prediction model to determine whether ChatGPT users are old enough to view &#8220;sensitive or potentially harmful content.&#8221;<\/p>\n<p>Chatbots from OpenAI and its rivals are linked to a <a href=\"https:\/\/www.nytimes.com\/2025\/11\/06\/technology\/chatgpt-lawsuit-suicides-delusions.html\" rel=\"nofollow noopener\" target=\"_blank\">series<\/a> of <a href=\"https:\/\/www.nytimes.com\/2025\/08\/26\/technology\/chatgpt-openai-suicide.html\" rel=\"nofollow noopener\" target=\"_blank\">suicides<\/a>, sparking litigation and <a href=\"https:\/\/www.judiciary.senate.gov\/committee-activity\/hearings\/examining-the-harm-of-ai-chatbots\" rel=\"nofollow noopener\" target=\"_blank\">a congressional hearing<\/a>. 
AI outfits therefore have excellent reasons to make the safety of their services more than a talking point, both for minors and the adult public.<\/p>\n<p>Hence we have OpenAI&#8217;s <a href=\"https:\/\/openai.com\/index\/introducing-the-teen-safety-blueprint\/\" rel=\"nofollow noopener\" target=\"_blank\">Teen Safety Blueprint<\/a>, introduced in November 2025, and its <a href=\"https:\/\/openai.com\/index\/updating-model-spec-with-teen-protections\/\" rel=\"nofollow noopener\" target=\"_blank\">Under-18 Principles for Model Behavior<\/a>, which debuted the following month.<\/p>\n<p>OpenAI is under pressure to turn a profit, knows its plan to <a href=\"https:\/\/www.theregister.com\/2026\/01\/17\/openai_chatgpt_ads\/\" rel=\"nofollow noopener\" target=\"_blank\">serve ads<\/a> needs to observe rules about marketing to minors, and has <a href=\"https:\/\/www.theregister.com\/2025\/10\/14\/openai_chatgpt_ai_erotica\/\" rel=\"nofollow noopener\" target=\"_blank\">erotica<\/a> in the ChatGPT pipeline. That all adds up to a need to partition its audience and avoid exposing minors to damaging material.<\/p>\n<p>Part of OpenAI&#8217;s plan has been <a href=\"https:\/\/openai.com\/index\/building-towards-age-prediction\/\" rel=\"nofollow noopener\" target=\"_blank\">to develop an age prediction system<\/a> so that ChatGPT can automatically present an age-appropriate experience, at least among minors whose parents haven\u2019t steered them away from engaging with chatbots.<\/p>\n<p>Many young people interact with these models. 
During a September 16, 2025 Senate subcommittee hearing, &#8220;<a href=\"https:\/\/www.judiciary.senate.gov\/committee-activity\/hearings\/examining-the-harm-of-ai-chatbots\" rel=\"nofollow noopener\" target=\"_blank\">Examining the Harm of AI Chatbots<\/a>,&#8221; Mitch Prinstein, chief of psychology strategy and integration at the American Psychological Association, offered written testimony to the effect that more than half of all US adolescents aged 13 and older now use generative AI. For those under 13, usage is estimated to be between 10 and 20 percent.<\/p>\n<p>Prinstein thinks that should not be the case. &#8220;AI systems designed for adults are fundamentally inappropriate for youth and require specific, developmentally informed safeguards,&#8221; he said.<\/p>\n<p>OpenAI has therefore been working on an automated age prediction system, which the company described last September. &#8220;This isn&#8217;t easy to get right, and even the most advanced systems will sometimes struggle to predict age,&#8221; the biz <a href=\"https:\/\/openai.com\/index\/building-towards-age-prediction\/\" rel=\"nofollow noopener\" target=\"_blank\">said<\/a> at the time.<\/p>\n<p>Age prediction or inference is distinct from age verification (checking government documents) and age estimation (using biometric signals like facial analysis). It relies on identifying facts about an individual and drawing a conclusion based on those facts. 
For OpenAI&#8217;s purposes, this may involve looking at the topics discussed during ChatGPT sessions and other factors associated with one&#8217;s account, like common usage hours.<\/p>\n<p>On Tuesday, the company offered a <a href=\"https:\/\/openai.com\/index\/our-approach-to-age-prediction\/\" rel=\"nofollow noopener\" target=\"_blank\">progress report<\/a> in which it outlined how ChatGPT is using the company&#8217;s age prediction model to determine whether an account belongs to someone under the age of 18.<\/p>\n<p>&#8220;The model looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user&#8217;s stated age,&#8221; OpenAI explained, adding that the global rollout of the prediction-bot will reach the EU in a few weeks.<\/p>\n<p>When it detects users deemed to be under 18, OpenAI will activate additional safety settings. The company claims those settings will reduce the incidence of graphic violence or gory content, of viral challenges designed to elicit harmful behavior, of sexual, romantic, or violent role playing, of depictions of self-harm, and of content that promotes extreme beauty standards, unhealthy dieting, or body shaming.<\/p>\n<p>&#8220;No system is perfect,&#8221; OpenAI acknowledges in its <a href=\"https:\/\/help.openai.com\/en\/articles\/12652064-age-prediction-in-chatgpt\" rel=\"nofollow noopener\" target=\"_blank\">help documentation<\/a>. &#8220;Sometimes we may get it wrong. If you are 18 or older and you were put into the under-18 experience by mistake, you can verify your age.&#8221;<\/p>\n<p>Doing so requires ChatGPT users to engage with Persona, a third-party identity and age-checking company, either by sending a live selfie or uploading a photo of a government-issued ID. 
Those who don&#8217;t want to be subject to OpenAI&#8217;s age prediction system may also choose to verify their age through Persona, which <a href=\"https:\/\/withpersona.com\/legal\/privacy-policy#notice-for-individuals-verifying-their-age\" rel=\"nofollow noopener\" target=\"_blank\">claims<\/a> it does not share or sell personal data collected for age assurance.<\/p>\n<p>OpenAI is following a path already trodden by tech companies in Australia, which have had to adopt age-checking tech to comply with rules that disallow social media usage <a href=\"https:\/\/www.theregister.com\/2025\/12\/09\/australian_social_media_ban\/\" rel=\"nofollow noopener\" target=\"_blank\">for those under 16<\/a>.<\/p>\n<p>Prior to the implementation of that law, Australia&#8217;s Age Assurance Technology Trial (AATT) came to a self-perpetuating <a href=\"https:\/\/www.infrastructure.gov.au\/sites\/default\/files\/documents\/aatt_part_a_digital.pdf\" rel=\"nofollow noopener\" target=\"_blank\">conclusion<\/a> [PDF] about age-check tech: Age verification can be done, despite challenges, with an average accuracy of 97.05 percent, though less so &#8220;for older adults, non-Caucasian users and female-presenting individuals near policy thresholds.&#8221;<\/p>\n<p>When the Australian Broadcasting Corporation <a href=\"https:\/\/www.abc.net.au\/news\/2025-06-19\/teen-social-media-ban-technology-concerns\/105430458\" rel=\"nofollow noopener\" target=\"_blank\">reported<\/a> on the preliminary findings of the AATT in June last year, age verification systems guessed people&#8217;s ages within 18 months only 85 percent of the time.<\/p>\n<p>Advocacy organizations remain skeptical. 
Mozilla last month <a href=\"https:\/\/blog.mozilla.org\/netpolicy\/2025\/12\/19\/australias-social-media-ban-why-age-limits-wont-fix-what-is-wrong-with-online-platforms\/\" rel=\"nofollow noopener\" target=\"_blank\">said<\/a>, &#8220;While many technologies exist to verify, estimate, or infer users&#8217; ages, fundamental tensions around effectiveness, accessibility, privacy, and security have not been resolved.&#8221;<\/p>\n<p>Alexis Hancock, director of engineering at the Electronic Frontier Foundation, told The Register in an email, &#8220;We encourage the safety features promoted to be available to everyone using chat LLMs such as ChatGPT. However, OpenAI is taking the moment to further train an age prediction model, where a false prediction will fall on the user to give private information to further verify their age to another company.&#8221;<\/p>\n<p>Hancock said that factors like account age and usage patterns may be less reliable given that OpenAI has only been offering ChatGPT for four years. &#8220;However, the model itself is not obligated to be correct, nor can the decisions be challenged,&#8221; she said.<\/p>\n<p>The focus on enforcing age gating rather than on accurate age verification, she said, is a pattern developing in other age-checking systems.<\/p>\n<p>The Computer &amp; Communications Industry Association, which represents tech giants like Amazon, Apple, and Google, also isn&#8217;t thrilled with the possibility that age verification may become a requirement within app stores. The age-checking tech, the group said last October, is &#8220;<a href=\"https:\/\/ccianet.org\/articles\/app-store-age-verification-popular-in-principle-unworkable-in-practice\/\" rel=\"nofollow noopener\" target=\"_blank\">unworkable in practice<\/a>.&#8221;<\/p>\n<p>But as long as ChatGPT can deliver sexy banter, and do so alongside ads, OpenAI needs to try to make age prediction tech work. 
\u00ae<\/p>\n","protected":false},"excerpt":{"rendered":"OpenAI says it has begun deploying an age prediction model to determine whether ChatGPT users are old enough&hellip;\n","protected":false},"author":2,"featured_media":382033,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,86,56,54,55],"class_list":{"0":"post-382032","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology","12":"tag-uk","13":"tag-united-kingdom","14":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/382032","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=382032"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/382032\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/382033"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=382032"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=382032"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=382032"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}