OpenAI Brings Age Prediction To ChatGPT Consumer Accounts
OpenAI has started rolling out an age prediction system on ChatGPT consumer plans as it tries to better identify under-18 users and automatically apply stronger safety protections amid rising regulatory pressure and concern about AI’s impact on young people.
Why OpenAI Is Introducing Age Prediction Now
On 20 January 2026, OpenAI confirmed it had begun deploying age prediction across ChatGPT consumer accounts, marking a significant change in how the platform determines whether users are likely to be minors. The move builds on work the company first outlined in September 2025, when it publicly acknowledged that existing age-declaration systems were insufficient on their own.
Several factors have converged to make this rollout unavoidable. For example, regulators in the UK, EU, and US have been tightening expectations around child safety online, with a growing emphasis on proactive risk mitigation rather than self-reported age alone. In the UK, the Online Safety Act places explicit duties on platforms to prevent children from encountering harmful content, while in the EU, the Digital Services Act and related guidance are pushing platforms towards more robust age assurance mechanisms. OpenAI has also confirmed that age prediction will roll out in the EU “in the coming weeks” to reflect regional legal requirements.
Reputational pressure has been another driver. Over the past two years, OpenAI and other AI providers have faced criticism for how conversational AI interacts with teenagers, including high-profile reporting on inappropriate content exposure and edge-case safety failures. OpenAI itself has acknowledged these concerns, stating that “young people deserve technology that both expands opportunity and protects their well-being.”
At the same time, OpenAI argues that improving age detection allows it to loosen unnecessary restrictions on adults. As the company puts it, more reliable age signals “enable us to treat adults like adults and use our tools in the way that they want, within the bounds of safety,” rather than applying broad safety constraints to everyone by default.
How Age Prediction Works in Practice
Rather than relying on a single data point, OpenAI’s system uses an age prediction model designed to estimate whether an account likely belongs to someone under 18. According to the company, the model analyses a combination of behavioural and account-level signals over time.
These signals include how long an account has existed, typical times of day when it is active, usage patterns across sessions, and the age a user has stated in their account settings. None of these factors alone is treated as definitive. Instead, the model weighs them together to make a probabilistic judgement about whether an account is more likely to belong to a minor.
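To make the idea of weighing signals concrete, here is a minimal sketch assuming a simple additive scoring scheme. The signal types (account age, active hours, usage volume, stated age) come from OpenAI's description, but the weights, cut-offs, and names below are invented for illustration and are not OpenAI's actual model.

```python
from dataclasses import dataclass


@dataclass
class AccountSignals:
    """Hypothetical account-level signals of the kind OpenAI describes."""
    account_age_days: int        # how long the account has existed
    active_hours: list[int]      # typical hours of day (0-23) when the account is used
    sessions_per_week: float     # rough usage pattern across sessions
    stated_age: int | None       # age declared in account settings, if any


def estimate_under_18_probability(s: AccountSignals) -> float:
    """Toy scoring function: weigh several weak signals into one probability.

    The weights and cut-offs are invented purely for illustration; OpenAI has
    not published how its model actually scores or combines these signals.
    """
    score = 0.0
    if s.stated_age is not None and s.stated_age < 18:
        score += 0.6                                   # self-declared minor is a strong signal
    if s.account_age_days < 90:
        score += 0.1                                   # newer accounts carry more uncertainty
    if s.active_hours:
        after_school = sum(1 for h in s.active_hours if 15 <= h <= 22)
        if after_school / len(s.active_hours) > 0.7:   # activity clustered after school hours
            score += 0.15
    if s.sessions_per_week > 20:
        score += 0.05
    return min(score, 1.0)                             # no single factor is treated as definitive
```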
What Happens If The System Can’t Really Tell?
OpenAI has been clear that when its model is uncertain about a person’s age, it errs on the side of caution. Where the system is not confident, or where available information is incomplete, it defaults to a safer under-18 experience. The company says this approach reflects established research into adolescent development, including differences in impulse control, risk perception, and susceptibility to peer influence.
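Continuing the sketch above, the "err on the side of caution" behaviour can be pictured as a decision rule in which only a confidently adult estimate unlocks the adult experience. The cut-off value below is invented for illustration only.

```python
def choose_experience(p_under_18: float) -> str:
    """Illustrative decision rule with an invented cut-off: anything short of a
    confidently adult estimate falls back to the safer under-18 defaults."""
    if p_under_18 < 0.2:          # confidently adult
        return "adult_defaults"
    return "under_18_defaults"    # likely minor or uncertain: err on the side of caution
```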
The rollout is also being used as a live learning exercise. For example, OpenAI has said that deploying age prediction at scale helps it understand which signals are most reliable, allowing the model to be refined over time as patterns become clearer.
What If It Makes A Mistake?
Recognising that automated systems can make mistakes, OpenAI says it has built in a reversal mechanism for adults who are incorrectly classified as under 18. Users can confirm their age through a selfie-based check using Persona, a third-party identity verification service already used by many online platforms.
The process is designed to be quick and optional. Users can check whether additional safeguards have been applied to their account and initiate age confirmation at any time via Settings > Account. If verification is successful, full adult access is restored.
OpenAI describes Persona as a secure service and positions this step as a safeguard against long-term misclassification, rather than a requirement for general ChatGPT use.
What Protections Are Automatically Applied?
When an account is identified as likely belonging to someone under 18, ChatGPT applies a stricter set of content rules that go beyond the baseline safety filters already in place for all users.
For example, according to OpenAI, the under-18 experience is designed to reduce exposure to specific categories of sensitive content, including graphic violence or gory material; sexual, romantic, or violent role play; depictions of self-harm; and viral challenges that could encourage risky behaviour. Content promoting extreme beauty standards, unhealthy dieting, or body shaming is also restricted.
These measures build on existing teen protections applied to users who self-declare as under 18 at sign-up. The key difference is that age prediction allows these safeguards to be applied even when a user has not disclosed their age accurately.
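As a rough illustration of how such a stricter rule set might layer on top of baseline filters, the sketch below uses invented category names and an invented configuration shape; it is not drawn from OpenAI's actual moderation stack.

```python
# Invented category names, for illustration only.
BASELINE_BLOCKED = {"illegal_content", "credible_threats"}

UNDER_18_ADDITIONAL = {
    "graphic_violence_or_gore",
    "sexual_romantic_or_violent_roleplay",
    "self_harm_depictions",
    "risky_viral_challenges",
    "extreme_beauty_standards_or_body_shaming",
}


def blocked_categories(likely_under_18: bool) -> set[str]:
    """Accounts flagged as likely under 18 get the baseline filters plus the stricter set."""
    return BASELINE_BLOCKED | UNDER_18_ADDITIONAL if likely_under_18 else BASELINE_BLOCKED
```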
Guided By Expert Input
OpenAI has been keen to stress that these restrictions are guided by expert input and academic literature on child development, rather than its own ad-hoc policy decisions. The company has also highlighted parental controls as a complementary layer, allowing parents to set quiet hours, disable features such as memory or model training, and receive notifications if signs of acute distress are detected.
Limitations and Trade-Offs
Despite its ambitions, OpenAI has been quite candid about the limits of age prediction. Accurately inferring age from behavioural signals is inherently difficult, particularly when adult and teenage usage patterns can overlap, and false positives remain a risk, especially for adults with irregular usage habits or newer accounts.
Privacy concerns are another potential flashpoint here. For example, while OpenAI says it relies on account-level and behavioural data already generated through normal use, critics argue that increased behavioural inference raises questions about transparency and proportionality. Even when data is not new, the way it is interpreted can feel intrusive to users.
The requirement to submit a selfie for age correction also introduces friction. Although optional, it effectively asks some adults to undergo identity verification to regain full access, a trade-off that may not sit comfortably with all users.
OpenAI has framed these compromises as necessary. For example, in a blog post back in September 2025, the company stated that “when some of our principles are in conflict, we prioritise teen safety ahead of privacy and freedom,” while committing to explain its reasoning publicly.
The Wider Debate on Age Assurance and Platform Responsibility
OpenAI’s move is happening in the middle of (and in response to) an ongoing debate about age assurance across the internet. Governments increasingly expect platforms to move beyond self-declared ages, yet there is no consensus on a perfect technical solution that balances accuracy, privacy, and usability.
In the UK, regulators have signalled that probabilistic age estimation may be acceptable when deployed responsibly and proportionately. In the EU, scrutiny is even sharper, with data protection authorities closely watching how behavioural inference models align with GDPR principles.
Somewhere In The Middle
It seems that OpenAI’s approach sits somewhere between hard identity checks and minimal self-reporting. It avoids mandatory ID verification for all users, while still asserting that platforms have a duty to intervene when there is a reasonable likelihood that a user is a child.
Critics argue that this shifts too much responsibility onto automated systems that remain opaque to users. Supporters counter that doing nothing is no longer viable given the scale and influence of generative AI tools.
What is clear is that age prediction on ChatGPT is unlikely to be the final word. For example, OpenAI has said it will “closely track rollout and use those signals to guide ongoing improvements,” while continuing dialogue with organisations such as the American Psychological Association, ConnectSafely, and the Global Physician Network. The company has positioned this release as an important milestone rather than a finished solution, signalling that age assurance will remain an evolving part of how AI platforms are expected to operate.
Are Other AI Platforms Taking a Similar Approach?
OpenAI’s move towards age prediction appears to be part of a wider industry trend rather than an isolated decision. In fact, several major AI and consumer technology platforms are now experimenting with ways to identify younger users more reliably and adapt product experiences accordingly, although the technical and policy approaches differ.
Meta has taken one of the closest parallel paths. In January, the company confirmed it had paused teenagers’ access to its AI-powered characters across Instagram and other platforms while it redesigns its under-18 experience. Meta has said it uses a mix of declared age and its own age estimation technology to identify teen users, applying stricter safeguards and parental controls where appropriate. While Meta’s AI features differ from ChatGPT in purpose and scope, the underlying logic is similar: if a system believes a user may be under 18, additional protections are applied by default rather than relying solely on self-reported age.
Anthropic has adopted a more restrictive position. Its Claude AI assistant is marketed as an 18-plus product, with users required to confirm they meet the minimum age during account creation. Anthropic has stated that accounts identified as belonging to minors may be disabled, including where app store data suggests a user is under 18. This approach avoids probabilistic age prediction across behavioural signals, instead enforcing a clear age threshold with limited flexibility.
Microsoft’s Copilot appears to be following a more traditional tiered-access model. For example, Microsoft allows use by people aged 13 to 18 in many regions, subject to parental controls and account supervision, while reserving full functionality for adult accounts. Age is primarily determined through Microsoft account information rather than inferred behaviour, reflecting a model already familiar from Xbox and other Microsoft services.
Google’s Gemini apps seem to rely heavily on supervised accounts for younger users. Access for children under 13 must be enabled by a parent through Google’s Family Link system, which allows ongoing control over features and usage. While this does not involve behavioural age prediction, it still treats age as a core safety signal that shapes how the AI can be used.
Among more open-ended chatbot platforms, Character.AI has moved quickly towards an age-aware model. In late 2025, the company announced restrictions on under-18 users’ access to open-ended chat, alongside the development of a separate teen experience. Character.AI has also introduced an age assurance process that allows users to verify their age via a selfie check when the system believes an account may belong to a minor, closely mirroring OpenAI’s use of Persona for age confirmation.
Taken together, these approaches suggest a broad industry acceptance that self-declared age alone is no longer sufficient. Platforms are experimenting with a spectrum of solutions, ranging from hard age limits through to probabilistic inference and supervised accounts, as they respond to mounting regulatory expectations and public scrutiny around child safety.
What Does This Mean For Your Business?
OpenAI’s rollout of age prediction shows an acknowledgement that general purpose AI tools are now expected to take a more active role in protecting younger users, rather than relying on self-declared age and broad safety rules. The company has positioned this as a pragmatic response to regulatory pressure, public concern, and its own experience of where existing safeguards fall short. It could also be seen as an explicit acceptance that there is no clean or perfect solution, only trade-offs between safety, privacy, and usability that platforms now have to make openly.
For UK businesses, this change is not just a consumer safety issue. Many organisations already rely on ChatGPT for research, drafting, customer support, and internal productivity, and age-based restrictions could affect how accounts behave in practice, particularly where shared logins, training environments, or younger staff are involved. More broadly, age assurance, behavioural inference, and defaulting to safer modes are becoming standard expectations for digital services, not edge cases. That has implications for compliance planning, data governance, and how businesses assess the risk profile of the tools they embed into day-to-day operations.
For regulators, parents, educators, and AI providers alike, OpenAI’s approach highlights a general move toward platform responsibility. Age prediction is being treated less as a single technical feature and more as an ongoing governance challenge that will need constant adjustment, oversight, and explanation. The outcome of this rollout will likely influence how future online safety rules are enforced in practice, and how far probabilistic systems are trusted to make judgements about users at scale. What happens next will matter well beyond ChatGPT.