OpenAI has begun rolling out an age prediction system designed to protect teenagers using ChatGPT and reduce the risk of exposing young users to harmful material.
The company says the feature works in the background, scanning account-level and behavioural signals to judge whether an account is likely run by someone under 18.
When the system judges that an account likely belongs to a minor, content limits are switched on automatically. The idea, OpenAI argues, is to give teens a safer version of the product without forcing every user through upfront identity checks.
The rollout responds to complaints the company has received over how its tools affect children. OpenAI has been warned by regulators over past incidents involving young users, and this change marks a shift from reactive fixes to a more systemic safeguard.
According to OpenAI, the age prediction system looks at factors such as how long an account has existed, typical usage times, long-term patterns, and the age a user claims when signing up.
No single signal decides the result. Instead, they are weighed together to reach a probability-based judgement.
When an account is flagged as under 18, ChatGPT applies restrictions around sensitive areas. These include graphic violence, depictions of self-harm, sexual or violent role play, risky online challenges, and content that promotes extreme body ideals or unhealthy dieting. If the system is unsure about someone’s age, it defaults to the safer setting.
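OpenAI has not published how its classifier works, but the behaviour described above can be sketched in broad strokes: several normalised signals are weighed together into a single probability-like score, no one signal is decisive, and ambiguous scores fall back to the teen experience. The signal names, weights, and thresholds below are purely hypothetical, chosen to illustrate the decision logic rather than to mirror OpenAI's actual model.

```python
# Illustrative sketch only: OpenAI has not disclosed its model. All signal
# names, weights, and thresholds here are hypothetical.

def predict_minor_probability(signals: dict[str, float],
                              weights: dict[str, float]) -> float:
    """Combine normalised signals (each in [0, 1]) into a single
    probability-like score via a weighted average."""
    total_weight = sum(weights.values())
    score = sum(weights[name] * signals.get(name, 0.0) for name in weights)
    return score / total_weight

def apply_experience(probability: float,
                     minor_threshold: float = 0.7,
                     adult_threshold: float = 0.3) -> str:
    """No single signal decides the result; when the combined score
    is ambiguous, default to the safer (teen) experience."""
    if probability >= minor_threshold:
        return "teen"   # restrictions switched on
    if probability <= adult_threshold:
        return "adult"  # full experience
    return "teen"       # uncertain -> safer setting by default

# Hypothetical example account.
signals = {
    "account_age": 0.2,           # newer accounts score higher
    "usage_hours": 0.8,           # e.g. heavy after-school usage
    "stated_age_under_18": 0.0,   # user claimed to be an adult at sign-up
    "long_term_patterns": 0.6,
}
weights = {
    "account_age": 1.0,
    "usage_hours": 1.5,
    "stated_age_under_18": 2.0,
    "long_term_patterns": 1.0,
}

p = predict_minor_probability(signals, weights)
print(f"score={p:.2f}, experience={apply_experience(p)}")
```

In this toy run the score lands between the two thresholds, so the account gets the teen experience by default, matching the article's point that uncertainty resolves to the safer setting.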
OpenAI said in its announcement: “We’re rolling out age prediction on ChatGPT consumer plans to help determine whether an account likely belongs to someone under 18, so the right experience and safeguards can be applied to teens.”
Adults who are wrongly placed into the under-18 experience are not locked out permanently. They can verify their age through Persona, a third-party identity service, by submitting a live selfie and, in some regions, a government-issued ID. OpenAI maintains that it does not receive copies of these documents, only confirmation of age.
The company is also leaning on parental controls, allowing guardians to set usage limits, restrict features such as memory, and receive alerts if signs of serious distress appear. These tools, OpenAI says, are optional but designed to give families more oversight.
The rollout is already live in many regions, with Europe scheduled to follow in the coming weeks due to regulatory requirements. OpenAI says it will monitor how the system performs and adjust it over time, refining which signals are most important and closing gaps where users try to bypass safeguards.


