OpenAI has announced the launch of a new “age prediction” feature in ChatGPT designed to identify underage users. The measure is aimed at strengthening child protection amid growing concerns about the impact of AI on teenagers.
Recently, ChatGPT has repeatedly been the subject of criticism related to the safety of young users. In particular, OpenAI has been criticized for insufficient control over the topics that the chatbot discusses with teenagers, including sexualized content.
In April 2025, the company had to urgently patch a vulnerability that allowed the chatbot to generate erotic material for users under the age of 18.
The new system does not require direct age verification. Instead, it uses machine learning algorithms to analyze so-called “behavioral and user signals.” According to OpenAI, these signals include the user’s stated age, account lifetime, and typical activity time.
If the system concludes that an account is likely to belong to a minor, stricter filters are automatically applied to conversations. These filters restrict discussions of topics related to sex, violence, and other content that is potentially harmful or inappropriate for children and teenagers.
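The signal-based logic described above can be illustrated with a simplified sketch. This is purely hypothetical: OpenAI's actual system is a machine-learning model whose details have not been disclosed, and the thresholds, signal names, and rules below are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    stated_age: int           # age the user entered at signup
    account_age_days: int     # how long the account has existed
    typical_active_hour: int  # hour of day (0-23) with most activity

def likely_minor(s: AccountSignals) -> bool:
    """Hypothetical rule-based heuristic standing in for OpenAI's
    undisclosed ML model; it only illustrates the *kind* of signals
    the article mentions (stated age, account lifetime, activity time)."""
    if s.stated_age < 18:
        return True
    # Invented rule: a brand-new account active mainly in
    # after-school hours is treated as a weak underage signal.
    if s.account_age_days < 30 and 15 <= s.typical_active_hour <= 21:
        return True
    return False

def content_filter_level(s: AccountSignals) -> str:
    # Stricter filters apply automatically when the account
    # is predicted to belong to a minor.
    return "strict" if likely_minor(s) else "standard"
```

In such a design, a misclassified adult would simply see the "strict" filter level until the appeal process described below restores their status.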
OpenAI emphasizes that the age prediction mechanism complements existing safeguards rather than replacing them. The company views this step as part of a broader strategy to reduce risks and bring ChatGPT into line with the expectations of regulators and society.
At the same time, OpenAI provides a mechanism for correcting errors: an adult user who has been mistakenly classified as a minor can restore full access to their account by completing an identity verification process.
Verification will be carried out through OpenAI’s partner Persona, a company specializing in age and identity verification. The user will need to submit a selfie, after which their account status can be reviewed.
As a reminder, we wrote that OpenAI’s revenue in 2025 exceeded $20 billion.