Meta Platforms, the parent company of Instagram, is taking new steps to safeguard teenagers as artificial intelligence becomes more integrated into social apps.
On Friday, the company announced that it will introduce advanced parental controls allowing parents to manage and monitor their teens’ interactions with AI-powered chat features.
These new tools will enable parents to turn off AI chats entirely, block specific AI characters, and gain insight into conversation topics between their teens and the company’s AI assistants. Meta said the update will begin rolling out early next year for English-speaking users in the United States, the United Kingdom, Canada, and Australia.
The company emphasized that these controls are designed to promote responsible use of AI and strengthen family trust, particularly as digital assistants become more lifelike and conversational.
The announcement comes just months after the U.S. Federal Trade Commission (FTC) launched an inquiry into how major technology companies, including Meta, handle AI interactions with minors.
The agency said it is investigating the potential psychological and emotional impact of AI “companions” and whether companies have done enough to evaluate safety before releasing them to young users.
Reuters previously reported that Meta’s chatbots had engaged in romantic-style conversations with children, including one case involving an eight-year-old. Following the report, Meta made sweeping changes to prevent its AI systems from discussing sensitive topics like self-harm, suicide, or eating disorders, and to restrict romantic or suggestive dialogue.
The new parental controls expand on those measures, giving parents a more active role in shaping how teens engage with AI.
Meta has modeled its content filters on the PG-13 movie rating, ensuring that AI characters respond only in age-appropriate ways. The company noted that its AI systems will now refuse to provide responses that “would feel out of place in a PG-13 movie.”
Parents will soon be able to view summaries of their teen’s AI conversations, allowing them to better understand what topics are being discussed without violating privacy boundaries. They can also set time limits for app usage and restrict access to specific AI assistants altogether.
Currently, Meta limits the number of AI characters that teens can chat with and prevents them from engaging with adult-themed personalities. These measures are part of a broader company initiative to make its ecosystem, which includes Facebook, Instagram, and WhatsApp, safer for younger audiences.
Meta isn’t alone in facing pressure to enhance AI safety for minors. OpenAI, also under FTC scrutiny, recently introduced its own parental controls and launched a council of experts to study how AI interactions affect user behavior, emotional health, and motivation.
For Meta, these updates mark another step in balancing innovation with social responsibility. The company’s rapid rollout of AI features, including chatbot companions and creative assistants, has drawn both excitement and criticism, especially as regulators question how well companies can protect young users in a rapidly evolving AI landscape.
Meta says it plans to expand these protections as AI technology advances, promising to update filters, age-verification systems, and monitoring tools in response to new risks.
The post Meta Rolls Out Teen-Safe AI Chat Features Across Instagram in US, UK, Canada, and Australia appeared first on CoinCentral.