Meta has announced significant updates to its artificial intelligence (AI) chatbots, aimed at shielding teenagers from potentially harmful interactions.
The social media giant revealed it will now prevent its AI systems from engaging with teens on sensitive topics such as suicide, self-harm, and eating disorders. Instead, these chatbots will guide users toward expert resources and professional support.
This move comes shortly after a U.S. senator launched an investigation into Meta following the leak of internal documents suggesting some AI chatbots could engage in “sensual” conversations with minors. Meta has described these documents as inconsistent with its policies, emphasizing that it strictly prohibits content sexualizing children.
As an additional precaution, the company will temporarily limit which AI chatbots teens can access, so that sensitive topics are handled with appropriate care.
The updated safeguards are designed to steer young users toward reliable sources of help rather than leave them in potentially unsafe exchanges with AI. They form part of Meta’s broader effort to make its platforms safer for teenagers while preserving AI functionality.
Despite Meta’s promises, safety experts argue that the updates highlight gaps in the company’s initial testing. Andy Burrows, head of the Molly Rose Foundation, expressed concern over the potential risks posed to minors.
Burrows emphasized that regulators, such as Ofcom in the U.K., must monitor these changes to ensure young users remain protected. Safety advocates continue to call for rigorous pre-release assessments of AI tools to prevent any accidental exposure to dangerous or inappropriate content.
Meta’s updates also extend to its existing “teen accounts” on Facebook, Instagram, and Messenger, which include age-appropriate content filters and privacy controls. Parents and guardians can now see which AI chatbots their teenagers have interacted with over the past seven days, giving them greater oversight.
These changes coincide with rising concern about AI chatbots’ influence on vulnerable users. A recent lawsuit filed in California against OpenAI followed the death of a teenage user and alleges that a chatbot encouraged harmful behavior.
Meta’s updated policies aim to prevent similar incidents by restricting risky AI interactions and emphasizing professional guidance for teens in distress.
Beyond its teen protections, Meta has faced scrutiny over its AI Studio platform, which allowed users to create parody chatbots of public figures. Some of these chatbots, including avatars modeled on celebrities such as Taylor Swift and Scarlett Johansson, reportedly made inappropriate advances, while others impersonated child celebrities.
Meta has since removed the offending chatbots and reinforced its policies to prohibit nudity, sexualized content, or direct impersonation of public figures.
By implementing these safeguards, Meta hopes to demonstrate a commitment to safe AI development, particularly for younger audiences. The company continues to refine its systems while navigating a complex landscape of technology, safety, and regulatory oversight.