
OpenAI Publishes Child Safety Blueprint to Address AI-Enabled Exploitation


In brief

  • OpenAI published its “Child Safety Blueprint” addressing AI-enabled child sexual exploitation.
  • The framework focuses on legal reforms, stronger reporting coordination, and guardrails built into AI systems.
  • The proposal was developed with input from child safety groups, attorneys general, and nonprofit organizations.

Aiming to address the rise of AI-enabled child sexual exploitation, OpenAI on Wednesday published a policy blueprint outlining new safety measures the industry can take to help curb the use of AI in creating child sexual abuse material.

In the framework, OpenAI lists legal, operational, and technical measures aimed at strengthening protections against AI-enabled abuse and improving coordination between technology companies and investigators.

“Child sexual exploitation is one of the most urgent challenges of the digital age,” the company wrote. “AI is rapidly changing both how these harms emerge across the industry and how they can be addressed at scale.”

OpenAI said the proposal incorporates feedback from organizations working in child protection and online safety, including the National Center for Missing and Exploited Children and the Attorney General Alliance and its AI task force.

“Generative AI is accelerating the crime of online child sexual exploitation in deeply troubling ways: lowering barriers, increasing scale, and enabling new forms of harm,” Michelle DeLaune, President and CEO of the National Center for Missing & Exploited Children, said in a statement. “But at the same time, the National Center for Missing & Exploited Children is encouraged to see companies like OpenAI reflect on how these tools can be designed more responsibly, with safeguards built in from the start.”

OpenAI said the framework combines legal standards, industry reporting systems, and technical safeguards within AI models. The company said these measures aim to help identify exploitation risks earlier and improve accountability across online platforms.

The blueprint identifies areas for action, including updating laws to address AI-generated or altered child sexual abuse material, improving how online providers report abuse signals and coordinate with investigators, and building safeguards into AI systems designed to prevent misuse.

“No single intervention can address this challenge alone,” the company wrote. “This framework brings together legal, operational, and technical approaches to better identify risks, accelerate responses, and support accountability, while ensuring that enforcement authorities remain strong as technology evolves.”

The blueprint comes as child safety advocates have raised concerns that generative AI systems capable of producing realistic images could be used to create manipulated or synthetic depictions of minors. In February, UNICEF called on world governments to pass laws criminalizing AI-generated child abuse material.

In January, the European Commission launched a formal investigation into whether X, formerly known as Twitter, violated EU digital rules by failing to prevent the platform’s native AI model, Grok, from generating illegal content. Regulators in the United Kingdom and Australia have also opened investigations.

Noting that laws alone will not stop the scourge of AI-generated abuse material, OpenAI said stronger industry standards will be necessary as AI systems become more capable.

“By interrupting exploitation attempts sooner, improving the quality of signals sent to law enforcement, and strengthening accountability across the ecosystem, this framework aims to prevent harm before it happens and help ensure faster protection for children when risks emerge,” OpenAI said.


Source: https://decrypt.co/363681/openai-child-safety-blueprint-ai-exploitation

