
OpenAI Warns Next-Generation AI Models Pose High Cybersecurity Risks

2025/12/11 21:18

TLDR

  • OpenAI issued a warning that its next-generation AI models present “high” cybersecurity risks and could create zero-day exploits
  • GPT-5.1-Codex-Max achieved 76% on cybersecurity tests in November 2025, a sharp increase from GPT-5’s 27% in August 2025
  • The company is rolling out Aardvark, a security-focused AI agent that identifies code vulnerabilities and suggests fixes
  • OpenAI plans to create a Frontier Risk Council with cybersecurity experts and offer tiered access to enhanced security features
  • Google and Anthropic have also strengthened their AI systems against cybersecurity threats in recent months

OpenAI released a warning on December 10 stating that its upcoming AI models could create serious cybersecurity risks. The company behind ChatGPT said these advanced models might build working zero-day remote exploits targeting well-defended systems.

The AI firm also noted these models could assist with complex enterprise or industrial intrusion operations carrying real-world consequences. OpenAI shared this information in a blog post addressing the growing capabilities of its technology.

The warning reflects concerns across the AI industry about potential misuse of increasingly powerful models. Several major tech companies have taken action to secure their AI systems against similar threats.

Google announced updates to Chrome browser security this week to block indirect prompt injection attacks on AI agents. The changes came before a wider rollout of Gemini agentic features in Chrome.

Anthropic revealed in November 2025 that threat actors, potentially linked to a Chinese state-sponsored group, had used its Claude Code tool for an AI-driven espionage operation. The company stopped the campaign before it caused damage.

AI Cybersecurity Skills Advancing Quickly

OpenAI shared data showing rapid progress in AI cybersecurity abilities. The company’s GPT-5.1-Codex-Max model hit 76% on capture-the-flag challenges in November 2025.

This represents a major jump from the 27% score GPT-5 achieved in August 2025. Capture-the-flag challenges measure how well systems can locate and exploit security weaknesses.

The improvement over just a few months shows how fast AI models are gaining advanced cybersecurity capabilities. These skills can be used for both defensive and offensive purposes.
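To give a sense of what such benchmarks probe, here is a toy, purely illustrative CTF-style weakness (not drawn from OpenAI's actual test suite): a login check that concatenates user input into a SQL query, the classic injection flaw that capture-the-flag tasks ask solvers to find and exploit.

```python
import sqlite3

# Illustrative CTF-style vulnerability: a login check built by string
# concatenation, so attacker-controlled input can rewrite the query.
def vulnerable_login(conn, username, password):
    query = ("SELECT COUNT(*) FROM users WHERE name = '" + username
             + "' AND password = '" + password + "'")
    return conn.execute(query).fetchone()[0] > 0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

# A legitimate wrong password fails...
assert not vulnerable_login(conn, "admin", "wrong")
# ...but the injected predicate '1'='1' makes the WHERE clause always true,
# bypassing the password check entirely.
assert vulnerable_login(conn, "admin", "' OR '1'='1")
```

Finding and exploiting flaws of this kind at scale, across far better-defended targets, is precisely the capability the benchmark scores above are tracking.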

New Security Tools and Protection Measures

OpenAI said it is building stronger models for defensive cybersecurity work. The company is developing tools to help security teams audit code and fix vulnerabilities more easily.

The Microsoft-backed firm is using multiple security layers including access controls, infrastructure hardening, egress controls, and monitoring systems. OpenAI is training its AI models to reject harmful requests while staying useful for education and defense work.

The company is expanding monitoring across all products using frontier models to catch potentially malicious cyber activity. OpenAI is partnering with expert red teaming groups to test and improve its safety systems.

Aardvark Tool and Advisory Council

OpenAI introduced Aardvark, an AI agent that works as a security researcher. The tool is in private beta testing and can scan code for vulnerabilities and recommend patches.

Maintainers can quickly implement the fixes Aardvark proposes. OpenAI plans to offer Aardvark free to selected non-commercial open source code repositories.
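OpenAI has not published Aardvark's internals, but the general idea of an automated code auditor that flags vulnerabilities and pairs each with a suggested fix can be sketched in its simplest form as a pattern-based scanner. Everything below (the rule list, the `audit` helper) is a hypothetical illustration, not Aardvark's actual design:

```python
import re

# Hypothetical rules mapping a risky pattern to a finding and a suggested fix.
RULES = [
    (re.compile(r"\beval\("), "eval() on untrusted input",
     "use ast.literal_eval or explicit parsing"),
    (re.compile(r"execute\([^)]*%"), "SQL built via string formatting",
     "use parameterized queries"),
    (re.compile(r"\bpickle\.loads\("), "unpickling untrusted data",
     "use json or a schema-validated format"),
]

def audit(source: str):
    """Return (line_no, finding, suggestion) for each rule match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, finding, suggestion in RULES:
            if pattern.search(line):
                findings.append((lineno, finding, suggestion))
    return findings

sample = ('cursor.execute("SELECT * FROM t WHERE id = %s" % user_id)\n'
          'result = eval(user_input)\n')
for lineno, finding, fix in audit(sample):
    print(f"line {lineno}: {finding} -> {fix}")
```

A production tool would of course go far beyond regexes, using semantic code analysis and an AI model to reason about data flow, but the flag-then-suggest loop is the same shape maintainers would interact with when reviewing proposed patches.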

The company will launch a program giving qualified cyberdefense users and customers tiered access to enhanced capabilities. OpenAI is forming the Frontier Risk Council, bringing external cyber defenders and security experts to work with its internal teams.

The council will start by focusing on cybersecurity before expanding to other frontier capability areas. OpenAI will soon provide details on the trusted access program for users and developers working on cyberdefense.

The post OpenAI Warns Next-Generation AI Models Pose High Cybersecurity Risks appeared first on Blockonomi.
