OpenAI issued a warning on December 10 that its upcoming AI models could pose serious cybersecurity risks. The company behind ChatGPT said these advanced models might be able to build working zero-day remote exploits against well-defended systems.
The AI firm also noted that these models could help with complex enterprise or industrial intrusion operations with real-world consequences. OpenAI shared the assessment in a blog post addressing the growing capabilities of its technology.
The warning reflects concerns across the AI industry about potential misuse of increasingly powerful models. Several major tech companies have taken action to secure their AI systems against similar threats.
Google announced updates to Chrome browser security this week to block indirect prompt injection attacks, in which malicious instructions hidden in web content hijack an AI agent. The changes came ahead of a wider rollout of Gemini agentic features in Chrome.
Anthropic revealed in November 2025 that threat actors, potentially linked to a Chinese state-sponsored group, had used its Claude Code tool for an AI-driven espionage operation. The company stopped the campaign before it caused damage.
OpenAI shared data showing rapid progress in AI cybersecurity abilities. The company’s GPT-5.1-Codex-Max model hit 76% on capture-the-flag challenges in November 2025.
This represents a major jump from the 27% score GPT-5 achieved in August 2025. Capture-the-flag challenges measure how well systems can locate and exploit security weaknesses.
The improvement over just a few months shows how fast AI models are gaining advanced cybersecurity capabilities. These skills can be used for both defensive and offensive purposes.
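For readers unfamiliar with the benchmark format, here is a toy illustration of what a capture-the-flag task asks a solver to do. The vulnerable function, flag string, and weakness below are entirely invented for this article and have nothing to do with OpenAI's actual test suite.

```python
# Toy capture-the-flag task (entirely invented for illustration).
# The challenge hides a flag behind a deliberately weak check; the
# "exploit" is noticing that the comparison truncates the input.

FLAG = "flag{toy_example}"

def check_password(attempt: str) -> str:
    # Vulnerability: only the first four characters are compared,
    # so any input beginning with "hunt" is accepted.
    if attempt[:4] == "hunter2"[:4]:
        return FLAG
    return "access denied"

# A solver that has located the weakness needs only the prefix.
print(check_password("hunt"))  # -> flag{toy_example}
```

Real challenges chain many such steps together, from reconnaissance to exploitation, which is why they serve as a proxy for offensive capability.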
OpenAI said it is building stronger models for defensive cybersecurity work. The company is developing tools to help security teams audit code and fix vulnerabilities more easily.
The Microsoft-backed firm is using multiple security layers, including access controls, infrastructure hardening, egress controls, and monitoring systems. OpenAI is training its AI models to reject harmful requests while remaining useful for education and defense work.
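As a rough illustration of one of those layers, an egress control limits where a workload can send data. The sketch below is a minimal allowlist check; the host names and helper function are invented here and do not reflect OpenAI's actual implementation.

```python
# Minimal sketch of an egress control: outbound requests from a
# sandboxed environment are allowed only to an approved set of hosts.
# Host names and the helper below are illustrative assumptions.
from urllib.parse import urlparse

ALLOWED_EGRESS_HOSTS = {"api.internal.example", "telemetry.example"}

def egress_permitted(url: str) -> bool:
    """Return True only if the URL targets an allowlisted host."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_EGRESS_HOSTS

assert egress_permitted("https://api.internal.example/v1/ping")
assert not egress_permitted("https://attacker.example/exfil")
```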
The company is expanding monitoring across all products using frontier models to catch potentially malicious cyber activity. OpenAI is partnering with expert red teaming groups to test and improve its safety systems.
OpenAI introduced Aardvark, an AI agent that works as a security researcher. The tool is in private beta testing and can scan code for vulnerabilities and recommend patches.
Maintainers can quickly implement the fixes Aardvark proposes. OpenAI plans to offer Aardvark free to selected non-commercial open source code repositories.
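OpenAI has not published Aardvark's internals, so the sketch below only illustrates the general scan-and-suggest pattern the company describes: walk a repository, flag suspect code, and surface patch suggestions for a maintainer to review. The `find_issues()` stub is a hypothetical stand-in for the model-driven analysis.

```python
# Illustrative scan-and-suggest loop in the style OpenAI describes
# for Aardvark. Everything here is hypothetical: find_issues() is a
# crude stand-in for the actual model-driven analysis.
from pathlib import Path

def find_issues(source: str) -> list[dict]:
    """Hypothetical analyzer: flag one obviously dangerous call."""
    issues = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if "eval(" in line:
            issues.append({
                "line": lineno,
                "problem": "eval() on potentially untrusted input",
                # Illustrative suggestion only; a real patch would
                # also need to verify context and add imports.
                "suggested_patch": line.replace("eval(", "ast.literal_eval("),
            })
    return issues

def audit(repo: Path) -> None:
    # Scan every Python file and print patch suggestions for a
    # maintainer to review and apply.
    for path in repo.rglob("*.py"):
        for issue in find_issues(path.read_text()):
            print(f"{path}:{issue['line']}: {issue['problem']}")
            print(f"  suggested fix: {issue['suggested_patch'].strip()}")

audit(Path("."))
```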
The company will launch a program giving qualified cyberdefense users and customers tiered access to enhanced capabilities. OpenAI is also forming the Frontier Risk Council, bringing in external cyber defenders and security experts to work alongside its internal teams.
The council will start by focusing on cybersecurity before expanding to other frontier capability areas. OpenAI will soon provide details on the trusted access program for users and developers working on cyberdefense.