AI cybersecurity has become a formal competitive front between OpenAI and Anthropic: OpenAI is finalizing an advanced security product for a limited partner release, while Anthropic is running a tightly controlled effort called Project Glasswing aimed at finding critical software vulnerabilities before attackers do.
Artificial intelligence has moved from a tool that helps defenders understand threats to one that can independently find and exploit vulnerabilities. OpenAI and Anthropic are now building directly into that space, with implications for governments, enterprises, and the millions of software systems that underpin global financial infrastructure.
OpenAI is finalizing an AI cybersecurity product with advanced capabilities and plans to release it initially to a limited partner group, according to Tech Startups. Anthropic is running a parallel internal effort called Project Glasswing, a tightly controlled initiative designed to hunt down critical software vulnerabilities before malicious actors find them.
The dual announcements mark a shift in how the two leading AI labs are positioning themselves. Both are moving from general-purpose AI into security-specific products with direct offensive and defensive capability. The question is no longer what AI can do in cybersecurity. It is who controls it and who is accountable when it goes wrong.
Anthropic has already demonstrated the scale of what AI security tools can achieve. As crypto.news reported, the company limited access to its Claude Mythos Preview model after early testing found it could uncover thousands of critical vulnerabilities across widely used software environments, including a 27-year-old bug in OpenBSD and a 16-year-old remote execution flaw in FreeBSD. Anthropic said: “Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely.”
Industry data cited by Anthropic shows a 72% year-on-year increase in AI-powered cyberattacks, with 87% of global organizations reporting exposure to AI-enabled incidents in 2025. Project Glasswing is being positioned as Anthropic’s controlled effort to stay ahead of that curve.
The deeper issue for regulators and the industry is that the same AI tool that finds a vulnerability defensively can find it offensively. As crypto.news noted, a joint study by Anthropic and MATS Fellows found that Claude Sonnet and GPT-5 could produce simulated exploits worth $4.6 million against Ethereum smart contracts in testing, and uncovered two novel zero-day vulnerabilities across nearly 3,000 recently deployed contracts.
That dual-use reality is what makes the controlled rollout strategies both companies are pursuing essential. Whether limited access is enough to prevent proliferation, however, is a question neither lab has fully answered.