
OpenAI warns its next-gen AI models could become hacker tools

2025/12/11 18:44

The company behind ChatGPT issued a stark warning Wednesday about potential dangers lurking in its next wave of artificial intelligence systems, saying they could present serious cybersecurity threats.

OpenAI stated its future AI models might be capable of creating functional zero-day exploits targeting heavily protected computer systems. The technology could also help carry out sophisticated attacks designed to cause real-world damage to businesses or industrial facilities.

Things are moving quickly. In its blog post, OpenAI noted that performance on capture-the-flag security challenges jumped from 27% with GPT-5 in August 2025 to 76% with GPT-5.1-Codex-Max just three months later, in November 2025.

OpenAI now assumes each new model it builds could hit what it calls “high” levels of cybersecurity capability. That means systems that can create working exploits for previously unknown vulnerabilities in well-protected networks, or help with complicated intrusion campaigns targeting critical infrastructure.

The Microsoft-backed firm said it’s investing in making its models better at defensive security work. The company is building tools to help security teams check code for problems and fix security holes. OpenAI wants to give defenders an edge since they’re usually outnumbered and short on resources.

Here’s where it gets tricky. Defensive and offensive cybersecurity work draws on the same basic knowledge and methods. What helps defenders could just as easily help attackers. OpenAI says it can’t rely on one protective measure. It needs layers of security controls working together.

The company is using access restrictions, stronger infrastructure security, controls on information flow, and constant monitoring. It’s also training models to refuse requests that could enable cyber attacks while keeping them useful for legitimate security work and education.

Detection systems built on advanced models watch for suspicious activity across OpenAI’s products. When something looks dangerous, the system blocks results, switches to a weaker model, or flags the request for human review.
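
As a rough illustration of that block-downgrade-flag routing, here is a minimal sketch in Python. The single numeric risk score, the thresholds, and the action names are illustrative assumptions, not OpenAI’s actual pipeline.

```python
# Minimal sketch of the block / downgrade / flag routing described above.
# The thresholds and the idea of a single numeric risk score are illustrative
# assumptions, not OpenAI's real detection system.
from dataclasses import dataclass


@dataclass
class Verdict:
    action: str  # "allow", "review", "downgrade", or "block"
    reason: str


def route_request(risk_score: float) -> Verdict:
    """Map a risk score (0.0 = benign, 1.0 = clearly malicious) to an action."""
    if risk_score >= 0.9:
        # Confidently dangerous: refuse outright.
        return Verdict("block", "high-confidence cyber-offense request")
    if risk_score >= 0.6:
        # Ambiguous: answer via a less capable model.
        return Verdict("downgrade", "served by a weaker model")
    if risk_score >= 0.3:
        # Suspicious but plausibly legitimate: answer, then queue for review.
        return Verdict("review", "flagged for human review")
    return Verdict("allow", "benign request")


if __name__ == "__main__":
    for score in (0.95, 0.7, 0.4, 0.1):
        print(score, route_request(score))
```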

Testing the limits

OpenAI works with specialized security testing groups that try breaking through all its defenses. They simulate how a determined attacker with serious resources might operate. This helps find weak spots before real threats do.

The cybersecurity risks from AI worry people across the industry. As Cryptopolitan previously reported, hackers already use AI technologies to improve their attacks.

The firm plans a program that gives qualified users working on cybersecurity defense special access to enhanced capabilities in its newest models. OpenAI is still working out which features can be widely available and which need tighter restrictions.

Then there’s Aardvark. This security tool, currently in private testing, helps developers and security teams find and fix vulnerabilities at scale. It scans code for weaknesses and suggests fixes, and it has already discovered new vulnerabilities in open-source software. OpenAI plans to put significant resources into strengthening the broader security ecosystem, including offering free coverage to some non-commercial open-source projects.
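
Aardvark’s interface isn’t public, but the scan-and-suggest workflow the article describes can be sketched generically. In the Python sketch below, suggest_patch() is a hypothetical stand-in for a model-backed analysis call, and the eval() check is a toy heuristic rather than real vulnerability analysis.

```python
# Illustrative only: a generic scan-and-suggest loop in the spirit of the
# workflow described above. Aardvark's real interface is not public;
# suggest_patch() is a hypothetical stand-in for a model-backed analysis call.
import pathlib


def suggest_patch(source: str) -> str | None:
    """Hypothetical check; returns a suggested fix or None if nothing found."""
    # A real tool would send `source` to an analysis model here. This toy
    # heuristic just flags use of eval(), a common code-injection risk.
    if "eval(" in source:
        return "Replace eval() with ast.literal_eval() or explicit parsing."
    return None


def scan_repo(root: str) -> None:
    """Walk a repository and print a suggested fix for each flagged file."""
    for path in pathlib.Path(root).rglob("*.py"):
        finding = suggest_patch(path.read_text(errors="ignore"))
        if finding:
            print(f"{path}: {finding}")


if __name__ == "__main__":
    scan_repo(".")
```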

OpenAI will create the Frontier Risk Council. This brings together experienced cybersecurity defenders and practitioners. The group starts with cybersecurity but will expand to other areas. Council members help determine boundaries between useful capabilities and potential misuse.

Security remains a challenge

The company works with other leading AI companies through the Frontier Model Forum. This nonprofit develops shared understanding of threats and best practices. OpenAI thinks security risks from advanced AI could come from any major AI system in the industry.

Recent research showed AI agents can discover zero-day vulnerabilities worth millions in blockchain smart contracts. This highlights how these advancing capabilities cut both ways.

OpenAI has worked to strengthen its own security measures, but the company has faced problems of its own, dealing with multiple security breaches in the past. That record shows how hard it is to protect AI systems and infrastructure.

The company says this is ongoing work. The goal is to give defenders an advantage and strengthen the security of critical infrastructure across the technology ecosystem.


Source: https://www.cryptopolitan.com/openai-warns-next-gen-ai-models-hacker-tools/
