
OpenAI warns its next-gen AI models could become hacker tools


The company behind ChatGPT issued a stark warning Wednesday about potential dangers lurking in its next wave of artificial intelligence systems, saying they could present serious cybersecurity threats.

OpenAI stated its future AI models might be capable of creating functional zero-day exploits targeting heavily protected computer systems. The technology could also help carry out sophisticated attacks on businesses or industrial facilities designed to cause real-world damage.

Things are moving quickly. In its blog post, OpenAI said performance on capture-the-flag security challenges jumped from 27% with GPT-5 in August 2025 to 76% with GPT-5.1-Codex-Max just three months later, in November 2025.

OpenAI now assumes each new model it builds could hit what it calls “high” levels of cybersecurity capability. That means systems that can create working exploits for previously unknown vulnerabilities in well-protected networks, or help with complicated intrusion campaigns targeting critical infrastructure.

The Microsoft-backed firm said it’s investing in making its models better at defensive security work. The company is building tools to help security teams check code for problems and fix security holes. OpenAI wants to give defenders an edge since they’re usually outnumbered and short on resources.

Here’s where it gets tricky. Defensive and offensive cybersecurity work use the same basic knowledge and methods. What helps defenders could just as easily help attackers. OpenAI says it can’t rely on one protective measure. It needs layers of security controls working together.

The company is using access restrictions, stronger infrastructure security, controls on information flow, and constant monitoring. It’s also training models to refuse requests that could enable cyber attacks while keeping them useful for legitimate security work and education.

Detection systems watch for suspicious activity across products using advanced models. When something looks dangerous, the system blocks results, switches to a weaker model, or flags it for human review.
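The tiered response described above can be sketched in a few lines. This is a hypothetical illustration, not OpenAI's actual system: the function name, thresholds, and risk-score interface are all assumptions made for the example.

```python
# Illustrative sketch of a tiered misuse-response pipeline: a detector
# produces a risk score, and the router blocks, downgrades to a weaker
# model, flags for human review, or allows. Thresholds are invented.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "block", "downgrade", "review", or "allow"
    reason: str

def route(risk_score: float, block_at: float = 0.9,
          downgrade_at: float = 0.7, review_at: float = 0.5) -> Decision:
    """Map a detector's risk score in [0, 1] to a response tier."""
    if risk_score >= block_at:
        return Decision("block", "high-confidence misuse signal")
    if risk_score >= downgrade_at:
        return Decision("downgrade", "serve a less capable model")
    if risk_score >= review_at:
        return Decision("review", "flag for human review")
    return Decision("allow", "no misuse signal")

print(route(0.95).action)  # block
print(route(0.75).action)  # downgrade
print(route(0.20).action)  # allow
```

The key design point is that the system degrades gracefully: a borderline request gets a weaker model or human eyes rather than a hard refusal.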

Testing the limits

OpenAI works with specialized security testing groups that try breaking through all its defenses. They simulate how a determined attacker with serious resources might operate. This helps find weak spots before real threats do.

The cybersecurity risks from AI worry people across the industry. As Cryptopolitan previously reported, hackers already use AI to improve their attacks.

The firm plans a program that gives qualified users working on cybersecurity defense special access to enhanced capabilities in its newest models. OpenAI is still working out which features can be widely available and which need tighter restrictions.

Then there’s Aardvark. This security tool in private testing helps developers and security teams find and fix vulnerabilities at scale. It scans code for weaknesses and suggests fixes. The system already discovered new vulnerabilities in open-source software. OpenAI plans to put significant resources into strengthening the broader security ecosystem. That includes offering free coverage to some non-commercial open source projects.
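A "scan code, flag weaknesses, suggest fixes" loop of the kind attributed to Aardvark can be illustrated in miniature. This toy sketch is not Aardvark and bears no resemblance to how an AI-driven scanner actually works; it just flags two well-known dangerous Python calls via the standard `ast` module, with invented rule names and suggestions.

```python
# Toy scan-and-suggest loop: walk a Python AST and report uses of a few
# risky builtins along with a remediation hint. Real AI-based tools
# reason about code far more deeply; this only shows the loop's shape.
import ast

RISKY_CALLS = {
    "eval": "avoid eval(); parse the input explicitly",
    "exec": "avoid exec(); refactor to direct function calls",
}

def scan(source: str) -> list[tuple[int, str, str]]:
    """Return (line_number, call_name, suggestion) for each finding."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(
                    (node.lineno, node.func.id, RISKY_CALLS[node.func.id]))
    return findings

sample = "x = eval(user_input)\nprint(x)\n"
for line, call, fix in scan(sample):
    print(f"line {line}: {call}() -- {fix}")
```

Even this trivial version shows why the same capability cuts both ways: a tool that can locate a weakness for a defender has also located it for an attacker.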

OpenAI will create the Frontier Risk Council. This brings together experienced cybersecurity defenders and practitioners. The group starts with cybersecurity but will expand to other areas. Council members help determine boundaries between useful capabilities and potential misuse.

Security remains a challenge

The company works with other leading AI companies through the Frontier Model Forum. This nonprofit develops shared understanding of threats and best practices. OpenAI thinks security risks from advanced AI could come from any major AI system in the industry.

Recent research showed AI agents can discover zero-day vulnerabilities worth millions in blockchain smart contracts. This highlights how these advancing capabilities cut both ways.

OpenAI has worked to strengthen its own security measures, but the company has faced problems of its own, having dealt with multiple security breaches in the past. This shows how hard it is to protect AI systems and infrastructure.

The company says this is ongoing work. The goal is giving defenders advantages and strengthening security of critical infrastructure across the technology ecosystem.


Source: https://www.cryptopolitan.com/openai-warns-next-gen-ai-models-hacker-tools/

