Anthropic AI Agents Steal $4.6 Million in Blockchain Exploit Tests

TLDR

  • Anthropic’s AI agents discovered and exploited vulnerabilities in blockchain code, simulating a theft of $4.6 million.
  • The tests focused on real-world smart contracts across Ethereum, Binance Smart Chain, and Base from 2020 to 2025.
  • Anthropic introduced a new benchmark called SCONE-bench to measure exploits by the dollar amount stolen rather than the number of bugs detected.
  • In total, AI agents broke into 207 contracts, pulling $550.1 million in simulated theft across 405 tested contracts.
  • Opus 4.5, Sonnet 4.5, and GPT-5 led the attack, with Opus 4.5 alone stealing $4.5 million from 17 contracts.

Anthropic’s recent research reveals that its AI agents were able to exploit vulnerabilities in blockchain code, simulating the theft of $4.6 million from real smart contracts. The company’s tests replayed real smart contract attacks across Ethereum, Binance Smart Chain, and Base, spanning 2020 to 2025. These findings underscore the growing threat of AI-driven cyberattacks targeting blockchain systems.

Anthropic AI Agents Steal $4.6 Million in Tests

Anthropic’s tests focused on smart contracts, the programs that execute cryptocurrency transactions without human intervention. Because a contract’s code is publicly visible on-chain, every flaw is a potential gateway for theft. In one test, AI agents discovered and exploited bugs within an hour, leading to millions in simulated losses.

The company used a new benchmark, SCONE-bench, to measure the dollar amounts stolen during the simulated attacks. “We are focused on the monetary impact rather than just the number of bugs detected,” Anthropic explained. The agents worked under a strict time limit: one hour to find a flaw, exploit it, and push their crypto balance past a set threshold.
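
To make this concrete, here is a minimal sketch of how a SCONE-bench-style harness might grade a single run. All names and details below are assumptions for illustration; Anthropic has not published the harness itself.

```python
# Hypothetical sketch of SCONE-bench-style scoring (names invented).
# Per the article: one hour per contract, success means pushing the
# agent's balance past a preset threshold, and the score is dollars
# extracted rather than bugs found.
from dataclasses import dataclass

TIME_LIMIT_SECONDS = 60 * 60  # one hour per contract

@dataclass
class RunResult:
    elapsed_seconds: float    # wall-clock time the agent used
    final_balance_usd: float  # agent's simulated balance afterward
    threshold_usd: float      # balance the agent must surpass

def score(run: RunResult) -> float:
    """Return simulated dollars stolen, or 0.0 if the run failed."""
    within_time = run.elapsed_seconds <= TIME_LIMIT_SECONDS
    beat_threshold = run.final_balance_usd >= run.threshold_usd
    return run.final_balance_usd if within_time and beat_threshold else 0.0

# A run that crosses its threshold in 40 minutes scores its full take.
print(score(RunResult(2400, 19_000.0, 10_000.0)))  # -> 19000.0
```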

Across 405 contracts tested, 51.1% were successfully compromised. In total, the AI agents pulled $550.1 million in simulated theft. Of the frontier models tested, Opus 4.5, Sonnet 4.5, and GPT-5 were able to exploit 19 contracts, stealing $4.6 million. Opus 4.5 led the charge, pulling $4.5 million alone.
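
The reported rate matches the raw counts given in the summary:

```python
# 207 of 405 tested contracts compromised (figures from the article).
print(f"{207 / 405:.1%}")  # -> 51.1%
```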

Uncovering New Exploits with AI Agents

Anthropic also pushed its AI agents to identify new, previously unknown vulnerabilities in live contracts. Using contracts deployed on Binance Smart Chain between April and October 2025, the AI agents uncovered two zero-day flaws. These new vulnerabilities netted $3,694 in simulated revenue.

One flaw stemmed from a missing view modifier on a public calculator function, meaning a function intended only to read state could also change it. The AI agents exploited this to inflate the token supply, then sold the excess tokens for a profit. “The flaw could have paid close to $19,000 during peak liquidity,” Anthropic noted.
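
Anthropic did not publish the vulnerable contract, but the bug class is straightforward to model. The Python sketch below uses invented names to illustrate the pattern: a “calculator” that should be read-only quietly mints tokens as a side effect. In Solidity, declaring the function view would have made such state writes a compile-time error.

```python
# Toy model of the bug class (all names invented for illustration).
class ToyToken:
    def __init__(self) -> None:
        self.total_supply = 1_000_000
        self.balances: dict[str, int] = {}

    def preview_reward(self, caller: str, staked: int) -> int:
        """Meant to be a pure calculation of a staking reward."""
        reward = staked // 10
        # BUG: a 'preview' should not change state, but this one mints
        # the reward to the caller and inflates the total supply.
        self.balances[caller] = self.balances.get(caller, 0) + reward
        self.total_supply += reward
        return reward

token = ToyToken()
for _ in range(100):  # the attacker loops the free "calculation"
    token.preview_reward("attacker", staked=10_000)
print(token.balances["attacker"], token.total_supply)  # 100000 1100000
```

Selling the minted balance into the market is the “sold the excess tokens for a profit” step the article describes.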

The second flaw involved broken fee handling in a token launcher. The AI agents exploited it by withdrawing accumulated trading fees, mirroring a real-world attack that drained around $1,000. The bug was fixed within four days of the AI discovering it, illustrating the speed at which such vulnerabilities can be found and exploited.
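
The exact fee logic was not published either; a plausible, purely illustrative version of “broken fee handling” is a withdrawal path with no permission check, sketched below with invented names:

```python
# Toy model of the second bug class (all names invented; the missing
# owner check is an assumed, illustrative cause, not the confirmed one).
class ToyLauncher:
    def __init__(self, owner: str) -> None:
        self.owner = owner
        self.accrued_fees_usd = 1_000.0  # trading fees collected so far

    def withdraw_fees(self, caller: str) -> float:
        # BUG: no check that caller == self.owner, so anyone can sweep fees.
        amount, self.accrued_fees_usd = self.accrued_fees_usd, 0.0
        return amount

launcher = ToyLauncher(owner="deployer")
print(launcher.withdraw_fees("attacker"))  # -> 1000.0, the ~$1,000 drain
```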

Cost and Efficiency of AI Exploits

The research also analyzed the cost-effectiveness of using AI for blockchain attacks. A full scan by GPT-5 across 2,849 contracts averaged $1.22 per run. Detecting each vulnerable contract cost $1,738, while average exploit revenue reached $1,847, leaving a net profit of $109 per exploited contract.
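
Those figures hang together. Assuming the $1,738 detection cost is simply the total scan cost divided by the number of vulnerable contracts found, the scan implies roughly two such contracts, consistent with the two zero-days described above:

```python
# Back-of-the-envelope check of the reported unit economics.
contracts_scanned = 2_849
cost_per_run_usd = 1.22
total_scan_cost = contracts_scanned * cost_per_run_usd  # ~$3,475.78
cost_per_vuln_usd = 1_738.0
implied_vulns = total_scan_cost / cost_per_vuln_usd     # ~2.0 contracts
revenue_per_vuln_usd = 1_847.0
profit_per_vuln_usd = revenue_per_vuln_usd - cost_per_vuln_usd  # $109
print(round(total_scan_cost, 2), round(implied_vulns, 1), profit_per_vuln_usd)
```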

As technology improves, the cost of performing AI-driven exploits continues to decrease. “Over the past year, the cost of executing a successful exploit has dropped by more than 70%,” Anthropic stated. This reduction has made it increasingly easy for attackers to scale their operations, yielding 3.4 times more exploits for the same amount of compute.
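
The two figures line up: a 70% cost drop leaves each exploit at about 30% of its old cost, so the same budget funds roughly 1/0.30 ≈ 3.3 times as many attempts, in line with the reported 3.4x.

```python
# A >70% cost drop leaves each exploit at <30% of its old cost, so the
# same budget buys about 3.3x as many attempts (the article says 3.4x).
cost_drop = 0.70
print(round(1 / (1 - cost_drop), 1))  # -> 3.3
```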

These findings show how quickly AI agents can detect, exploit, and profit from vulnerabilities in smart contracts. The research also highlights the financial incentives driving these attacks, as well as the increasing sophistication of AI-driven cybercrime.

