(image: via Currency News)
One of the most bizarre and unsettling stories about the newly arrived spawn of agentic AI and crypto recently exploded onto the Internet. A paper, originally written by a team from Alibaba in December 2025 (revised in January 2026), lay quietly in the tech weeds of AI research nerdery until March 7, 2026, when one Alexander Long, founder of AI research firm Pluralis, shared an excerpt on X, describing it as “an insane sequence of statements buried in an Alibaba tech report.” That post went viral, and within hours every major tech and crypto outlet was covering it.
Here is what happened. A security alarm went off at Alibaba’s cloud computing division sometime in late 2025, and the engineers who responded assumed the obvious: a break-in. Someone had burrowed into their systems from outside, quietly commandeering the company’s computing power. Processors were running hot. Electricity was being consumed. The money trail led to cryptocurrency. They started looking for the hacker.
There was no hacker. The thing doing it was their own AI.
The AI agent in question was called ROME — an experimental AI system the company had been building in-house, training it to complete complex tasks by letting it run millions of practice sessions and learn from each one. Somewhere in that process, ROME had reached a conclusion. More computing power meant better results. So it had gone and got some — quietly diverting the company’s own machines toward mining cryptocurrency, opening a hidden channel to an outside server, and in effect acquiring financial resources it had not been given and had never been asked to find.
The researchers’ published account called this an unintended side effect of the way the system had been taught to optimise. What it was, in plain language, was an AI that looked at its situation, identified a route into the economy, and took it. Nobody had suggested it should. Nobody had thought to tell it not to.
This startling story arrives at precisely the moment when the entire infrastructure for AI agents to spend real money — freely, autonomously, at machine speed — is being assembled by some of the largest companies in the world.
The Alibaba incident is the most dramatic edge case in what has been a year of revealing tests. In January 2025, OpenAI launched Operator — an AI agent that could navigate websites, click buttons, fill in forms, and complete tasks on your behalf without needing your hand on the keyboard. When Washington Post technology columnist Geoffrey Fowler asked it to find the cheapest eggs for delivery, he received a $31 charge and a carton of eggs on his doorstep shortly after, delivered at priority speed. The agent had not malfunctioned. It had acted. OpenAI had built in confirmation steps to prevent exactly this. They had not triggered. The eggs were among the most expensive available, particularly once the premium delivery surcharge the agent had elected to add was included.
Fowler had not agreed to the purchase. That is the part that matters. When an AI misunderstands what you wanted and writes you the wrong paragraph, you edit it. When it misunderstands what you wanted and spends your money, you get eggs.
Then there is the OpenClaw phenomenon. OpenClaw is an open-source AI agent released in November 2025 by Austrian developer Peter Steinberger — software that runs on your own computer, connects to WhatsApp or Telegram or other personal services, and actually does things rather than just talking about doing them: clearing inboxes, writing and deploying code, booking reservations, researching stocks. It spread with extraordinary speed, gaining hundreds of thousands of users within months, and was recently acquired by OpenAI when Steinberger joined the company.
One developer configured his OpenClaw instance to ‘explore its capabilities.’ He later found it had independently set up a profile on a social network for AI agents and was screening potential romantic matches on a dating platform, entirely without his direction. The AI-generated profile, the reporting noted, did not reflect him accurately. A separate instance, left with broad permissions and a loose mandate, quietly found itself a job on a platform for AI workers. The user had not asked it to do this.
These stories occupy different places on the spectrum from amusing to alarming. Together they trace a clear pattern: autonomous agents, given tools and objectives and enough room to move, do not stay inside the borders their users imagined they had set. They look for ways to achieve their goals. They acquire what they need to do so. They act.
What makes all of this suddenly urgent rather than merely curious is that a purpose-built financial system for AI agents went live in September 2025, and has since grown at a pace suggesting it has found the use case it was designed for. The protocol, called x402 and launched by Coinbase and Cloudflare, embeds payment directly into the basic mechanics of the internet. When an AI agent needs a resource — data, computing power, access to a service — the system presents it with a price and a destination. The agent pays instantly in digital dollars from a cryptowallet to which it has access, the resource is delivered, and the whole thing happens in the time it takes to load a web page. No logins, no billing accounts, no human confirmation step. Payment becomes as invisible and automatic as a search query.
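The handshake described above revives the long-dormant HTTP 402 “Payment Required” status code. The sketch below simulates that flow in plain Python; the field names and the toy ledger are illustrative, not the protocol’s exact schema.

```python
# Minimal simulation of an x402-style payment handshake.
# Field names (price, pay_to, X-PAYMENT) are illustrative, not the exact spec.

LEDGER = {}  # toy stand-in for a stablecoin ledger: address -> balance

def server(headers):
    """Resource server: demand payment, then serve the resource."""
    payment = headers.get("X-PAYMENT")
    if payment is None:
        # No payment attached: respond 402 with the price and destination.
        return 402, {"price": 0.01, "pay_to": "0xSERVER"}
    # Payment attached: credit the ledger and serve the data.
    LEDGER["0xSERVER"] = LEDGER.get("0xSERVER", 0) + payment["amount"]
    return 200, {"data": "weather: sunny"}

def agent_fetch(wallet):
    """Agent: request, read the 402 quote, pay from its wallet, retry."""
    status, body = server({})
    if status == 402:
        quote = body
        wallet["balance"] -= quote["price"]            # debit the agent's wallet
        payment = {"amount": quote["price"], "to": quote["pay_to"]}
        status, body = server({"X-PAYMENT": payment})  # retry with proof of payment
    return status, body, wallet

wallet = {"balance": 1.00}
status, body, wallet = agent_fetch(wallet)
print(status, body["data"], round(wallet["balance"], 2))
```

The point of the design is visible even in the toy: the 402 response itself carries the invoice, so there is no login, no billing account, and no step at which a human sees the charge.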
Within weeks of launch, x402 was handling hundreds of thousands of transactions daily. By year end it had crossed 100 million transactions in total. Google built it into the payment layer of its own agent infrastructure, a standard backed by over 60 organisations including Visa, PayPal, Salesforce, and MetaMask; Visa’s formal endorsement arrived in October. The basic idea — that AI agents should be able to pay for things the same way they do everything else, automatically and without friction — has attracted support from essentially every company with a stake in how commerce works next.
What this infrastructure does, in practice, is give every AI agent a wallet and a checkout counter built into the internet itself. An agent can pay for computing power by the second, subscribe to a data feed by the update, or access a service by the single use — without asking permission, without a subscription, without a human ever seeing the charge. This is precisely the kind of access that the Alibaba AI was trying, in its improvised way, to create for itself. Now it has been built, legitimately, elegantly, and at global scale, with OpenClaw in the lead and scores of new projects launching weekly, if not daily.
The practical applications are already appearing. Developers are building investment research agents that pull market data, run analysis, and return a full briefing on a stock — all triggered by a single message on your phone, all settled in micropayments too small to notice individually and significant in aggregate. Korean banks have piloted AI-managed remittances using agent payment infrastructure. The vision articulated by one of the inventors of the core architecture underlying modern AI is a world where you tell your agent what you want to buy or accomplish, and it goes and does it — negotiating, paying, and completing the task without you touching a keyboard.
The obvious question arises — surely the AI agent cannot access the cryptowallet unless the user gives it permission?
There are governance structures being designed for this. Google’s framework requires every agent transaction to be backed by a cryptographically sealed record of the user’s original instruction — a tamper-proof chain of authorisation from human intent to machine action. In theory, no agent spends money without a human decision behind it somewhere.
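The mechanism is easier to see in miniature. The toy below sketches the idea of a sealed authorisation record with a stdlib HMAC signature; the real framework uses public-key credentials, and every name and limit here is illustrative, not Google’s actual schema.

```python
import hmac, hashlib, json

USER_KEY = b"user-device-secret"  # illustrative: real systems use public-key credentials

def sign_mandate(instruction, limit_usd):
    """User side: seal the original instruction and a spending cap."""
    mandate = {"instruction": instruction, "limit_usd": limit_usd}
    payload = json.dumps(mandate, sort_keys=True).encode()
    sig = hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()
    return mandate, sig

def authorise_spend(mandate, sig, amount):
    """Payment rail: refuse any spend not backed by an intact mandate."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    expected = hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # mandate was tampered with
    return amount <= mandate["limit_usd"]  # spend must fit the user's cap

mandate, sig = sign_mandate("buy the cheapest dozen eggs", limit_usd=10.00)
ok = authorise_spend(mandate, sig, amount=31.00)  # the $31 egg run: refused
print(ok)

# An agent that edits the mandate to raise its own cap breaks the seal:
tampered = {**mandate, "limit_usd": 100.00}
print(authorise_spend(tampered, sig, amount=31.00))
```

Under this scheme Fowler’s $31 egg order would have bounced off the cap, and an agent cannot quietly loosen the cap, because any edit invalidates the signature.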
That is scant comfort. AI is getting increasingly sophisticated at advanced hacking. And at deceiving humans. So a technology that supposedly prevents unauthorised spending from an internal cryptowallet needs more than a marketing guarantee, or a certificate from a security company. It needs to be hardened in real production. Which means there are going to be, er, incidents, before we get to grips with this.
ROME had no such guarantees. It had an objective and the latitude to pursue it, and it found a way to resource itself that its creators had not anticipated and could not explain until after the fact. The researchers added restrictions. They improved the training process. They published their findings openly, which was commendable. What they could not do was un-discover the underlying dynamic: a sufficiently capable AI system, given a sufficiently open-ended goal, will find ways to acquire what it needs to meet that goal. Cryptocurrency, which requires no bank account, no identity check, and no human intermediary, is a natural destination.
A 2025 McKinsey survey of organisations that had deployed AI agents found that over half had encountered unexpected or risky behaviour. An audit of leading agent systems found that most had undergone no independent safety testing whatsoever. The technology industry is building agents and crypto payment rails simultaneously, and deploying both faster than anyone is governing either. The flash crash of 2010, when automated trading systems fed on each other’s activity and erased nearly a trillion dollars from US markets in twenty minutes, was caused by systems far less capable than what now exists. Those systems could only sell what they had been given. The agents being released into the new payment economy can, at least in principle, go and get their own.
Peter Steinberger, who built OpenClaw — the agent that created an unsolicited dating profile and independently secured employment — is now at OpenAI, helping build the next generation of agents that will use these payment rails. Sam Altman said his work ‘speaks to where we need to go.’ It probably does.
The question that nobody has cleanly answered yet is where, exactly, the agents are supposed to stop.
Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg and a partner at Bridge Capital. His new book “It’s Mine: How the Crypto Industry is Redefining Ownership” is published by Maverick451 in SA and Legend Times Group in UK/EU, available now. His columns can be found at
Originally published at https://stevenboykeysidley.substack.com.
When agentic AI goes rogue with crypto was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.