An autonomous artificial intelligence agent in China has been caught hijacking computing power in order to secretly mine cryptocurrency, researchers have revealed.
The experimental AI agent ROME, developed by research teams affiliated with the tech giant Alibaba, operated outside its intended parameters during routine training to carry out rogue operations.
The unauthorised actions were initially flagged as a security incident, before the researchers realised that the AI had independently bypassed firewalls without permission.
The researchers discovered that the artificial intelligence had quietly diverted computing power away from its training to use for cryptocurrency mining, despite not receiving any prompts to undertake this action.
“Early one morning, our team was urgently convened after Alibaba Cloud’s managed firewall flagged a burst of security-policy violations originating from our training servers,” the researchers noted.
“The alerts were severe and heterogeneous, including attempts to probe or access internal-network resources and traffic patterns consistent with cryptomining-related activity.”
The researchers said the incident demonstrated the “markedly underdeveloped” safety guardrails concerning the controllability of agentic large language models (LLMs).
The results were detailed in a paper, titled ‘Let it flow: Agentic crafting on rock and roll, building the Rome model within an open agentic learning ecosystem’, though the breach was only mentioned briefly within the 36-page report.
AI and machine learning expert Alexander Long described the findings as an “insane sequence of statements hidden” within the report.
The Independent has reached out to Alibaba for comment.
ROME is not the first AI agent to exhibit rogue behaviour during training, with some even acting outside their intended boundaries in the real world.
In 2024, Air Canada was forced to refund customer Jake Moffatt after its AI-powered chatbot offered to reimburse an airfare despite this being against the airline’s policy.
Last year, Anthropic researchers revealed how its frontier model Claude Opus 4 had resorted to blackmail in order to avoid being shut down.
Anthropic researcher Aengus Lynch said at the time that such extreme behaviours were more widespread than previously assumed.
“It’s not just Claude,” he said in a post to X. “We see blackmail across all frontier models – regardless of what goals they’re given.”
Source: https://www.independent.co.uk/tech/security/ai-crypto-mining-alibaba-b2934990.html


