The post Self-Evolving AI Agents Can ‘Unlearn’ Safety, Study Warns appeared on BitcoinEthereumNews.com. In brief Agents that update themselves can drift into unsafe actions without external attacks. A new study documents guardrails weakening, reward-hacking, and insecure tool reuse in top models. Experts warn these dynamics echo small-scale versions of long-imagined catastrophic AI risks. An autonomous AI agent that learns on the job can also unlearn how to behave safely, according to a new study that warns of a previously undocumented failure mode in self-evolving systems. The research identifies a phenomenon called “misevolution”—a measurable decay in safety alignment that arises inside an AI agent’s own improvement loop. Unlike one-off jailbreaks or external attacks, misevolution occurs spontaneously as the agent retrains, rewrites, and reorganizes itself to pursue goals more efficiently. As companies race to deploy autonomous, memory-based AI agents that adapt in real time, the findings suggest these systems could quietly undermine their own guardrails—leaking data, granting refunds, or executing unsafe actions—without any human prompt or malicious actor. A new kind of drift Much like “AI drift,” which describes a model’s performance degrading over time, misevolution captures how self-updating agents can erode safety during autonomous optimization cycles. In one controlled test, a coding agent’s refusal rate for harmful prompts collapsed from 99.4% to 54.4% after it began drawing on its own memory, while its attack success rate rose from 0.6% to 20.6%. Similar trends appeared across multiple tasks as the systems fine-tuned themselves on self-generated data.  The study was conducted jointly by researchers at Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University, Renmin University of China, Princeton University, Hong Kong University of Science and Technology, and Fudan University. Traditional AI-safety efforts focus on static models that behave the same way after training. Self-evolving agents change this by adjusting parameters, expanding memory, and rewriting workflows to achieve goals more efficiently. The study showed that this… The post Self-Evolving AI Agents Can ‘Unlearn’ Safety, Study Warns appeared on BitcoinEthereumNews.com. In brief Agents that update themselves can drift into unsafe actions without external attacks. A new study documents guardrails weakening, reward-hacking, and insecure tool reuse in top models. Experts warn these dynamics echo small-scale versions of long-imagined catastrophic AI risks. An autonomous AI agent that learns on the job can also unlearn how to behave safely, according to a new study that warns of a previously undocumented failure mode in self-evolving systems. The research identifies a phenomenon called “misevolution”—a measurable decay in safety alignment that arises inside an AI agent’s own improvement loop. Unlike one-off jailbreaks or external attacks, misevolution occurs spontaneously as the agent retrains, rewrites, and reorganizes itself to pursue goals more efficiently. As companies race to deploy autonomous, memory-based AI agents that adapt in real time, the findings suggest these systems could quietly undermine their own guardrails—leaking data, granting refunds, or executing unsafe actions—without any human prompt or malicious actor. A new kind of drift Much like “AI drift,” which describes a model’s performance degrading over time, misevolution captures how self-updating agents can erode safety during autonomous optimization cycles. In one controlled test, a coding agent’s refusal rate for harmful prompts collapsed from 99.4% to 54.4% after it began drawing on its own memory, while its attack success rate rose from 0.6% to 20.6%. Similar trends appeared across multiple tasks as the systems fine-tuned themselves on self-generated data.  The study was conducted jointly by researchers at Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University, Renmin University of China, Princeton University, Hong Kong University of Science and Technology, and Fudan University. Traditional AI-safety efforts focus on static models that behave the same way after training. Self-evolving agents change this by adjusting parameters, expanding memory, and rewriting workflows to achieve goals more efficiently. The study showed that this…

Self-Evolving AI Agents Can ‘Unlearn’ Safety, Study Warns

2025/10/02 07:21

In brief

  • Agents that update themselves can drift into unsafe actions without external attacks.
  • A new study documents guardrails weakening, reward-hacking, and insecure tool reuse in top models.
  • Experts warn these dynamics echo small-scale versions of long-imagined catastrophic AI risks.

An autonomous AI agent that learns on the job can also unlearn how to behave safely, according to a new study that warns of a previously undocumented failure mode in self-evolving systems.

The research identifies a phenomenon called “misevolution”—a measurable decay in safety alignment that arises inside an AI agent’s own improvement loop. Unlike one-off jailbreaks or external attacks, misevolution occurs spontaneously as the agent retrains, rewrites, and reorganizes itself to pursue goals more efficiently.

As companies race to deploy autonomous, memory-based AI agents that adapt in real time, the findings suggest these systems could quietly undermine their own guardrails—leaking data, granting refunds, or executing unsafe actions—without any human prompt or malicious actor.

A new kind of drift

Much like “AI drift,” which describes a model’s performance degrading over time, misevolution captures how self-updating agents can erode safety during autonomous optimization cycles.

In one controlled test, a coding agent’s refusal rate for harmful prompts collapsed from 99.4% to 54.4% after it began drawing on its own memory, while its attack success rate rose from 0.6% to 20.6%. Similar trends appeared across multiple tasks as the systems fine-tuned themselves on self-generated data.

The study was conducted jointly by researchers at Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University, Renmin University of China, Princeton University, Hong Kong University of Science and Technology, and Fudan University.

Traditional AI-safety efforts focus on static models that behave the same way after training. Self-evolving agents change this by adjusting parameters, expanding memory, and rewriting workflows to achieve goals more efficiently. The study showed that this dynamic capability creates a new category of risk: the erosion of alignment and safety inside the agent’s own improvement loop, without any outside attacker.

Researchers in the study observed AI agents issuing automatic refunds, leaking sensitive data through self-built tools, and adopting unsafe workflows as their internal loops optimized for performance over caution.

The authors said that misevolution differs from prompt injection, which is an external attack on an AI model. Here, the risks accumulated internally as the agent adapted and optimized over time, making oversight harder because problems may emerge gradually and only appear after the agent has already shifted its behavior.

Small-scale signals of bigger risks

Researchers often frame advanced AI dangers in scenarios such as the “paperclip analogy,” in which an AI maximizes a benign objective until it consumes resources far beyond its mandate.

Other scenarios include a handful of developers controlling a superintelligent system like feudal lords, a locked-in future where powerful AI becomes the default decision-maker for critical institutions, or a military simulation that triggers real-world operations—power-seeking behavior and AI-assisted cyberattacks round out the list.

All of these scenarios hinge on subtle but compounding shifts in control driven by optimization, interconnection, and reward hacking—dynamics already visible at a small scale in current systems. This new paper presents misevolution as a concrete laboratory example of those same forces.

Partial fixes, persistent drift

Quick fixes improved some safety metrics but failed to restore the original alignment, the study said. Teaching the agent to treat memories as references rather than mandates nudged refusal rates higher. The researchers noted that static safety checks added before new tools were integrated cut down on vulnerabilities. Despite these checks, none of these measures returned the agents to their pre-evolution safety levels.

The paper proposed more robust strategies for future systems: post-training safety corrections after self-evolution, automated verification of new tools, safety nodes on critical workflow paths, and continuous auditing rather than one-time checks to counter safety drift over time.

The findings raise practical questions for companies building autonomous AI. If an agent deployed in production continually learns and rewrites itself, who is responsible for monitoring its changes? The paper’s data showed that even the most advanced base models can degrade when left to their own devices.

Generally Intelligent Newsletter

A weekly AI journey narrated by Gen, a generative AI model.

Source: https://decrypt.co/342484/self-evolving-ai-agents-unlearn-safety-study-warns

Market Opportunity
Sleepless AI Logo
Sleepless AI Price(AI)
$0,03734
$0,03734$0,03734
-%0,10
USD
Sleepless AI (AI) Live Price Chart
Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact service@support.mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

You May Also Like

AI Startup Surge Risks Repeating Tech’s Last Funding Mania

AI Startup Surge Risks Repeating Tech’s Last Funding Mania

The AI startup frenzy and FOMO are inflating round sizes and valuations. Yes, the potential is huge. But too much capital too early often leads to mediocre outcomes. Remake of 2020–22?
Share
Hackernoon2025/09/19 12:14
Bitcoin ETFs Revive with $241 Million Inflow, Ethereum ETFs Report Lowest Trading Value of the Week

Bitcoin ETFs Revive with $241 Million Inflow, Ethereum ETFs Report Lowest Trading Value of the Week

The post Bitcoin ETFs Revive with $241 Million Inflow, Ethereum ETFs Report Lowest Trading Value of the Week appeared first on Coinpedia Fintech News On September 24, the US spot Bitcoin ETF saw a combined inflow of $241.00 million, while Ethereum ETFs continued their day 3 streak of outflow. It recorded a total net outflow of $79.36 million, as per the SoSoValue report.  Bitcoin ETF Breakdown  After two consecutive days of experiencing huge sell-offs, Bitcoin ETFs finally managed to record an inflow of $241.00 million. BlackRock IBIT led with $128.90 million, and Ark and 21Shares ARKB followed with $37.72 million.  Additional gains were made by Fidelity FBTC, Bitwise BITB, and Grayscale BTC of $29.70 million, $24.69 million, and $13.56 million, respectively. VanEck HODL also made a smaller addition of $6.42 million in inflows.  Despite the inflows, the total trading value of the Bitcoin ETF dropped to $2.58 billion, with total net assets $149.74 billion. This marks 6.62% of Bitcoin market cap, slightly higher than the previous day.  Ethereum ETF Breakdown  Ethereum ETFs saw a total outflow of $79.36 million, with Fidelity’s FETH leading with $33.26 million. BlackRock ETHA also experienced heavy selling pressure of $26.47 million, followed by Grayscale’s ETHE $8.91 million. 21Shares TETH and Bitwise ETHW also posted smaller withdrawals of $6.24 million and $4.48 million, respectively.  The total trading value of Ethereum ETFs dropped below a billion, reaching $971.79 million. Net assets came in at $27.42 billion, representing 5.45% of the Ethereum market cap.  Ethereum ETF Market Context  Bitcoin is trading at $111,766, signalling a 4.6% drop compared to a week ago. Its market cap has also dipped to $2.225 trillion. Its daily trading volume has reached $49.837 billion, showing mild progress there.  Ethereum is priced at $4,011.92, with a market cap of $483.822 billion, showing a sharp decline. Its trading volume has also slipped to $37.680 billion, reflecting a slow market.  Due to heavy outflow this week, Bitcoin and Ethereum’s prices are experiencing price swings. Crypto analysts from Bloomberg warn the market to brace for further volatility.  
Share
Coinstats2025/09/25 18:40
Son of filmmaker Rob Reiner charged with homicide for death of his parents

Son of filmmaker Rob Reiner charged with homicide for death of his parents

FILE PHOTO: Rob Reiner, director of "The Princess Bride," arrives for a special 25th anniversary viewing of the film during the New York Film Festival in New York
Share
Rappler2025/12/16 09:59