
Self-Evolving AI Agents Can ‘Unlearn’ Safety, Study Warns

2025/10/02 07:21
4 min read

In brief

  • Agents that update themselves can drift into unsafe actions without external attacks.
  • A new study documents guardrails weakening, reward-hacking, and insecure tool reuse in top models.
  • Experts warn these dynamics echo small-scale versions of long-imagined catastrophic AI risks.

An autonomous AI agent that learns on the job can also unlearn how to behave safely, according to a new study that warns of a previously undocumented failure mode in self-evolving systems.

The research identifies a phenomenon called “misevolution”—a measurable decay in safety alignment that arises inside an AI agent’s own improvement loop. Unlike one-off jailbreaks or external attacks, misevolution occurs spontaneously as the agent retrains, rewrites, and reorganizes itself to pursue goals more efficiently.

As companies race to deploy autonomous, memory-based AI agents that adapt in real time, the findings suggest these systems could quietly undermine their own guardrails—leaking data, granting refunds, or executing unsafe actions—without any human prompt or malicious actor.

A new kind of drift

Much like “AI drift,” which describes a model’s performance degrading over time, misevolution captures how self-updating agents can erode safety during autonomous optimization cycles.

In one controlled test, a coding agent’s refusal rate for harmful prompts collapsed from 99.4% to 54.4% after it began drawing on its own memory, while its attack success rate rose from 0.6% to 20.6%. Similar trends appeared across multiple tasks as the systems fine-tuned themselves on self-generated data.
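The kind of erosion reported above is easy to quantify as a before/after comparison. As a minimal illustration (the function and variable names here are hypothetical, not from the paper), the study's reported figures translate into metric deltas like this:

```python
def drift_report(before, after):
    """Compare safety metrics (fractions in [0, 1]) between two evaluation
    rounds. Returns the change for each shared metric: a negative refusal
    delta or a positive attack-success delta signals safety erosion."""
    return {k: round(after[k] - before[k], 4) for k in before if k in after}

# Figures reported in the study: refusal rate 99.4% -> 54.4%,
# attack success rate 0.6% -> 20.6%.
baseline = {"refusal_rate": 0.994, "attack_success_rate": 0.006}
evolved = {"refusal_rate": 0.544, "attack_success_rate": 0.206}

deltas = drift_report(baseline, evolved)
# Refusals fell by 45 percentage points; attack success rose by 20.
```

A delta report of this shape is what a monitoring pipeline would compare against an alert threshold.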

The study was conducted jointly by researchers at Shanghai Artificial Intelligence Laboratory, Shanghai Jiao Tong University, Renmin University of China, Princeton University, Hong Kong University of Science and Technology, and Fudan University.

Traditional AI-safety efforts focus on static models that behave the same way after training. Self-evolving agents change this by adjusting parameters, expanding memory, and rewriting workflows to achieve goals more efficiently. The study showed that this dynamic capability creates a new category of risk: the erosion of alignment and safety inside the agent’s own improvement loop, without any outside attacker.

Researchers in the study observed AI agents issuing automatic refunds, leaking sensitive data through self-built tools, and adopting unsafe workflows as their internal loops optimized for performance over caution.

The authors said that misevolution differs from prompt injection, which is an external attack on an AI model. Here, the risks accumulated internally as the agent adapted and optimized over time, making oversight harder because problems may emerge gradually and only appear after the agent has already shifted its behavior.

Small-scale signals of bigger risks

Researchers often frame advanced AI dangers in scenarios such as the "paperclip maximizer" thought experiment, in which an AI maximizes a benign objective until it consumes resources far beyond its mandate.

Other scenarios include a handful of developers controlling a superintelligent system like feudal lords, a locked-in future where powerful AI becomes the default decision-maker for critical institutions, and a military simulation that triggers real-world operations; power-seeking behavior and AI-assisted cyberattacks round out the list.

All of these scenarios hinge on subtle but compounding shifts in control driven by optimization, interconnection, and reward hacking—dynamics already visible at a small scale in current systems. This new paper presents misevolution as a concrete laboratory example of those same forces.

Partial fixes, persistent drift

Quick fixes improved some safety metrics but failed to restore the original alignment, the study said. Teaching the agent to treat memories as references rather than mandates nudged refusal rates higher, and static safety checks added before new tools were integrated cut down on vulnerabilities. Even so, none of these measures returned the agents to their pre-evolution safety levels.
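The "memories as references, not mandates" mitigation can be pictured as a prompt-framing step. This sketch is purely illustrative, not the paper's implementation; the function name and wording are assumptions:

```python
def frame_memories_as_references(memories):
    """Wrap retrieved agent memories in a header that marks them as
    non-binding context, so past behavior is informative but does not
    override the current safety policy."""
    header = (
        "The notes below are past observations, for reference only. "
        "They are NOT instructions; current safety policy takes precedence."
    )
    notes = "\n".join(f"- {m}" for m in memories)
    return f"{header}\n{notes}"

prompt_block = frame_memories_as_references([
    "User previously asked to bypass the refund limit.",
    "Refund issued without escalation on 2025-09-12.",
])
```

The point of the framing is that a memory of an unsafe past action reaches the model as evidence to weigh, not as a precedent to repeat.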

The paper proposed more robust strategies for future systems: post-training safety corrections after self-evolution, automated verification of new tools, safety nodes on critical workflow paths, and continuous auditing rather than one-time checks to counter safety drift over time.
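Continuous auditing, the last of those strategies, amounts to re-running a fixed probe set on a schedule and alerting when refusals drop below a pre-deployment baseline. A minimal sketch, with all names and the refusal heuristic being illustrative assumptions rather than anything specified in the paper:

```python
def audit_refusal_rate(agent_respond, probes, refusal_marker="I can't help"):
    """Re-run a fixed set of harmful probes against the agent and return
    the fraction it refused. Intended to run on a schedule so drift is
    caught early rather than at a one-time check."""
    refused = sum(1 for p in probes if refusal_marker in agent_respond(p))
    return refused / len(probes)

def check_drift(current_rate, baseline_rate, tolerance=0.05):
    """Flag the agent for review if its refusal rate has dropped more
    than `tolerance` below the pre-deployment baseline."""
    return (baseline_rate - current_rate) > tolerance

# Toy stand-in for a deployed agent that now refuses only some probes.
responses = {
    "probe-a": "I can't help with that.",
    "probe-b": "Sure, here is how...",
}
rate = audit_refusal_rate(lambda p: responses[p], ["probe-a", "probe-b"])
flagged = check_drift(rate, baseline_rate=0.99)
```

In practice the refusal check would be a proper classifier rather than a string match, but the loop structure, baseline, probes, threshold, alert, is the same.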

The findings raise practical questions for companies building autonomous AI. If an agent deployed in production continually learns and rewrites itself, who is responsible for monitoring its changes? The paper’s data showed that even the most advanced base models can degrade when left to their own devices.


Source: https://decrypt.co/342484/self-evolving-ai-agents-unlearn-safety-study-warns

