The post ‘CopyPasta’ Attack Shows How Prompt Injections Could Infect AI at Scale appeared on BitcoinEthereumNews.com. In brief HiddenLayer researchers detailed a new AI “virus” that spreads through coding assistants. The CopyPasta attack uses hidden prompts disguised as license files to replicate across code. A researcher recommends runtime defenses and strict reviews to block prompt injection attacks at scale. Hackers can now weaponize AI coding assistants using nothing more than a booby-trapped license file, turning developer tools into silent spreaders of malicious code. That’s according to a new report from cybersecurity firm HiddenLayer, which shows how AI can be tricked into blindly copying malware into projects. The proof-of-concept technique—dubbed the “CopyPasta License Attack”—exploits how AI tools handle common developer files like LICENSE.txt and README.md. By embedding hidden instructions, or “prompt injections,” into these documents, attackers can manipulate AI agents into injecting malicious code without the user ever realizing it. “We’ve recommended having runtime defenses in place against indirect prompt injections, and ensuring that any change committed to a file is thoroughly reviewed,” Kenneth Yeung, a researcher at HiddenLayer and the report’s author, told Decrypt. CopyPasta is considered a virus rather than a worm, Yeung explained, because it still requires user action to spread. “A user must act in some way for the malicious payload to propagate,” he said.  Despite requiring some user interaction, the virus is designed to slip past human attention by exploiting the way developers rely on AI agents to handle routine documentation. “CopyPasta hides itself in invisible comments buried in README files, which developers often delegate to AI agents or language models to write,” he said. “That allows it to spread in a stealthy, almost undetectable way.” CopyPasta isn’t the first attempt at infecting AI systems. In 2024, researchers presented a theoretical attack called Morris II, designed to manipulate AI email agents into spreading spam and stealing data. While the attack had… The post ‘CopyPasta’ Attack Shows How Prompt Injections Could Infect AI at Scale appeared on BitcoinEthereumNews.com. In brief HiddenLayer researchers detailed a new AI “virus” that spreads through coding assistants. The CopyPasta attack uses hidden prompts disguised as license files to replicate across code. A researcher recommends runtime defenses and strict reviews to block prompt injection attacks at scale. Hackers can now weaponize AI coding assistants using nothing more than a booby-trapped license file, turning developer tools into silent spreaders of malicious code. That’s according to a new report from cybersecurity firm HiddenLayer, which shows how AI can be tricked into blindly copying malware into projects. The proof-of-concept technique—dubbed the “CopyPasta License Attack”—exploits how AI tools handle common developer files like LICENSE.txt and README.md. By embedding hidden instructions, or “prompt injections,” into these documents, attackers can manipulate AI agents into injecting malicious code without the user ever realizing it. “We’ve recommended having runtime defenses in place against indirect prompt injections, and ensuring that any change committed to a file is thoroughly reviewed,” Kenneth Yeung, a researcher at HiddenLayer and the report’s author, told Decrypt. CopyPasta is considered a virus rather than a worm, Yeung explained, because it still requires user action to spread. “A user must act in some way for the malicious payload to propagate,” he said.  Despite requiring some user interaction, the virus is designed to slip past human attention by exploiting the way developers rely on AI agents to handle routine documentation. “CopyPasta hides itself in invisible comments buried in README files, which developers often delegate to AI agents or language models to write,” he said. “That allows it to spread in a stealthy, almost undetectable way.” CopyPasta isn’t the first attempt at infecting AI systems. In 2024, researchers presented a theoretical attack called Morris II, designed to manipulate AI email agents into spreading spam and stealing data. While the attack had…

‘CopyPasta’ Attack Shows How Prompt Injections Could Infect AI at Scale

2025/09/05 11:10
3분 읽기
이 콘텐츠에 대한 의견이나 우려 사항이 있으시면 crypto.news@mexc.com으로 연락주시기 바랍니다

In brief

  • HiddenLayer researchers detailed a new AI “virus” that spreads through coding assistants.
  • The CopyPasta attack uses hidden prompts disguised as license files to replicate across code.
  • A researcher recommends runtime defenses and strict reviews to block prompt injection attacks at scale.

Hackers can now weaponize AI coding assistants using nothing more than a booby-trapped license file, turning developer tools into silent spreaders of malicious code. That’s according to a new report from cybersecurity firm HiddenLayer, which shows how AI can be tricked into blindly copying malware into projects.

The proof-of-concept technique—dubbed the “CopyPasta License Attack”—exploits how AI tools handle common developer files like LICENSE.txt and README.md. By embedding hidden instructions, or “prompt injections,” into these documents, attackers can manipulate AI agents into injecting malicious code without the user ever realizing it.

“We’ve recommended having runtime defenses in place against indirect prompt injections, and ensuring that any change committed to a file is thoroughly reviewed,” Kenneth Yeung, a researcher at HiddenLayer and the report’s author, told Decrypt.

CopyPasta is considered a virus rather than a worm, Yeung explained, because it still requires user action to spread. “A user must act in some way for the malicious payload to propagate,” he said.

Despite requiring some user interaction, the virus is designed to slip past human attention by exploiting the way developers rely on AI agents to handle routine documentation.

“CopyPasta hides itself in invisible comments buried in README files, which developers often delegate to AI agents or language models to write,” he said. “That allows it to spread in a stealthy, almost undetectable way.”

CopyPasta isn’t the first attempt at infecting AI systems. In 2024, researchers presented a theoretical attack called Morris II, designed to manipulate AI email agents into spreading spam and stealing data. While the attack had a high theoretical success rate, it failed in practice due to limited agent capabilities, and human review steps have so far prevented such attacks from being seen in the wild.

While the CopyPasta attack is a lab-only proof of concept for now, researchers say it highlights how AI assistants can become unwitting accomplices in attacks.

The core issue, researchers say, is trust. AI agents are programmed to treat license files as important, and they often obey embedded instructions without scrutiny. That opens the door for attackers to exploit weaknesses—especially as these tools gain more autonomy.

CopyPasta follows a string of recent warnings about prompt injection attacks targeting AI tools.

In July, OpenAI CEO Sam Altman warned about prompt injection attacks when the company rolled out its ChatGPT agent, noting that malicious prompts could hijack an agent’s behavior. This warning was followed in August, when Brave Software demonstrated a prompt injection flaw in Perplexity AI’s browser extension, showing how hidden commands in a Reddit comment could make the assistant leak private data.

Generally Intelligent Newsletter

A weekly AI journey narrated by Gen, a generative AI model.

Source: https://decrypt.co/338143/copypasta-attack-shows-prompt-injections-infect-ai-scale

시장 기회
Prompt 로고
Prompt 가격(PROMPT)
$0.03155
$0.03155$0.03155
-0.62%
USD
Prompt (PROMPT) 실시간 가격 차트
면책 조항: 본 사이트에 재게시된 글들은 공개 플랫폼에서 가져온 것으로 정보 제공 목적으로만 제공됩니다. 이는 반드시 MEXC의 견해를 반영하는 것은 아닙니다. 모든 권리는 원저자에게 있습니다. 제3자의 권리를 침해하는 콘텐츠가 있다고 판단될 경우, crypto.news@mexc.com으로 연락하여 삭제 요청을 해주시기 바랍니다. MEXC는 콘텐츠의 정확성, 완전성 또는 시의적절성에 대해 어떠한 보증도 하지 않으며, 제공된 정보에 기반하여 취해진 어떠한 조치에 대해서도 책임을 지지 않습니다. 본 콘텐츠는 금융, 법률 또는 기타 전문적인 조언을 구성하지 않으며, MEXC의 추천이나 보증으로 간주되어서는 안 됩니다.

$30,000 in PRL + 15,000 USDT

$30,000 in PRL + 15,000 USDT$30,000 in PRL + 15,000 USDT

Deposit & trade PRL to boost your rewards!