
Criminals are ‘vibe hacking’ with AI at unprecedented levels: Anthropic

2025/08/28 11:02
1 min read

AI company Anthropic warns its AI chatbot Claude is being used to perform large-scale cyberattacks, with ransoms exceeding $500,000 in some cases.

Despite “sophisticated” guardrails, AI infrastructure firm Anthropic says cybercriminals are still finding ways to misuse its AI chatbot Claude to carry out large-scale cyberattacks. 

In a “Threat Intelligence” report released Wednesday, members of Anthropic’s Threat Intelligence team, including Alex Moix, Ken Lebedev and Jacob Klein, shared several cases where criminals had misused the Claude chatbot, with some attacks demanding over $500,000 in ransom.

They found that the chatbot was used not only to provide technical advice to the criminals, but also to directly execute hacks on their behalf through “vibe hacking,” allowing them to perform attacks with only basic knowledge of coding and encryption.


