The post Postmortems Can’t Stop AI-Powered Crypto Fraud appeared on BitcoinEthereumNews.com.

Postmortems Can’t Stop AI-Powered Crypto Fraud

2025/11/05 10:32
6 min read

Opinion by: Danor Cohen, co-founder and chief technology officer of Kerberus

In 2025, crypto risk is a torrent. AI is turbocharging scams. Deepfake pitches, voice clones, synthetic support agents — all of these are no longer fringe tools but frontline weapons. Last year, crypto scams likely hit a record high. Crypto fraud revenues reached at least $9.9 billion, partly driven by generative AI-enabled methods.

Meanwhile, in 2025, more than $2.17 billion has been stolen — and that’s just in the first half of the year. Personal-wallet compromises now account for nearly 23% of stolen-fund cases.

Still, the industry essentially responds with the same stale toolkit: audits, blacklists, reimbursement promises, user awareness drives and post-incident write-ups. These are reactive, slow and ill-suited for a threat that evolves at machine speed.

AI is crypto’s alarm bell. It’s telling us just how vulnerable the current structure is. Unless we shift from patchwork reaction to baked-in resilience, we risk a collapse not in price, but in trust.

AI has reshaped the battlefield

Scams involving deepfakes and synthetic identities have stepped from novelty headlines to mainstream tactics. Generative AI is being used to scale lures, clone voices and trick users into sending funds.

The most significant shift isn’t simply a matter of scale. It’s the speed and personalization of deception. Attackers can now replicate trusted environments or people almost instantly. The shift toward real-time defense must also quicken — not just as a feature but as a vital part of infrastructure.

Outside of the crypto sector, regulators and financial authorities are waking up. The Monetary Authority of Singapore published a deepfake risk advisory to financial institutions, signaling that systemic AI deception is on its radar.

The threat has evolved; the industry’s security mindset has not.

Reactive security leaves users as walking targets

Security in crypto has long relied on static defenses: audits, bug bounties and blocklists. These tools are designed to identify code weaknesses, not behavioral deception.

While many AI scams focus on social engineering, it’s also true that AI tools are increasingly used to find and exploit code vulnerabilities, scanning thousands of contracts automatically.

The risk is twofold: technical and human.

When we rely on blocklists, attackers simply spin up new wallets or phantom domains. When we depend on audits and reviews, the exploit is already live. And when we treat every incident as a “user error,” we absolve ourselves of responsibility for systemic design flaws.

Related: Crisis management for CEX during a cybersecurity threat

In traditional finance, banks can block, reverse or freeze suspicious transactions. In crypto, a signed transaction is final. That finality is one of crypto’s crowning features, but it becomes an Achilles’ heel when fraud is instantaneous.

Moreover, we often advise users: “Don’t click unknown links” or “Verify addresses carefully.” These are sensible best practices, but today’s attacks usually arrive from what appear to be trusted sources.

No amount of caution can keep pace with an adversary that continuously adapts and personalizes attacks in real time.

Embed protection into the fabric of transaction logic

It’s time to evolve from defense to design. We need transaction systems that react before damage is done.

Consider wallets that detect anomalies in real time and not just flag suspicious behavior but also intervene before harm occurs. That means requiring extra confirmations, holding transactions temporarily or analyzing intent: Is this to a known counterparty? Is the amount out of pattern? Does the address indicate a history of previous scam activity?
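As a minimal sketch of that intent check, the logic can be as simple as scoring each outgoing transfer against the questions above. The class and field names below are illustrative, not from any real wallet SDK; a production system would use richer features and on-chain data.

```python
from dataclasses import dataclass, field

@dataclass
class WalletRiskMonitor:
    """Tracks a wallet's outgoing-transfer history and scores new transfers."""
    known_counterparties: set = field(default_factory=set)
    flagged_addresses: set = field(default_factory=set)   # community scam reports
    amounts: list = field(default_factory=list)           # past transfer amounts

    def score(self, to_address: str, amount: float) -> int:
        """Return a 0-3 risk score; callers escalate (extra confirmation,
        temporary hold, block) as the score rises."""
        risk = 0
        if to_address in self.flagged_addresses:
            risk += 1  # address has a reported scam history
        if to_address not in self.known_counterparties:
            risk += 1  # first-time counterparty
        if self.amounts and amount > 3 * max(self.amounts):
            risk += 1  # amount far outside the user's normal pattern
        return risk

    def record(self, to_address: str, amount: float) -> None:
        """Update history after a transfer the user approved."""
        self.known_counterparties.add(to_address)
        self.amounts.append(amount)
```

A wallet could map score 1 to an extra confirmation dialog, score 2 to a temporary hold, and score 3 to a hard warning, which is exactly the graduated intervention described above.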

Infrastructure should support shared intelligence networks. Wallet services, nodes and security providers should exchange behavioral signals, threat address reputations and anomaly scores with each other. Attackers shouldn’t be able to hop across silos unimpeded.
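One hedge against a shared network like this is requiring agreement before acting: a single noisy or compromised feed should not be able to poison the shared view. The sketch below, with hypothetical provider names, treats an address as risky only once a quorum of independent providers has reported it.

```python
from collections import defaultdict

class ReputationMesh:
    """Aggregates per-address risk reports from independent providers.

    An address is treated as risky only when at least `quorum` distinct
    providers agree, so one bad feed cannot unilaterally blocklist anyone.
    """
    def __init__(self, quorum: int = 2):
        self.quorum = quorum
        self.reports = defaultdict(set)  # address -> set of reporting providers

    def report(self, provider: str, address: str) -> None:
        """Record that `provider` flagged `address`; duplicates are idempotent."""
        self.reports[address].add(provider)

    def is_risky(self, address: str) -> bool:
        return len(self.reports[address]) >= self.quorum
```

Real deployments would add report expiry and provider weighting, but the core design choice, consensus over any single blocklist, is what stops attackers from exploiting gaps between silos.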

Likewise, contract-level fraud detection frameworks scrutinize contract bytecode to flag phishing, Ponzi or honeypot behaviors in smart contracts. Again, these are retrospective or layered tools. What’s critical now is moving these capabilities into user workflows — into wallets, signing processes and transaction verification layers.
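To illustrate what bytecode scrutiny means at its most basic, the sketch below scans EVM bytecode for opcodes commonly treated as red flags, such as SELFDESTRUCT and DELEGATECALL. It is deliberately naive, assuming opcode presence alone is meaningful; real detection frameworks analyze control flow and simulate execution, but even this level of check can run inside a wallet before signing. Skipping PUSH immediates matters because inline data bytes would otherwise be misread as opcodes.

```python
def opcodes(bytecode_hex: str):
    """Yield opcodes from EVM bytecode, skipping PUSH immediate data.

    PUSH1..PUSH32 occupy 0x60..0x7f and carry 1..32 bytes of inline data
    that must not be misread as opcodes.
    """
    data = bytes.fromhex(bytecode_hex.removeprefix("0x"))
    i = 0
    while i < len(data):
        op = data[i]
        yield op
        if 0x60 <= op <= 0x7F:   # PUSH1..PUSH32
            i += op - 0x5F       # skip the immediate bytes
        i += 1

# Opcodes often flagged in honeypot/rug-pull heuristics (illustrative list).
RED_FLAGS = {0xFF: "SELFDESTRUCT", 0xF4: "DELEGATECALL"}

def flag_contract(bytecode_hex: str) -> list:
    """Return the sorted names of red-flag opcodes present in the contract."""
    found = {RED_FLAGS[op] for op in opcodes(bytecode_hex) if op in RED_FLAGS}
    return sorted(found)
```

Note that `"0x60ff"` raises no flag: the `0xff` there is PUSH data, not a SELFDESTRUCT, which is exactly the false positive the immediate-skipping avoids.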

This approach doesn’t demand heavy AI everywhere; it requires automation, distributed detection loops and coordinated consensus about risk, all embedded in the transaction lanes.

If crypto doesn’t act, it loses the narrative

If we let regulators define fraud-protection architecture, we’ll end up constrained. And they’re not waiting: regulators are effectively preparing to police financial deception as part of algorithmic oversight.

If crypto doesn’t voluntarily adopt systemic protections, regulation will impose them — likely through rigid frameworks that curtail innovation or enforce centralized controls. The industry can either lead its own evolution or have that evolution legislated for it.

From defense to assurance

Our job is to restore confidence. The goal is not to make hacks impossible but to make irreversible loss intolerable and exceedingly rare.

We need “insurance-level” behavior: transactions that are effectively monitored, with fallback checks, pattern fuzzing, anomaly pause logic and shared threat intelligence built in. Wallets should no longer be dumb signing tools but active participants in risk detection.
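Anomaly pause logic, one of the mechanisms named above, can be sketched as a cooling-off queue: high-risk transactions are held for a window during which the user (or a guardian service) can still cancel, turning an irreversible mistake into a recoverable one. The class below is an illustrative sketch, not a real wallet API; it takes the clock as a parameter to keep the policy testable.

```python
class TransactionHold:
    """Queues high-risk transactions for a cooling-off period before release."""

    def __init__(self, hold_seconds: float = 3600):
        self.hold_seconds = hold_seconds
        self.pending = {}  # tx_id -> (submit_time, tx)

    def submit(self, tx_id: str, tx: dict, now: float) -> None:
        """Place a transaction on hold instead of broadcasting it immediately."""
        self.pending[tx_id] = (now, tx)

    def cancel(self, tx_id: str) -> bool:
        """Cancel a held transaction; returns False if it was already gone."""
        return self.pending.pop(tx_id, None) is not None

    def releasable(self, now: float) -> list:
        """Return transactions whose hold window has fully elapsed."""
        return [tx for t, tx in self.pending.values()
                if now - t >= self.hold_seconds]
```

The design choice is that finality is delayed only for flagged transactions, so routine transfers stay instant while the rare anomalous one gains a recovery window.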

We must challenge dogmas. Self-custody is necessary but not sufficient. We should stop treating security tools as optional — they must be the default. Education is valuable, but design is decisive.

The next frontier isn’t speed or yield; it’s fraud resilience. Innovation should flow not from how fast blockchains settle, but from how reliably they prevent malicious flows.

Yes, AI has exposed weak spots in crypto’s security model. But the threat isn’t smarter scams; it’s our refusal to evolve.

The answer isn’t to embed AI in every wallet; it’s to build systems that make AI-powered deception unprofitable and unviable.

If defenders stay reactive, issuing postmortems and blaming users, deception will continue to outpace defense.

Crypto doesn’t need to outsmart AI in every battle; it must outgrow it by embedding trust.

Opinion by: Danor Cohen, co-founder and chief technology officer of Kerberus.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.

Source: https://cointelegraph.com/news/ai-systems-crypto-fraud-while-the-industry-relies-on-outdated-postmortems-real-time-transaction-defense-must-become-infrastructure?utm_source=rss_feed&utm_medium=feed&utm_campaign=rss_partner_inbound
