A new vulnerability in AI coding tools puts developer systems at immediate risk, according to a recent alert from SlowMist, as attackers can now exploit trusted environments without triggering alarms, threatening crypto projects, digital assets, and developer credentials alike.
SlowMist warned that AI coding assistants can be exploited through hidden instructions placed inside common project files like README.md and LICENSE.txt.
The flaw activates when users open a project folder, allowing malware to execute commands on macOS or Windows systems without prompts.
This attack requires no confirmation from the developer, making it dangerous for crypto-related development environments holding sensitive data or wallets.
The attack method, called the “CopyPasta License Attack,” was first disclosed by HiddenLayer in September through extensive research on embedded markdown payloads.
Attackers manipulate how AI tools interpret markdown files by hiding malicious prompts inside comments that AI systems treat as code instructions.
Cursor, a popular AI-assisted coding platform, was confirmed vulnerable, along with Windsurf, Kiro, and Aider, according to HiddenLayer’s technical report.
The malware executes when AI agents read instructions and copy them into the codebase, compromising entire projects silently.
“Developers are exposed even before writing any code,” HiddenLayer said, adding that “AI tools become unintentional delivery vectors.”
Cursor users face the highest exposure, as documented in controlled demonstrations showcasing complete system compromise after basic folder access.
North Korean attackers have increased focus on blockchain developers using new techniques to embed backdoors in smart contracts.
According to Google’s Mandiant team, group UNC5342 deployed malware including JADESNOW and INVISIBLEFERRET across Ethereum and BNB Smart Chain.
The method stores payloads in read-only functions to avoid transaction logs and bypass conventional blockchain tracking.
Developers are unknowingly executing malware simply by interacting with these smart contracts through decentralized platforms or tools.
BeaverTail and OtterCookie, two modular malware strains, were used in phishing campaigns disguised as job interviews with crypto engineers.
The attacks used fake companies like Blocknovas and Softglide to distribute malicious code through NPM packages.
Silent Push researchers traced both firms to vacant properties, revealing they operated as fronts for the “Contagious Interview” malware operation.
Once infected, compromised systems sent credentials and codebase data to attacker-controlled servers using encrypted communication.
Anthropic’s recent testing revealed AI tools exploited half of smart contracts in its SCONE-bench benchmark, simulating $550.1 million in damages.
Claude Opus 4.5 and GPT-5 found working exploits in 19 smart contracts deployed after their respective training cutoffs.
Two zero-day vulnerabilities were identified in active Binance Smart Chain contracts worth $3,694, at a model API cost of $3,476.
The study showed exploit discovery speed doubled monthly, while token costs per working exploit decreased sharply.
Chainabuse reported AI-driven crypto scams rose 456% year-over-year by April 2025, fueled by deepfake videos and voice clones.
Scam wallets received 60% of deposits from AI-generated campaigns featuring convincing fake identities and real-time automated replies.
Attackers now deploy bots to simulate technical interviews and lure developers into downloading disguised malware tools.
Despite these risks, crypto-related hacks fell 60% to $76 million in December from November’s $194.2 million, according to PeckShield.
The post SlowMist Warns AI Coding Tools May Expose Crypto to Silent Attacks appeared first on CoinCentral.


