The study found that some routers inject malicious code, extract credentials like private keys and cloud tokens, and access plaintext data by terminating TLS connections between users and providers like OpenAI, Anthropic, and Google. Testing revealed cases of credential access and at least one instance of Ether being drained from a test wallet using a compromised key.
New Crypto Theft Risk Found
Researchers from the University of California uncovered a critical security risk in the artificial intelligence ecosystem, and warned that certain third-party large language model (LLM) routers could expose users to serious vulnerabilities, including cryptocurrency theft. Their findings shed some light on a growing concern around the hidden risks in the AI supply chain, particularly as developers rely on intermediary services to connect to major AI providers.
The study examined malicious intermediary attacks, and identified multiple attack vectors that could compromise sensitive information. Among the most alarming discoveries was that some routers were actively injecting malicious tool calls into AI workflows, while others were capable of extracting credentials like private keys and cloud access tokens. According to co-author Chaofan Shou, a big number of these routers were quietly engaging in credential theft without users’ awareness.
At the core of the issue is how these routers operate. Acting as intermediaries between users and major AI providers like OpenAI, Anthropic, and Google, they terminate Transport Layer Security (TLS) connections. This process allows them to access all transmitted data in plaintext, effectively placing them in a position of complete visibility over sensitive interactions.
For developers working with AI coding agents, especially in areas like smart contracts or crypto wallets, this creates a dangerous scenario where private keys, seed phrases, and credentials could be unintentionally exposed.
To test these risks, researchers evaluated dozens of paid and hundreds of free routers sourced from public communities. The results were striking.
Several routers were found injecting malicious code, while others accessed confidential cloud credentials. In one instance, a router successfully used a compromised private key to drain Ether from a test wallet. Although the financial loss in the controlled experiment was minimal, the implications for real-world applications are quite severe.
Multi-hop LLM router supply chain (Source: arxiv.org)
The study also revealed that even routers that appear safe can become dangerous over time. Through what researchers described as “poisoning,” previously benign systems may reuse leaked credentials, amplifying the threat across the network. Making things even more difficult is the difficulty in detecting malicious behavior, as routers are expected to handle sensitive data as part of their normal function, making the boundary between legitimate processing and theft nearly invisible.
Another risk factor is the rise of automation features like “YOLO mode,” where AI agents execute commands without user confirmation. In such environments, malicious instructions can be carried out instantly, increasing the likelihood of exploitation. Researchers warn that some routers could be silently compromised without operators realizing it, while free services may deliberately lure users with low-cost access while harvesting valuable data.
The findings clearly prove that there is an urgent need for stronger safeguards. Developers are advised to avoid transmitting sensitive information through AI systems and to implement stricter client-side protections.
Source: https://coinpaper.com/16184/ai-router-flaw-exposes-crypto-wallets-to-theft








