Cybercrime Might Be the One Job AI Isn’t Taking, Study Suggests

In brief

  • Cambridge, Edinburgh, and Strathclyde researchers analyzed 97,895 cybercrime forum threads posted after ChatGPT’s launch.
  • “Dark AI” tools like WormGPT generated cultural buzz but produced almost no working malware, while jailbroken chatbots are increasingly hard to keep working for more than a few days.
  • The biggest measurable AI-driven crime is not hacking. It is mass-produced SEO spam, romance scams, and AI-generated nudes sold for a dollar each.

For three years, cybersecurity firms, governments, and AI labs have warned that generative AI would unleash a new generation of supercharged hackers. According to a new academic paper that actually went and looked, the supercharged hackers are mostly using ChatGPT to write spam and generate nudes for fun.

The study, titled Stand-Alone Complex or Vibercrime?, was published on arXiv by researchers from the universities of Cambridge, Edinburgh, and Strathclyde. It aims to understand how the cybercrime underground is actually adopting AI, not how cybersecurity vendors say it is.

“We present here one of the first attempts at a mixed-methods empirical study of early patterns of GenAI adoption in the cybercrime underground,” researchers wrote.

The team analyzed 97,895 forum threads posted after ChatGPT launched in November 2022, drawn from the Cambridge Cybercrime Centre’s CrimeBB dataset of underground and dark web forums. They ran topic models, manually read more than 3,200 threads, and ethnographically immersed themselves in the scene.

The conclusion is unflattering for the AI doom community: 97.3% of threads in the sample were classified as “other,” meaning not actually about using AI for crime at all. Only 1.9% involved someone using vibe coding tools.

‘Nothing more than an unrestricted ChatGPT’

Remember WormGPT, FraudGPT, and the wave of supposedly malicious chatbots that flooded headlines in 2023? The forum data tells a different story.

Most posts about “Dark AI” products, the researchers found, were people begging for free access, idle speculation, and complaints that the tools didn’t actually work. One developer of a popular Dark AI service eventually admitted to forum members that the product was a marketing exercise.

“At the end of the day, [CybercrimeAI] is nothing more than an unrestricted ChatGPT,” the developer wrote, before the project shut down. “Anyone on the Internet can use a well-known jailbreak technique and achieve the same, if not better, results.”

By late 2024, the researchers say, jailbreaks for mainstream models had become disposable. Most stop working in a week or less. Open-source models can be jailbroken indefinitely, but they are slow, resource-heavy, and frozen in time.

“Guardrails for AI systems are proving both useful and effective,” the authors conclude, in what they themselves call a counterintuitive finding for a critical paper.

Vibe coding is real. Vibe hacking, mostly, is not

The paper directly addresses Anthropic’s widely covered August 2025 report claiming Claude Code had been used to run a “vibe hacking” extortion campaign against 17 organizations. The Cambridge team’s data simply does not show that pattern in the wider underground.

In the forums they studied, AI coding assistants are being used the same way mainstream developers use them: as autocomplete and Stack Overflow replacements for already-skilled coders. Low-skill actors stick with pre-made scripts, because pre-made scripts work.

The researchers found that even hackers don’t trust their vibe-coded hacking tools. “AI-assisted coding is a double-edged sword. It will speed up development but also amplifies risks such as insecure code and supply chain vulnerabilities,” one user said in a forum monitored by the researchers.

Another warned about long-term skill loss: “It’s clear now that using AI for code causes a very fast negative degradation of your skills,” a hacker wrote in a forum. “If your goal is just to turn out SaaS scams and you don’t care about code quality/security/performance it can be viable to vibe code. (Also seems viable for phishing).”

This stands in stark contrast to alarmist forecasts from Europol, which warned in 2025 that fully autonomous AI could one day control criminal networks.

Where AI is actually helping criminals

The disruption, when it shows up, is at the bottom of the food chain.

SEO scammers are using LLMs to mass-produce blog spam to chase declining ad revenue. Romance fraudsters and eWhoring operators are bolting on voice cloning and image generation. Get-rich-quick hustlers are churning out AI-written eBooks to sell for $20 a pop.

The most disturbing market the researchers found involved nude image generation services. One operator advertised: “I’m able to make any girl nude with an AI… 1 Picture = $1, 10 Pictures = $8, 50 Pictures = $40, 90 Pictures $75.”

None of this is sophisticated cybercrime. It is the same low-margin, high-volume hustle that powered the spam industry for two decades, now running on slightly better tools.

The researchers’ closing observation is the most pointed one. The biggest way AI ends up disrupting the cybercrime ecosystem, they suggest, may not be by making criminals more capable. It may be by pushing laid-off developers from legitimate tech into the underground looking for work.

“In recent months anxiety over labour market disruption from these tools is increasing precipitously,” the paper reads. “This may end up being the most important way in which generative AI tools disrupt the cybercrime ecosystem—mass layoffs, economic downturn and a cool job market pushing legitimate, more skilled developers into the underground communities of get rich quick schemes, fraud, and cybercrime.”

Source: https://decrypt.co/366855/cybercrime-hacking-ai-study
