AI tools are used for spam, not cybercrime: study

A study led by researchers from Cambridge, Edinburgh and Strathclyde has found that artificial intelligence (AI) is not turning criminals into "supercharged hackers," as is often assumed. Instead, the researchers argue in a new academic paper that cybercriminals primarily use AI tools such as ChatGPT to write spam and generate nude images.

The study, titled "Stand-Alone Complex or Vibercrime?", was published on arXiv in March by Jack Hughes, Ben Collier, and Daniel Thomas. In it, the researchers investigate how the "cybercrime underground" is actually deploying artificial intelligence, as compared with the claims of cybersecurity vendors.

“We present here one of the first attempts at a mixed-methods empirical study of early patterns of GenAI adoption in the cybercrime underground,” the researchers said.

The research team examined 97,895 forum threads published after the launch of ChatGPT in November 2022, drawn from the Cambridge Cybercrime Centre's CrimeBB dataset, which focuses on underground and dark web forums. The team applied topic-modeling techniques and closely analyzed 3,203 of these threads, complementing this with ethnographic engagement with the scene, in other words, interacting directly with the forum communities.
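The study's exact modeling pipeline and parameters are not described in this article, but the general approach of topic modeling a thread corpus can be sketched with scikit-learn's LDA implementation. The thread texts below are hypothetical stand-ins, not data from CrimeBB:

```python
# Minimal sketch of a topic-modeling pipeline over forum threads,
# using scikit-learn's LDA on a toy corpus (illustrative only; the
# study's actual method, corpus, and parameters may differ).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical stand-ins for thread titles from an underground-forum dump.
threads = [
    "free access to jailbroken llm please",
    "chatgpt wrote my spam email template",
    "selling ai nude image generation service",
    "vibe coding a scraper with an ai assistant",
    "wormgpt does not work asking for refund",
    "using llm as autocomplete instead of stack overflow",
]

# Build a document-term matrix of word counts.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(threads)

# Fit a small LDA model; n_components=2 is an arbitrary choice for toy data.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(doc_term)

# Each row is a per-thread topic distribution summing to 1, which is what
# lets analysts group threads by their dominant topic for closer reading.
print(doc_topics.shape)
```

In practice, researchers would then inspect the top words per topic and hand-label clusters of threads (e.g., "requests for free access" vs. "coding help") before the qualitative, ethnographic stage.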

They found that 97.3% of the threads in the sample (95,292 out of 97,895) were classified as "other," meaning they did not involve using AI for crime, and only 1.9% involved vibe coding tools.

Additionally, posts about "dark AI" products, typically advertised as jailbroken LLMs, mostly consisted of users requesting free access or complaining that the tools did not work. The developer of one well-known dark AI service eventually admitted that the tool was a marketing ploy.

“Dark AI has been the subject of a large amount of cybersecurity coverage in the press, along with threat marketing by cybersecurity companies. Additionally, this tooling, which includes various tools such as WormGPT and AI products for penetration testing such as the open source project (now a commercial product) WhiteRabbitNeo, has been the subject of a large number of requests for free access on forums,” the study states. “There is little discussion in our dataset about how (or whether) these tools can be useful, such as automating criminal scripting elements of cybercrime, learning them, or assisting in the development of malware or code.”

Another part of the investigation addresses Anthropic's August 2025 report, which alleged that Claude Code was used to carry out a "vibe hacking" extortion campaign against 17 organizations across healthcare, emergency services, government, and religious institutions. But the Cambridge team's data does not show that pattern across the wider underground.

In the forums surveyed, AI coding assistants were used much as mainstream developers use them: as autocomplete tools and as an alternative to Stack Overflow for already proficient programmers. The researchers stated that lower-skilled users tend to rely on off-the-shelf scripts instead, for the sake of efficiency.

“AI-assisted coding is a double-edged sword. It speeds up development, but it also amplifies risks such as insecure code and supply chain vulnerabilities,” one user said on a forum surveyed by the researchers.

“The use of AI…is not much different from the way the hacker community coded previously, i.e. criminal users reuse code written by others with little modification, and hacker forum users with a genuine interest in learning use it primarily for non-criminal software engineering projects (supported by these discussions, the positive story about using LLMs for coding is mostly related to people adopting them for legitimate day jobs or hobby projects),” the researchers explained.

Is AI actually helping criminals?

Another finding from the study shows that scammers are using LLMs to send spam and chase dwindling ad revenue. In particular, romance and "eWhoring" scammers use AI voice cloning and image generation to scam victims out of money.

The most alarming market they discovered was for nude image generation services. One operator even advertised, “With AI, you can make any girl nude…1 photo = $1, 10 = $8, 50 = $40, 90 = $75.”

But as the researchers note, none of this is sophisticated cybercrime. It is the same low-margin, high-volume work the spam industry has always done, only now automated with AI tools.

Finally, the researchers said that AI is not being used to cause widespread disruption in cybercrime. Rather, it merely “replaces existing methods such as pasting code, checking for errors, and referencing cheat sheets for common aspects of software development primarily related to cybercrime.”

The full study is available on arXiv.
