Discussions of Dangerous AI Technologies on the Dark Web Jump by More Than 200%

Researchers at threat intelligence firm Kela have observed a surge in dark web discussions of malicious AI tools, with mentions increasing by more than 200%.

In a stark warning to the cybersecurity community, a new report by threat intelligence firm Kela has revealed a seismic shift in the landscape of cyber threats. The report, titled "2025 AI Threat Report: How Cybercriminals are Weaponizing AI Technology", highlights the increasing use of artificial intelligence (AI) tools by threat actors for malicious activities.

One such malicious AI tool is WormGPT, a variant of the GPT-J Large Language Model (LLM) that has been used for activities like business email compromise (BEC) and phishing. However, the report suggests that this is just the tip of the iceberg, as threat actors are increasingly using LLM-based Generative AI (GenAI) tools to automate various aspects of cybercrime.

These tools are being used to accelerate the attack cycle by automating vulnerability scanning and analysis (penetration testing). They are also being used to optimize identity fraud, including deepfake tools that bypass verification checks. The report further reveals that cybercriminals are using LLM-based GenAI tools to enhance malware and exploit development, including infostealers and ransomware.

The report also notes that these malicious tools, which Kela calls 'dark AI', have evolved into AI-as-a-Service (AIaaS) offerings that sell subscription-based AI tools to cybercriminals. This has lowered entry barriers, enabling scalable attacks such as phishing, deepfakes, and fraud scams. In fact, the report records a 219% increase in mentions of malicious AI tools and tactics.

Notably, the report also uncovers instances of cybercriminals scamming their peers with fake versions of AIaaS tools. For example, in 2024, organized crime groups exploited AI tools for scalable fraud operations, including deepfakes and voice cloning, to enhance phishing attacks tailored with cultural and personal precision across Europe.

The study also shows a 52% increase in discussions of jailbreaking legitimate AI tools like ChatGPT. These 'dark AI' tools can be either jailbroken versions of publicly available GenAI tools or customized open-source LLMs.

Threat actors are also using LLM-based GenAI tools to automate and sharpen phishing and social engineering campaigns, including through deepfake audio and video. This makes it increasingly difficult for victims to distinguish genuine communications from malicious ones.

In response to this growing threat, the report urges organizations to adopt AI-driven defenses against AI-powered cybercrime. As the use of AI in cybercrime continues to evolve, the cybersecurity community must adapt alongside it to ensure the safety and security of individuals and organizations alike.
