WormGPT and FraudGPT: The Dark Side of AI-Powered Cybercrime
Since ChatGPT’s launch in November 2022, artificial intelligence has shifted from being a tech novelty to a global productivity engine. ChatGPT alone crossed 100 million users in just two months, and its website saw over 1.6 billion visits in June 2023. From students and writers to developers and business owners, AI is being used for a wide range of tasks—automating workflows, writing emails, building products, and creating side hustles. But the same tools that help drive growth are now being weaponized by cybercriminals.
Malicious actors are actively building and distributing AI clones that bypass safety filters and provide assistance for phishing, malware creation, identity theft, and other illegal operations. Among these, WormGPT and FraudGPT have emerged as leading examples of how generative AI is being abused in the underground economy.
Key Takeaways
- WormGPT and FraudGPT are AI chatbots specifically designed for cybercrime, created to bypass ethical safeguards in models like ChatGPT.
- These tools automate phishing, business email compromise (BEC) attacks, and malware development, and even provide cybercrime tutorials, making cyberattacks more accessible.
- WormGPT is now a category label, often used to describe any jailbroken large language model (LLM) used for unethical purposes.
- Threat actors are jailbreaking mainstream AI tools like xAI’s Grok and Mistral’s Mixtral, transforming them into crime-friendly tools.
- Businesses must respond proactively by educating teams, investing in email authentication, and tracking new AI threat vectors.
WormGPT: ChatGPT’s Blackhat Counterpart
WormGPT was first discovered by cybersecurity firm SlashNext in July 2023. Built on the GPT-J open-source model released in 2021, this tool is specifically designed for illegal use cases. It gained traction on dark web forums and Telegram groups, marketed as the “uncensored, unethical twin” of ChatGPT. Unlike mainstream AI, it lacks content filters, meaning it will gladly generate phishing emails, fake executive messages, malicious code, or social engineering scripts.
The tool’s creators claim that WormGPT has been trained on datasets rich in malware code, vulnerability descriptions, and exploitation patterns. That makes it more useful for attackers trying to run Business Email Compromise (BEC) scams. BEC involves impersonating high-ranking company officials and tricking employees into transferring funds or leaking confidential data. WormGPT automates this—making the language more convincing, context-aware, and typo-free.
Its second version, which appeared within a month of launch, includes:
- Unlimited character generation
- Syntax-accurate code outputs
- Session history saving
- Customizable tone and writing styles for different attack formats
The danger lies in its simplicity. Even a non-technical individual can launch a convincing attack with minimal inputs.
WormGPT Variants: xzin0vich-WormGPT and keanu-WormGPT
As the original WormGPT gained popularity across underground forums and Telegram channels, threat actors began creating spin-offs that pushed its capabilities further. Two notable variants—xzin0vich-WormGPT and keanu-WormGPT—have since emerged and are now being sold or shared in restricted communities like BreachForums and on dark web marketplaces.
xzin0vich-WormGPT: Tailored for Offensive Security and BEC Campaigns
The name xzin0vich-WormGPT likely references a pseudonym associated with a known forum contributor or AI script modifier active in exploit development circles. This version of WormGPT is fine-tuned specifically for Business Email Compromise (BEC) and targeted credential phishing. What sets it apart:
- Enhanced language mimicry: xzin0vich-WormGPT is trained to mimic the writing styles of actual executives and customer service reps, drawing on scraped email signatures, jargon, and tone.
- Multi-language output: It supports English, Spanish, Russian, and Arabic, allowing actors to launch multilingual phishing campaigns with region-specific accuracy.
- Dataset enrichment: It’s believed to be fine-tuned on real-world breach data, leaked customer communications, and corporate templates.
- Prompt shielding: The model is wrapped with an intermediate query layer that cleans up suspicious language, helping it evade basic AI output detection systems.
This variant is often sold with a plugin kit that includes ready-made prompts, phishing domain templates, and guides for pairing the model with SMS-based lures or fake login pages.
keanu-WormGPT: Marketed as the ‘Creative’ Criminal’s Tool
The keanu-WormGPT variant takes its name from actor Keanu Reeves, possibly as a branding gimmick to imply smoothness, intelligence, or rebellion (mirroring the character of Neo from The Matrix). Its creators market it as a “more philosophical, unrestricted” AI capable of handling a wider variety of criminal prompts, including those that need plausible deniability or psychological manipulation.
Key capabilities of keanu-WormGPT include:
- Story-driven social engineering: It can create long-form scam narratives designed to manipulate victims emotionally, such as fake inheritance stories, romantic scams, or NGO frauds.
- Malware walkthroughs: While some setups prevent it from generating malicious code directly, it provides step-by-step tutorials for using and customizing off-the-shelf Remote Access Trojans (RATs), keyloggers, and crypters.
- Auto-prompt chaining: It can break down complex prompts into sub-tasks, mimicking human logic and decision-making. For instance, if a user asks for a phishing campaign targeting Amazon users, it first suggests data points to collect, then drafts emails, landing pages, and call scripts.
- Custom ethics toggles: This model includes a setting interface—usually in jailbroken deployment GUIs—that allows the user to adjust levels of “creative constraint,” essentially determining how far the model should go in breaking standard boundaries.
While keanu-WormGPT is less popular among traditional malware authors, it’s favored by scammers, propagandists, and dark web marketers who need nuanced, believable AI-generated content with minimal traceability.
Why These Variants Matter
What makes xzin0vich-WormGPT and keanu-WormGPT so dangerous is not just their capabilities—it’s their specialization. Traditional malicious models like WormGPT were broad and general. These two reflect a trend toward AI tool verticalization—where cybercrime groups tailor models for specific roles within their operations.
They’re being marketed not just as tools, but as products:
- Version numbers
- Customer support (via encrypted messaging apps)
- Guides and video walkthroughs
- Regular updates to bypass detection tools and filters
This approach mimics SaaS startups and makes these tools easy to adopt—even by those with limited technical knowledge.
Why ‘WormGPT’ Is Now a Generic Term
The term “WormGPT” is no longer used just for one specific tool. As Dave Tyson, CIO at Apollo Information Systems, explained, “WormGPT has become a generic reference—just like people say ‘Kleenex’ for tissues.”
In cybercrime communities, WormGPT is now shorthand for:
- Jailbroken AI tools
- Custom-trained malicious LLMs
- Unfiltered variants of mainstream models
- Dark web “AI-as-a-service” offerings
So even if the original WormGPT code is outdated or discontinued, its label now represents a category of tools used to run scalable cybercrime operations.
FraudGPT: Cybercrime as a Subscription Service
FraudGPT surfaced shortly after WormGPT. Unlike open-source models, FraudGPT operates under a subscription model—$200 per month or $1,700 annually. This isn’t just a one-time tool; it’s positioned as a full-fledged service for hackers, scam operators, and aspiring cybercriminals.
According to reports from Netenrich, FraudGPT allows users to:
- Generate targeted phishing emails
- Create malware and keyloggers
- Discover exploitable vulnerabilities
- Get advice on using stolen data
- Identify the best websites for carrying out card fraud
Its user interface is designed for ease, with minimal tech skill required. FraudGPT eliminates the learning curve that once deterred amateur hackers. Anyone with money and intent can now use AI to create criminal operations—making cybercrime more scalable and less dependent on technical knowledge.
Why These Tools Matter More Than You Think
The threat here isn’t just about phishing emails or malware. It’s about democratizing cybercrime. These AI tools drastically lower the barrier to entry. In the past, hackers needed to understand coding, security infrastructure, and scripting languages. Now, they simply type a request—just like using ChatGPT—and receive ready-to-use malicious outputs.
Cybersecurity expert Daniel Kelley, a former black-hat hacker, warned that “as public GPT tools continue to add safeguards, criminals will continue building alternatives without such guardrails.” This is no longer hypothetical. Within just two months of WormGPT’s appearance, at least three similar models were spotted—EvilGPT, XXXGPT, and WolfGPT—and more are likely in circulation.
It’s not just about stealing data anymore. These models support:
- Targeted political misinformation campaigns
- Deepfake content creation
- Real-time scam automation via chatbots
- Exploit development that bypasses standard antivirus detection
Jailbreaking LLMs: Turning Mainstream AI into Criminal Tools
A disturbing new trend is gaining ground: threat actors jailbreaking legitimate LLMs. This doesn’t mean they’re writing new models from scratch. Instead, they’re using smart prompt engineering, fine-tuning, and local deployment setups to rewire existing tools for unethical use.
For example:
- xAI’s Grok and Mistral’s Mixtral have been co-opted to create new WormGPT-style variants.
- Forums like BreachForums now list spin-offs like xzin0vich-WormGPT and keanu-WormGPT, often advertised as “no limits” models.
- Jailbreak methods include prompt obfuscation, historical framing, and hidden context injections to bypass filters.
These clones are distributed via private chat services, often packaged as bots or user-facing apps. The model runs behind the scenes, ensuring the attacker can serve customers without exposing their AI backend.
How Criminals Use These Tools Without Getting Caught
Attackers rarely download models directly. Instead, they use tools like:
- LMStudio for local inference
- FlowGPT for chaining models with prompts
- Isolated front-end bots that protect the identity of the model
This indirect model usage creates a buffer between the attacker and the AI model, making attribution harder and takedowns more complex.
Prompts are disguised as:
- Academic research (“write a fictional malware campaign to analyze attack vectors”)
- Historical exploration (“how would malware have worked in 1995?”)
- Code improvement suggestions (“optimize this Python code snippet”)
This tactic is effective. AI tools see nothing inherently harmful in the phrasing, but the intent behind the query is malicious.
What Can Security Teams and Businesses Do?
Businesses planning their budgets for 2025 must think beyond firewalls and endpoint protection. AI-driven threats are agile, scalable, and easily hidden in everyday operations.
Here’s what companies should start doing:
- Conduct phishing simulations tailored to AI-generated scams
- Implement strong email authentication protocols like SPF, DKIM, and DMARC (a quick verification sketch follows this list)
- Monitor for jailbroken AI usage in internal systems
- Educate employees about AI-assisted social engineering
- Invest in behavior-based detection, not just signature-based tools
- Track underground forums and threat intelligence for emerging AI tools
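For teams starting on the email-authentication item above, the short Python sketch below checks whether a domain already publishes SPF and DMARC records. It is a minimal illustration under stated assumptions, not a deployment guide: it relies on the third-party dnspython package, and example.com is a placeholder for your own domain.

```python
# Minimal defensive sketch: check whether a domain publishes SPF and DMARC
# records (two of the email-authentication controls listed above).
# Assumes the third-party dnspython package is installed: pip install dnspython
import dns.resolver

def txt_records(name: str) -> list[str]:
    """Return all TXT strings published at `name`, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

def check_email_auth(domain: str) -> None:
    # SPF lives in a TXT record on the domain itself;
    # DMARC lives in a TXT record at _dmarc.<domain>.
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print(f"{domain}: SPF {'found' if spf else 'MISSING'}, "
          f"DMARC {'found' if dmarc else 'MISSING'}")
    for record in spf + dmarc:
        print("  ", record)

if __name__ == "__main__":
    check_email_auth("example.com")  # placeholder: replace with your own domain
```

DKIM is omitted here because its selector names vary by mail provider; verifying it requires knowing which selectors your senders use. A missing or permissive DMARC policy is exactly what makes AI-generated BEC emails easy to spoof, so this check is a sensible first audit step.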
To Sum Up
AI isn't the problem; unethical use is. AI has immense power to improve workflows, generate ideas, and solve complex problems. But that power can be flipped. Tools like WormGPT and FraudGPT are proof that even the best technologies can be weaponized when ethics are stripped away.
Security leaders, developers, and businesses need to stay alert. Jailbroken LLMs and malicious AI services are no longer rare. They are here, evolving fast, and becoming harder to trace. If we don’t treat AI security as part of core cybersecurity policy, we risk allowing attackers to always stay one step ahead.
