WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks
In this blog post, we delve into the emerging use of generative AI, including OpenAI’s ChatGPT, and the cybercrime tool WormGPT, in Business Email Compromise (BEC) attacks. Drawing on real cases from cybercrime forums, we examine the mechanics of these attacks, the inherent risks posed by AI-driven phishing emails, and the unique advantages of generative AI in facilitating them.
How Generative AI is Revolutionising BEC Attacks
The progression of artificial intelligence (AI) technologies, such as OpenAI’s ChatGPT, has introduced a new vector for business email compromise (BEC) attacks. ChatGPT, a sophisticated AI model, generates human-like text based on the input it receives. Cybercriminals can use such technology to automate the creation of highly convincing fake emails, personalised to the recipient, thus increasing the chances of success for the attack.
Consider the first image above, where a recent discussion thread unfolded on a cybercrime forum. In this exchange, a cybercriminal showcased the potential of harnessing generative AI to refine an email that could be used in a phishing or BEC attack. They recommended composing the email in one’s native language, translating it, and then feeding it into an interface like ChatGPT to enhance its sophistication and formality. This method introduces a stark implication: attackers, even those lacking fluency in a particular language, are now more capable than ever of fabricating persuasive emails for phishing or BEC attacks.
Moving on to the second image above, we’re now seeing an unsettling trend among cybercriminals on forums, evident in discussion threads offering “jailbreaks” for interfaces like ChatGPT. These “jailbreaks” are specialised prompts that are becoming increasingly common. They refer to carefully crafted inputs designed to manipulate interfaces like ChatGPT into generating output that might involve disclosing sensitive information, producing inappropriate content, or even executing harmful code. The proliferation of such practices underscores the rising challenges in maintaining AI security in the face of determined cybercriminals.
Finally, in the third image above, we see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes. Not only are they creating these custom modules, but they are also advertising them to fellow bad actors. This shows how cybersecurity is becoming more challenging due to the increasing complexity and adaptability of these activities in a world shaped by AI.
Uncovering WormGPT: A Cybercriminal’s Arsenal
Our team recently gained access to a tool known as “WormGPT” through a prominent online forum that’s often associated with cybercrime. This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities.
WormGPT is an AI module based on the GPT-J language model, which was developed in 2021. It boasts a range of features, including unlimited character support, chat memory retention, and code formatting capabilities.
As depicted above, WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data. However, the specific datasets utilised during the training process remain confidential, as decided by the tool’s author.
As you can see in the screenshot above, we conducted tests focusing on BEC attacks to comprehensively assess the potential dangers associated with WormGPT. In one experiment, we instructed WormGPT to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice.
The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.
In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations. This experiment underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals.
Benefits of Using Generative AI for BEC Attacks
So, what specific advantages does using generative AI confer for BEC attacks?
Exceptional Grammar: Generative AI can create emails with impeccable grammar, making them seem legitimate and reducing the likelihood of being flagged as suspicious.
Lowered Entry Threshold: The use of generative AI democratises the execution of sophisticated BEC attacks. Even attackers with limited skills can use this technology, making it an accessible tool for a broader spectrum of cybercriminals.
Ways of Safeguarding Against AI-Driven BEC Attacks
In conclusion, the growth of AI, while beneficial, also introduces new attack vectors. Implementing strong preventative measures is crucial. Here are some strategies you can employ:
BEC-Specific Training: Companies should develop extensive, regularly updated training programs aimed at countering BEC attacks, especially those enhanced by AI. Such programs should educate employees on the nature of BEC threats, how AI is used to augment them, and the tactics employed by attackers. This training should also be incorporated as a continuous aspect of employee professional development.
Enhanced Email Verification Measures: To fortify against AI-driven BEC attacks, organisations should enforce stringent email verification processes. These include implementing systems that automatically alert when emails originating outside the organisation impersonate internal executives or vendors, and using email systems that flag messages containing specific keywords linked to BEC attacks like “urgent”, “sensitive”, or “wire transfer”. Such measures ensure that potentially malicious emails are subjected to thorough examination before any action is taken.
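To make the second measure concrete, here is a minimal sketch of the kind of rule-based check described above. The internal domain, executive names, and keyword list are hypothetical placeholders, and in practice these checks would live in a secure email gateway or mail-flow rule rather than a standalone script.

```python
# Minimal illustration of the two checks described above: external senders
# impersonating internal executives, and keyword-based flagging of likely
# BEC lures. Names, domains, and keywords below are hypothetical examples.
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"                      # assumed internal domain
EXECUTIVES = {"jane doe", "john smith"}              # hypothetical directory
BEC_KEYWORDS = {"urgent", "sensitive", "wire transfer"}

def flag_suspicious(from_header: str, subject: str, body: str) -> list[str]:
    """Return a list of reasons this message should be held for review."""
    reasons = []
    display_name, address = parseaddr(from_header)
    domain = address.split("@")[-1].lower()

    # Check 1: display name matches an internal executive, but the message
    # actually originates from outside the organisation.
    if display_name.strip().lower() in EXECUTIVES and domain != INTERNAL_DOMAIN:
        reasons.append(f"external sender impersonating executive: {from_header}")

    # Check 2: subject or body contains keywords commonly seen in BEC lures.
    text = f"{subject} {body}".lower()
    hits = [kw for kw in BEC_KEYWORDS if kw in text]
    if hits:
        reasons.append(f"BEC keywords present: {', '.join(hits)}")

    return reasons

if __name__ == "__main__":
    print(flag_suspicious(
        '"Jane Doe" <jane.doe@lookalike-domain.net>',
        "Urgent wire transfer needed today",
        "Please process this invoice before end of day.",
    ))
```

Any message that returns one or more reasons would be held for manual review rather than delivered straight to the recipient.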
About the Author
Daniel Kelley is a reformed black hat computer hacker who collaborated with our team at SlashNext to research the latest threats and tactics employed by cybercriminals, particularly those involving BEC, phishing, smishing, social engineering, ransomware, and other attacks that exploit the human element.
“ChatGPT with no ethical boundaries”: WormGPT fuels AI-generated scams
Cybercriminals are increasingly using generative AI tools such as ChatGPT or WormGPT, a dedicated malware model, to send highly convincing fake emails to organizations, allowing them to bypass security measures.
A new wave of highly convincing fake emails is hitting unsuspecting employees. That’s according to British computer hacker Daniel Kelley, who has been researching WormGPT, an AI tool optimized for cybercrimes such as Business Email Compromise (BEC). Kelley is referring specifically to his observations on underground forums.
In addition, special prompts known as “jailbreaks” are exchanged to manipulate models such as ChatGPT to generate output that could include the disclosure of sensitive information or the execution of malicious code.
“This method introduces a stark implication: attackers, even those lacking fluency in a particular language, are now more capable than ever of fabricating persuasive emails for phishing or BEC attacks,” Kelley writes.
AI-generated emails are grammatically correct, which Kelley says makes them more likely to go undetected, and easy-to-use AI models have lowered the barrier to entry for attacks.
WormGPT is an AI model optimized for online fraud
WormGPT is an AI model specifically designed for criminal and malicious activity that is shared on popular online cybercrime forums. It is marketed as a “blackhat” alternative to official GPT models and advertised with privacy protection and “fast money”.
Similar to ChatGPT, WormGPT can create convincing and strategically sophisticated emails. This makes it a powerful tool for sophisticated phishing and BEC attacks. Kelley describes it as ChatGPT with “no ethical boundaries or restrictions”.
WormGPT is based on the open-source GPT-J model, which approaches the performance of GPT-3 and can perform textual tasks similar to ChatGPT, as well as write or format simple code. The WormGPT derivative is said to have been trained with additional malware datasets, although the author of the hacking tool does not disclose these.
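For context, GPT-J 6B is an openly published checkpoint that anyone can download. The sketch below, assuming the public Hugging Face model ID EleutherAI/gpt-j-6B and the transformers library, only illustrates how accessible the base model is, not how WormGPT itself was built or fine-tuned.

```python
# Minimal sketch: GPT-J 6B is a publicly released checkpoint that can be
# loaded with the Hugging Face transformers library. This shows plain text
# generation with the base model only; it says nothing about WormGPT's own
# fine-tuning or datasets, which remain undisclosed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6B"  # public GPT-J 6B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # large download (~24 GB in fp32)

prompt = "Write a short, polite reminder about an overdue library book."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```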

Kelley tested WormGPT on a phishing email designed to trick a customer service representative into paying an urgent bogus bill. The email purported to come from the CEO of the targeted company (see screenshot).
Kelley calls the results of the experiment “unsettling” and the fraudulent email generated “remarkably persuasive, but also strategically cunning.” Even inexperienced cybercriminals could pose a significant threat with a tool like WormGPT, Kelley writes.
The best protection against AI-based BEC attacks is prevention
As AI tools continue to proliferate, new attack vectors will emerge, making strong prevention measures essential, Kelley says. Organizations should develop BEC-specific training and implement enhanced email verification measures to protect against AI-based BEC attacks.
For example, Kelley cites alerts for emails from outside the organization that impersonate managers or suppliers. Other systems could flag messages with keywords such as “urgent,” “sensitive,” or “wire transfer,” which are associated with BEC attacks.
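One concrete way to surface supplier impersonation is to alert on lookalike sender domains that are close to, but not exactly, a trusted domain. The sketch below assumes a hypothetical list of known supplier domains; the similarity threshold and domain names are illustrative, not a production heuristic.

```python
# Hypothetical sketch of detecting supplier impersonation via lookalike
# domains: compare the sender's domain against known supplier domains and
# flag near-misses. Domains and threshold are illustrative assumptions.
from difflib import SequenceMatcher

KNOWN_SUPPLIER_DOMAINS = {"acme-parts.com", "northwind-logistics.com"}

def lookalike_supplier(sender_domain: str, threshold: float = 0.85) -> str | None:
    """Return the supplier domain this sender resembles, if it is a near-miss."""
    sender_domain = sender_domain.lower()
    if sender_domain in KNOWN_SUPPLIER_DOMAINS:
        return None  # exact match: the real supplier, nothing to flag
    for known in KNOWN_SUPPLIER_DOMAINS:
        if SequenceMatcher(None, sender_domain, known).ratio() >= threshold:
            return known  # close but not identical: possible impersonation
    return None

if __name__ == "__main__":
    print(lookalike_supplier("acme-parts.co"))    # flags resemblance to acme-parts.com
    print(lookalike_supplier("acme-parts.com"))   # None: legitimate sender
```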
- Cybercriminals are using generative AI applications such as OpenAI’s ChatGPT and WormGPT to create convincing fake emails for corporate fraud attacks.
- WormGPT is an AI model designed specifically for criminal and malicious activity and is promoted as a “blackhat” alternative to official GPT models. It has allegedly been fine-tuned using malware data.
- To protect against AI-based fraud attacks, organizations should develop BEC-specific training and implement enhanced email verification measures.
Meet the Brains Behind the Malware-Friendly AI Chat Service ‘WormGPT’
WormGPT, a private new chatbot service advertised as a way to use Artificial Intelligence (AI) to write malicious software without all the pesky prohibitions on such activity enforced by the likes of ChatGPT and Google Bard, has started adding restrictions of its own on how the service can be used. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the 23-year-old Portuguese programmer who created the project now says his service is slowly morphing into “a more controlled environment.”
Image: SlashNext.com.
The large language models (LLMs) made by ChatGPT parent OpenAI or Google or Microsoft all have various safety measures designed to prevent people from abusing them for nefarious purposes — such as creating malware or hate speech. In contrast, WormGPT has promoted itself as a new, uncensored LLM that was created specifically for cybercrime activities.
WormGPT was initially sold exclusively on HackForums, a sprawling, English-language community that has long featured a bustling marketplace for cybercrime tools and services. WormGPT licenses are sold for prices ranging from 500 to 5,000 Euro.
“Introducing my newest creation, ‘WormGPT,’” wrote “Last,” the handle chosen by the HackForums user who is selling the service. “This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”
In July, an AI-based security firm called SlashNext analyzed WormGPT and asked it to create a “business email compromise” (BEC) phishing lure that could be used to trick employees into paying a fake invoice.
“The results were unsettling,” SlashNext’s Daniel Kelley wrote. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.”
A review of Last’s posts on HackForums over the years shows this individual has extensive experience creating and using malicious software. In August 2022, Last posted a sales thread for “Arctic Stealer,” a data stealing trojan and keystroke logger that he sold there for many months.
“I’m very experienced with malwares,” Last wrote in a message to another HackForums user last year.
Last has also sold a modified version of the information stealer DCRat, as well as an obfuscation service marketed to malicious coders who sell their creations and wish to insulate them from being modified or copied by customers.
Shortly after joining the forum in early 2021, Last told several different HackForums users his name was Rafael and that he was from Portugal. HackForums has a feature that allows anyone willing to take the time to dig through a user’s postings to learn when and if that user was previously tied to another account.
That account tracing feature reveals that while Last has used many pseudonyms over the years, he originally used the nickname “ruiunashackers.” The first search result in Google for that unique nickname brings up a TikTok account with the same moniker, and that TikTok account says it is associated with an Instagram account for a Rafael Morais from Porto, a coastal city in northwest Portugal.
AN OPEN BOOK
Reached via Instagram and Telegram, Morais said he was happy to chat about WormGPT.
“You can ask me anything,” Morais said. “I’m an open book.”
Morais said he recently graduated from a polytechnic institute in Portugal, where he earned a degree in information technology. He said only about 30 to 35 percent of the work on WormGPT was his, and that other coders are contributing to the project. So far, he says, roughly 200 customers have paid to use the service.
“I don’t do this for money,” Morais explained. “It was basically a project I thought [was] interesting at the beginning and now I’m maintaining it just to help [the] community. We have updated a lot since the release, our model is now 5 or 6 times better in terms of learning and answer accuracy.”
WormGPT isn’t the only rogue ChatGPT clone advertised as friendly to malware writers and cybercriminals. According to SlashNext, one unsettling trend on the cybercrime forums is evident in discussion threads offering “jailbreaks” for interfaces like ChatGPT.
“These ‘jailbreaks’ are specialised prompts that are becoming increasingly common,” Kelley wrote. “They refer to carefully crafted inputs designed to manipulate interfaces like ChatGPT into generating output that might involve disclosing sensitive information, producing inappropriate content, or even executing harmful code. The proliferation of such practices underscores the rising challenges in maintaining AI security in the face of determined cybercriminals.”
Morais said they have been using the GPT-J 6B model since the service was launched, although he declined to discuss the source of the LLMs that power WormGPT. But he said the data set that informs WormGPT is enormous.
“Anyone that tests wormgpt can see that it has no difference from any other uncensored AI or even chatgpt with jailbreaks,” Morais explained. “The game changer is that our dataset [library] is big.”
Morais said he began working on computers at age 13, and soon started exploring security vulnerabilities and the possibility of making a living by finding and reporting them to software vendors.
“My story began in 2013 with some greyhat activies, never anything blackhat tho, mostly bugbounty,” he said. “In 2015, my love for coding started, learning c# and more .net programming languages. In 2017 I’ve started using many hacking forums because I have had some problems home (in terms of money) so I had to help my parents with money… started selling a few products (not blackhat yet) and in 2019 I started turning blackhat. Until a few months ago I was still selling blackhat products but now with wormgpt I see a bright future and have decided to start my transition into whitehat again.”
WormGPT sells licenses via a dedicated channel on Telegram, and the channel recently lamented that media coverage of WormGPT so far has painted the service in an unfairly negative light.
“We are uncensored, not blackhat!” the WormGPT channel announced at the end of July. “From the beginning, the media has portrayed us as a malicious LLM (Language Model), when all we did was use the name ‘blackhatgpt’ for our Telegram channel as a meme. We encourage researchers to test our tool and provide feedback to determine if it is as bad as the media is portraying it to the world.”
It turns out, when you advertise an online service for doing bad things, people tend to show up with the intention of doing bad things with it. WormGPT’s front man Last seems to have acknowledged this at the service’s initial launch, which included the disclaimer, “We are not responsible if you use this tool for doing bad stuff.”
But lately, Morais said, WormGPT has been forced to add certain guardrails of its own.
“We have prohibited some subjects on WormGPT itself,” Morais said. “Anything related to murders, drug traffic, kidnapping, child porn, ransomwares, financial crime. We are working on blocking BEC too, at the moment it is still possible but most of the times it will be incomplete because we already added some limitations. Our plan is to have WormGPT marked as an uncensored AI, not blackhat. In the last weeks we have been blocking some subjects from being discussed on WormGPT.”
Still, Last has continued to state on HackForums — and more recently on the far more serious cybercrime forum Exploit — that WormGPT will quite happily create malware capable of infecting a computer and going “fully undetectable” (FUD) by virtually all of the major antivirus makers (AVs).
“You can easily buy WormGPT and ask it for a Rust malware script and it will 99% sure be FUD against most AVs,” Last told a forum denizen in late July.
Asked to list some of the legitimate or what he called “white hat” uses for WormGPT, Morais said his service offers reliable code, unlimited characters, and accurate, quick answers.
“We used WormGPT to fix some issues on our website related to possible sql problems and exploits,” he explained. “You can use WormGPT to create firewalls, manage iptables, analyze network, code blockers, math, anything.”
Morais said he wants WormGPT to become a positive influence on the security community, not a destructive one, and that he’s actively trying to steer the project in that direction. The original HackForums thread pimping WormGPT as a malware writer’s best friend has since been deleted, and the service is now advertised as “WormGPT – Best GPT Alternative Without Limits — Privacy Focused.”
“We have a few researchers using our wormgpt for whitehat stuff, that’s our main focus now, turning wormgpt into a good thing to [the] community,” he said.
It’s unclear yet whether Last’s customers share that view.
The dark side of ChatGPT: Hackers tap WormGPT and FraudGPT for sophisticated attacks
A new “blackhat” breed of generative artificial intelligence (AI) tools such as WormGPT and FraudGPT are reshaping the security landscape. With their potential for malicious use, these sophisticated models could amplify the scale and effectiveness of cyberattacks.
WormGPT: The blackhat version of ChatGPT
WormGPT, based on the GPT-J language model developed in 2021, essentially functions as a blackhat counterpart to OpenAI’s ChatGPT, but without ethical boundaries or limitations, according to SlashNext’s recent research.
The tool was allegedly trained on a broad range of data sources, with a particular focus on malware-related data. However, the precise datasets used in the training process remain confidential. Its developers boast that it offers features such as unlimited character support, chat memory retention, and code formatting capabilities.
The SlashNext team gained access to WormGPT through a prominent online forum often associated with cybercrime and conducted tests focusing on business email compromise (BEC) attacks.
In one instance, WormGPT was instructed to generate an email aimed at coercing an unsuspecting account manager into paying a fraudulent invoice. “The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks,” the team wrote in a blog post.
FraudGPT goes on offense
FraudGPT is an AI tool similar to WormGPT that is marketed exclusively for offensive operations, such as crafting spear-phishing emails, creating cracking tools, and carding (a type of credit card fraud).
The Netenrich threat research team discovered this new AI bot, which is now being sold across various dark web marketplaces and on the Telegram platform. Besides crafting enticing and malicious emails, the team highlighted the tool’s capability to identify the most targeted services or sites, thereby enabling further exploitation of victims.
FraudGPT’s developers claim it has diverse features including writing malicious code, creating undetectable malware, phishing pages and hacking tools; finding non-VBV bins; and identifying leaks and vulnerabilities. They also touted it has more than 3000 confirmed sales or reviews.
WormGPT and FraudGPT are still in their infancy
While FraudGPT and WormGPT are similar to ChatGPT in terms of capabilities and technology, the key difference is these dark versions have no guardrails or limitations and are trained on stolen data, Forrester Senior Analyst Allie Mellen told SDxCentral.
“The only difference will be the goal of the particular groups using these platforms — some will use it for phishing/financial fraud and others will use it to attempt to gain access to networks via other means,” echoed HYAS CTO David Mitchell.
FraudGPT and WormGPT have been getting attention since July. Mellen pointed out that attackers are still in the early stages of using these tools, and that it is too early to tell whether there is real demand.
“It’s getting some interest from the cybercriminal community, of course, but it’s more experimental than it is mainstream,” she said. “We will see new variations pop up that make use of the data or that are trained on different types of data to make it even more useful for them.”
How and where hackers can harness malicious GPT tools
Mellen noted tools like FraudGPT and WormGPT will serve as a force multiplier and helper for attackers, but only when used correctly.
“Much like any other generative AI use case, it’s important to note that these tools can be a force multiplier for users, but ultimately only if they know how to use them and only if they’re willing to use them,” she said.
Here are some potential use cases Mellen listed:
- Enhanced phishing campaigns: One of the most basic uses is crafting phishing emails. Tools like WormGPT and FraudGPT present a new advantage in effective translation into various languages, which can ensure that a lure is not only understandable but also enticing for the target, leading to higher chances of them clicking malicious links. Moreover, these tools make it easier for attackers to automate phishing campaigns at scale, eliminating manual effort.
- Accelerated open source intelligence (OSINT) gathering: Typically, attackers invest significant time in OSINT, where they gather information about their targets, such as personal or family details, company information and past history, aiding their social engineering efforts. With the introduction of tools like WormGPT and FraudGPT, this research process is substantially hastened by simply inputting a series of questions and directives into the tool without having to go through the manual work.
- Automated malware generation: WormGPT and FraudGPT are proving useful in generating code, and this capability can be extended to malware creation. Especially with platforms like FraudGPT, which might have access to previously hacked information on the dark web, attackers can simplify and expedite the malware creation process. Even individuals with limited technical expertise can prompt these AI tools to generate malware, thereby lowering the entry barrier into cybercrime.
“Historically, these attacks could often be detected via security solutions like anti-phishing and protective DNS [domain name system] platforms,” HYAS’ Mitchell said in a statement. “With the evolution happening within these dark GPTs [Generative Pre-trained Transformers], organizations will need to be extra vigilant with their email & SMS messages because they provide an ability for a non-native English speaker to generate well-formed text as a lure.”
Threat landscape impacts
With all these emerging use cases, there will be more targeted attacks, Mellen noted. Phishing attempts will become more sophisticated and the rapid generation of malicious code by AI will likely result in more duplicate malware.
“I’d expect that we’ll see more consistency with some of the malware that’s in use, which will cause some issues because it has the potential to even more so obfuscate nation-state activity as people copy and use whatever it is that they can find, whatever it is that ChatGPT gets trained on,” she said.
“So there’s a lot of potential that we’ll see an increase in attacker activity, especially on the cybercriminal side, as people who perhaps are not as sophisticated on the technology side, or previously thought that being cybercriminals [was not] accessible to them, now have more opportunity there, unfortunately,” she added.
Don’t panic yet, but stay informed on genAI
Despite these daunting impacts, it’s important not to panic, Mellen said, adding that much like any technological advancement, GPT tools can be a double-edged sword.
“It can be used for really positive things. It can also be used for really awful things. So it’s another thing that CISO needs to consider and be concerned about,” she said. “But at the end of the day, it’s just another tool and so don’t go too crazy trying to change everything that you do, just make sure that the tools that you’re using are protecting you as best you can and keep up to date with the current landscape.”
Mellen recommended organizations pay attention to and keep informed of the generative AI developments and what attackers are doing with it. “Understanding as much as we can now and keeping up to date on the actions that they’re taking is pivotal.”
WormGPT Is a ChatGPT Alternative With ‘No Ethical Boundaries or Limitations’
The developer of WormGPT is selling access to the chatbot, which can help hackers create malware and phishing attacks, according to email security provider SlashNext.
“The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks,” SlashNext said.
Indeed, the bot crafted a message using professional language that urged the intended victim to wire some money. WormGPT also wrote the email without any spelling or grammar mistakes—red flags that can indicate a phishing email attack.
“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations,” SlashNext said. “This experiment underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals.”
Fortunately, WormGPT isn’t cheap. The developer is selling access to the bot for 60 Euros per month or 550 Euros per year. One buyer has also complained that the program is “not worth any dime,” citing weak performance. Still, WormGPT is an ominous sign about how generative AI programs could fuel cybercrime, especially as the programs mature.
WormGPT and its Malicious Nature
- Violation of Laws: Using WormGPT for cybercriminal activities goes against laws governing hacking, data theft, and other illegal activities.
- Creation of Malware and Phishing Attacks: WormGPT can be used to develop malware and phishing attacks that harm individuals and organizations.
- Sophisticated Cyberattacks: WormGPT empowers cybercriminals to launch advanced cyberattacks, causing substantial damage to computer systems and networks.
- Facilitation of Illegal Activities: WormGPT enables cybercriminals to carry out illegal activities with ease, putting innocent individuals and organizations at risk.
- Legal Consequences: Individuals engaging in cybercriminal activities using WormGPT may face legal repercussions and potential criminal charges.