
“ChatGPT with no ethical boundaries”: WormGPT fuels AI-generated scams

 

Cybercriminals are increasingly using generative AI tools such as ChatGPT or WormGPT, a dedicated malware model, to send highly convincing fake emails to organizations, allowing them to bypass security measures.

A new wave of highly convincing fake emails is hitting unsuspecting employees. That’s according to British computer hacker Daniel Kelley, who has been researching WormGPT, an AI tool optimized for cybercrimes such as Business Email Compromise (BEC). Kelley bases his warning on observations from underground forums.

In addition, special prompts known as “jailbreaks” are traded on these forums, designed to manipulate models such as ChatGPT into generating output that could include the disclosure of sensitive information or the execution of malicious code.

“This method introduces a stark implication: attackers, even those lacking fluency in a particular language, are now more capable than ever of fabricating persuasive emails for phishing or BEC attacks,” Kelley writes.

AI-generated emails are grammatically correct, which Kelley says makes them more likely to go undetected, and easy-to-use AI models have lowered the barrier to entry for attacks.

WormGPT is an AI model optimized for online fraud

WormGPT is an AI model specifically designed for criminal and malicious activity that is shared on popular online cybercrime forums. It is marketed as a “blackhat” alternative to official GPT models and advertised with promises of privacy protection and “fast money”.

Like ChatGPT, WormGPT can create convincing, strategically crafted emails, making it a powerful tool for sophisticated phishing and BEC attacks. Kelley describes it as ChatGPT with “no ethical boundaries or restrictions”.

WormGPT is based on the open-source GPT-J model, which approaches the performance of GPT-3 and can perform textual tasks similar to ChatGPT, as well as write or format simple code. The WormGPT derivative is said to have been trained with additional malware datasets, although the author of the hacking tool does not disclose these.
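GPT-J itself is a freely available research model, which is part of why derivatives like this are feasible at all. Purely for context, a minimal sketch of loading the public GPT-J checkpoint for a benign text-generation task with the Hugging Face transformers library might look like this (the model ID EleutherAI/gpt-j-6b is the public release; the prompt is an arbitrary illustration):

```python
# Minimal sketch: running the public GPT-J checkpoint (the open-source
# base model WormGPT is reportedly derived from) on a benign task.
# Requires the transformers and torch packages; the 6B-parameter model
# needs roughly 24 GB of memory in full precision.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")

prompt = "Explain in one sentence why employees should verify invoices:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```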

A phishing email generated with WormGPT.

Kelley tested WormGPT on a phishing email designed to trick a customer service representative into paying an urgent bogus bill. The sender of the email was the CEO of the targeted company (see screenshot).

Kelley calls the results of the experiment “unsettling” and the fraudulent email generated “remarkably persuasive, but also strategically cunning.” Even inexperienced cybercriminals could pose a significant threat with a tool like WormGPT, Kelley writes.

The best protection against AI-based BEC attacks is prevention

As AI tools continue to proliferate, new attack vectors will emerge, making strong prevention measures essential, Kelley says. Organizations should develop BEC-specific training and implement enhanced email verification measures to protect against AI-based BEC attacks.

For example, Kelley cites alerts for emails from outside the organization that impersonate managers or suppliers. Other systems could flag messages with keywords such as “urgent,” “sensitive,” or “wire transfer,” which are associated with BEC attacks.
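As a rough illustration of what such checks could look like, here is a minimal sketch in Python; the keyword list follows Kelley’s examples, while the company domain and executive names are hypothetical placeholders:

```python
# Minimal sketch of the BEC flagging rules described above: flag external
# mail whose display name matches an executive, and flag messages that
# contain keywords associated with BEC attacks. The domain, names, and
# keyword handling here are illustrative, not a production filter.

BEC_KEYWORDS = {"urgent", "sensitive", "wire transfer"}
INTERNAL_DOMAIN = "example.com"               # hypothetical company domain
EXECUTIVE_NAMES = {"jane doe", "john smith"}  # hypothetical executives

def flag_email(sender_address: str, display_name: str, body: str) -> list[str]:
    """Return a list of warning labels for a single email."""
    warnings = []
    is_external = not sender_address.lower().endswith("@" + INTERNAL_DOMAIN)

    # An external sender reusing an executive's display name is a classic BEC sign.
    if is_external and display_name.lower() in EXECUTIVE_NAMES:
        warnings.append("external sender impersonating an executive")

    # Keyword hits are weak signals alone, stronger combined with other checks.
    hits = [kw for kw in BEC_KEYWORDS if kw in body.lower()]
    if hits:
        warnings.append("BEC keywords present: " + ", ".join(hits))

    return warnings

print(flag_email("ceo@evil.example", "Jane Doe",
                 "Please handle this urgent wire transfer today."))
```

In practice, rules like these would run inside an email gateway and be combined with sender-authentication signals such as SPF, DKIM, and DMARC.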

Summary
  • Cybercriminals are using generative AI applications such as OpenAI’s ChatGPT and WormGPT to create convincing fake emails for corporate fraud attacks.
  • WormGPT is an AI model designed specifically for criminal and malicious activity and is promoted as a “blackhat” alternative to official GPT models. It has allegedly been fine-tuned using malware data.
  • To protect against AI-based fraud attacks, organizations should develop BEC-specific training and implement enhanced email verification measures.

source

 


Meet the Brains Behind the Malware-Friendly AI Chat Service ‘WormGPT’

WormGPT, a private new chatbot service advertised as a way to use Artificial Intelligence (AI) to write malicious software without all the pesky prohibitions on such activity enforced by the likes of ChatGPT and Google Bard, has started adding restrictions of its own on how the service can be used. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the 23-year-old Portuguese programmer who created the project now says his service is slowly morphing into “a more controlled environment.”

Image: SlashNext.com.

The large language models (LLMs) made by ChatGPT parent OpenAI or Google or Microsoft all have various safety measures designed to prevent people from abusing them for nefarious purposes — such as creating malware or hate speech. In contrast, WormGPT has promoted itself as a new, uncensored LLM that was created specifically for cybercrime activities.

WormGPT was initially sold exclusively on HackForums, a sprawling, English-language community that has long featured a bustling marketplace for cybercrime tools and services. WormGPT licenses are sold for prices ranging from 500 to 5,000 Euro.

“Introducing my newest creation, ‘WormGPT,’” wrote “Last,” the handle chosen by the HackForums user who is selling the service. “This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”

WormGPT’s core developer and frontman “Last” promoting the service on HackForums. Image: SlashNext.

In July, an AI-based security firm called SlashNext analyzed WormGPT and asked it to create a “business email compromise” (BEC) phishing lure that could be used to trick employees into paying a fake invoice.

“The results were unsettling,” SlashNext’s Daniel Kelley wrote. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.”

SlashNext asked WormGPT to compose this BEC phishing email. Image: SlashNext.

A review of Last’s posts on HackForums over the years shows this individual has extensive experience creating and using malicious software. In August 2022, Last posted a sales thread for “Arctic Stealer,” a data stealing trojan and keystroke logger that he sold there for many months.

“I’m very experienced with malwares,” Last wrote in a message to another HackForums user last year.

Last has also sold a modified version of the information stealer DCRat, as well as an obfuscation service marketed to malicious coders who sell their creations and wish to insulate them from being modified or copied by customers.

Shortly after joining the forum in early 2021, Last told several different HackForums users his name was Rafael and that he was from Portugal. HackForums has a feature that allows anyone willing to take the time to dig through a user’s postings to learn when and if that user was previously tied to another account.

That account tracing feature reveals that while Last has used many pseudonyms over the years, he originally used the nickname “ruiunashackers.” The first search result in Google for that unique nickname brings up a TikTok account with the same moniker, and that TikTok account says it is associated with an Instagram account for a Rafael Morais from Porto, a coastal city in northwest Portugal.

AN OPEN BOOK

Reached via Instagram and Telegram, Morais said he was happy to chat about WormGPT.

“You can ask me anything,” Morais said. “I’m an open book.”

Morais said he recently graduated from a polytechnic institute in Portugal, where he earned a degree in information technology. He said only about 30 to 35 percent of the work on WormGPT was his, and that other coders are contributing to the project. So far, he says, roughly 200 customers have paid to use the service.

“I don’t do this for money,” Morais explained. “It was basically a project I thought [was] interesting at the beginning and now I’m maintaining it just to help [the] community. We have updated a lot since the release, our model is now 5 or 6 times better in terms of learning and answer accuracy.”

WormGPT isn’t the only rogue ChatGPT clone advertised as friendly to malware writers and cybercriminals. According to SlashNext, one unsettling trend on the cybercrime forums is evident in discussion threads offering “jailbreaks” for interfaces like ChatGPT.

“These ‘jailbreaks’ are specialised prompts that are becoming increasingly common,” Kelley wrote. “They refer to carefully crafted inputs designed to manipulate interfaces like ChatGPT into generating output that might involve disclosing sensitive information, producing inappropriate content, or even executing harmful code. The proliferation of such practices underscores the rising challenges in maintaining AI security in the face of determined cybercriminals.”

Morais said they have been using the GPT-J 6B model since the service was launched, although he declined to discuss the source of the LLMs that power WormGPT. But he said the data set that informs WormGPT is enormous.

“Anyone that tests wormgpt can see that it has no difference from any other uncensored AI or even chatgpt with jailbreaks,” Morais explained. “The game changer is that our dataset [library] is big.”

Morais said he began working on computers at age 13, and soon started exploring security vulnerabilities and the possibility of making a living by finding and reporting them to software vendors.

“My story began in 2013 with some greyhat activies, never anything blackhat tho, mostly bugbounty,” he said. “In 2015, my love for coding started, learning c# and more .net programming languages. In 2017 I’ve started using many hacking forums because I have had some problems home (in terms of money) so I had to help my parents with money… started selling a few products (not blackhat yet) and in 2019 I started turning blackhat. Until a few months ago I was still selling blackhat products but now with wormgpt I see a bright future and have decided to start my transition into whitehat again.”

WormGPT sells licenses via a dedicated channel on Telegram, and the channel recently lamented that media coverage of WormGPT so far has painted the service in an unfairly negative light.

“We are uncensored, not blackhat!” the WormGPT channel announced at the end of July. “From the beginning, the media has portrayed us as a malicious LLM (Language Model), when all we did was use the name ‘blackhatgpt’ for our Telegram channel as a meme. We encourage researchers to test our tool and provide feedback to determine if it is as bad as the media is portraying it to the world.”

It turns out, when you advertise an online service for doing bad things, people tend to show up with the intention of doing bad things with it. WormGPT’s front man Last seems to have acknowledged this at the service’s initial launch, which included the disclaimer, “We are not responsible if you use this tool for doing bad stuff.”

But lately, Morais said, WormGPT has been forced to add certain guardrails of its own.

“We have prohibited some subjects on WormGPT itself,” Morais said. “Anything related to murders, drug traffic, kidnapping, child porn, ransomwares, financial crime. We are working on blocking BEC too, at the moment it is still possible but most of the times it will be incomplete because we already added some limitations. Our plan is to have WormGPT marked as an uncensored AI, not blackhat. In the last weeks we have been blocking some subjects from being discussed on WormGPT.”
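For illustration, the crudest form of such a guardrail is a keyword filter applied to prompts before they ever reach the model; the sketch below paraphrases Morais’s topic list, and its obvious brittleness hints at why he concedes the BEC block is still incomplete:

```python
# Minimal sketch of the crude guardrail Morais describes: refuse prompts
# that mention prohibited subjects before they reach the model. The topic
# list is paraphrased from his quote; real content moderation is far more
# involved than substring matching.
BLOCKED_TOPICS = {"murder", "drug traffic", "kidnapping",
                  "child porn", "ransomware", "financial crime"}

def allow_prompt(prompt: str) -> bool:
    """Return True if no blocked topic appears in the prompt text."""
    text = prompt.lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)

print(allow_prompt("Help me write a firewall rule"))  # True
print(allow_prompt("Write ransomware in Rust"))       # False
```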

Still, Last has continued to state on HackForums — and more recently on the far more serious cybercrime forum Exploit — that WormGPT will quite happily create malware capable of infecting a computer and going “fully undetectable” (FUD) by virtually all of the major antivirus makers (AVs).

“You can easily buy WormGPT and ask it for a Rust malware script and it will 99% sure be FUD against most AVs,” Last told a forum denizen in late July.

Asked to list some of the legitimate or what he called “white hat” uses for WormGPT, Morais said his service offers reliable code, unlimited characters, and accurate, quick answers.

“We used WormGPT to fix some issues on our website related to possible sql problems and exploits,” he explained. “You can use WormGPT to create firewalls, manage iptables, analyze network, code blockers, math, anything.”

Morais said he wants WormGPT to become a positive influence on the security community, not a destructive one, and that he’s actively trying to steer the project in that direction. The original HackForums thread pimping WormGPT as a malware writer’s best friend has since been deleted, and the service is now advertised as “WormGPT – Best GPT Alternative Without Limits — Privacy Focused.”

“We have a few researchers using our wormgpt for whitehat stuff, that’s our main focus now, turning wormgpt into a good thing to [the] community,” he said.

It’s unclear yet whether Last’s customers share that view. source


The dark side of ChatGPT: Hackers tap WormGPT and FraudGPT for sophisticated attacks

https://youtu.be/rqA_LkM0qIk

A new “blackhat” breed of generative artificial intelligence (AI) tools, such as WormGPT and FraudGPT, is reshaping the security landscape. With their potential for malicious use, these sophisticated models could amplify the scale and effectiveness of cyberattacks.

WormGPT: The blackhat version of ChatGPT

WormGPT, based on the GPT-J language model developed in 2021, essentially functions as a blackhat counterpart to OpenAI’s ChatGPT, but without ethical boundaries or limitations, according to SlashNext’s recent research.

The tool was allegedly trained on a broad range of data sources, with a particular focus on malware-related data, although the precise datasets used in the training process remain confidential. Its developers also boast that it offers features such as character support, chat memory retention, and code formatting capabilities.

SlashNext gained access to WormGPT through a prominent online forum often associated with cybercrime and ran tests focusing on business email compromise (BEC) attacks.

In one instance, WormGPT was instructed to generate an email aimed at coercing an unsuspecting account manager into paying a fraudulent invoice. “The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks,” the team wrote in a blog post.

FraudGPT goes on offense

FraudGPT is an AI tool similar to WormGPT, but it is marketed exclusively for offensive operations such as writing spear-phishing emails, creating cracking tools, and carding (a type of credit card fraud).

The Netenrich threat research team discovered this new AI bot, which is now being sold across various dark web marketplaces and on the Telegram platform. Besides crafting enticing and malicious emails, the team highlighted the tool’s capability to identify the most targeted services or sites, thereby enabling further exploitation of victims.

FraudGPT’s developers claim it has diverse features, including writing malicious code, creating undetectable malware, building phishing pages and hacking tools, finding non-VBV bins, and identifying leaks and vulnerabilities. They also tout more than 3,000 confirmed sales and reviews.

WormGPT and FraudGPT are still in their infancy

While FraudGPT and WormGPT are similar to ChatGPT in terms of capabilities and technology, the key difference is these dark versions have no guardrails or limitations and are trained on stolen data, Forrester Senior Analyst Allie Mellen told SDxCentral.

“The only difference will be the goal of the particular groups using these platforms — some will use it for phishing/financial fraud and others will use it to attempt to gain access to networks via other means,” echoed HYAS CTO David Mitchell.

FraudGPT and WormGPT have been getting attention since July. Mellen pointed out that attacker use of these tools is still at an early stage, and that it is too early to tell whether there is real demand.

“It’s getting some interest from the cybercriminal community, of course, but it’s more experimental than it is mainstream,” she said. “We will see new variations pop up that make use of the data or that are trained on different types of data to make it even more useful for them.”

How and where hackers can harness malicious GPT tools

Mellen noted tools like FraudGPT and WormGPT will serve as a force multiplier and helper for attackers, but only when used correctly.

“Much like any other generative AI use case, it’s important to note that these tools can be a force multiplier for users, but ultimately only if they know how to use them and only if they’re willing to use them,” she said.

Here are some potential use cases Mellen listed:

  • Enhanced phishing campaigns: One of the most basic uses is crafting phishing emails. Tools like WormGPT and FraudGPT offer a new advantage in translating messages effectively into various languages, ensuring an email is not only understandable but also enticing to the target, which raises the chances of a victim clicking malicious links. Moreover, these tools make it easier for attackers to automate phishing campaigns at scale, eliminating manual effort.
  • Accelerated open source intelligence (OSINT) gathering: Typically, attackers invest significant time in OSINT, where they gather information about their targets, such as personal or family details, company information and past history, aiding their social engineering efforts. With the introduction of tools like WormGPT and FraudGPT, this research process is substantially hastened by simply inputting a series of questions and directives into the tool without having to go through the manual work.
  • Automated malware generation: WormGPT and FraudGPT are proving useful in generating code, and this capability can be extended to malware creation. Especially with platforms like FraudGPT, which might have access to previously hacked information on the dark web, attackers can simplify and expedite the malware creation process. Even individuals with limited technical expertise can prompt these AI tools to generate malware, thereby lowering the entry barrier into cybercrime.

“Historically, these attacks could often be detected via security solutions like anti-phishing and protective DNS [domain name system] platforms,” HYAS’ Mitchell said in a statement. “With the evolution happening within these dark GPTs [Generative Pre-trained Transformers], organizations will need to be extra vigilant with their email & SMS messages because they provide an ability for a non-native English speaker to generate well-formed text as a lure.”
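As a loose sketch of the link-screening side of what Mitchell describes (protective DNS itself operates at the resolver level), a filter might extract the link domains from a message and flag anything outside a known-good set; the allowlist below is a hypothetical placeholder, and real platforms use live reputation feeds rather than static lists:

```python
# Minimal sketch of a protective-DNS style check: extract the domains from
# links in a message and flag any outside a known-good set. The allowlist
# is a hypothetical placeholder; real platforms use live reputation feeds.
import re
from urllib.parse import urlparse

KNOWN_GOOD_DOMAINS = {"example.com", "partner.example.org"}  # hypothetical
URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def suspicious_link_domains(message_body: str) -> set[str]:
    """Return link domains in the message that are not on the allowlist."""
    flagged = set()
    for url in URL_PATTERN.findall(message_body):
        host = urlparse(url).hostname or ""
        # Accept exact allowlist matches and their subdomains; flag the rest.
        if not any(host == d or host.endswith("." + d)
                   for d in KNOWN_GOOD_DOMAINS):
            flagged.add(host)
    return flagged

body = "Please review the invoice at https://billing.evil.example/pay now."
print(suspicious_link_domains(body))  # {'billing.evil.example'}
```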

Threat landscape impacts

With all these emerging use cases, there will be more targeted attacks, Mellen noted. Phishing attempts will become more sophisticated and the rapid generation of malicious code by AI will likely result in more duplicate malware.

“I’d expect that we’ll see more consistency with some of the malware that’s in use, which will cause some issues because it has the potential to even more so obfuscate nation-state activity as people copy and use whatever it is that they can find, whatever it is that ChatGPT gets trained on,” she said.

“So there’s a lot of potential that we’ll see an increase in attacker activity, especially on the cybercriminal side, as people who perhaps are not as sophisticated on the technology side, or who previously thought that being cybercriminals was not accessible to them, now have more opportunity there, unfortunately,” she added.

Don’t panic yet, but stay informed on genAI

Despite these daunting impacts, it’s important not to panic, Mellen said, adding that much like any technological advancement, GPT tools can be a double-edged sword.

“It can be used for really positive things. It can also be used for really awful things. So it’s another thing that CISOs need to consider and be concerned about,” she said. “But at the end of the day, it’s just another tool and so don’t go too crazy trying to change everything that you do, just make sure that the tools that you’re using are protecting you as best you can and keep up to date with the current landscape.”

Mellen recommended organizations pay attention to and keep informed of the generative AI developments and what attackers are doing with it. “Understanding as much as we can now and keeping up to date on the actions that they’re taking is pivotal.” source


WormGPT Is a ChatGPT Alternative With ‘No Ethical Boundaries or Limitations’

The developer of WormGPT is selling access to the chatbot, which can help hackers create malware and phishing attacks, according to email security provider SlashNext.

WormGPT logo (Credit: Hacking forum)

A hacker has created his own version of ChatGPT, but with a malicious bent: Meet WormGPT, a chatbot designed to assist cybercriminals.

WormGPT’s developer is selling access to the program in a popular hacking forum, according to email security provider SlashNext, which tried the chatbot. “We see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes,” the company said in a blog post.

WormGPT screenshot (Credit: Hacking forum)

It looks like the hacker first introduced the chatbot in March before launching it last month. In contrast with ChatGPT or Google’s Bard, WormGPT doesn’t have any guardrails to stop it from responding to malicious requests.

“This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future,” the program’s developer wrote. “Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”

WormGPT’s developer has also uploaded screenshots showing that you can ask the bot to produce malware written in the Python coding language and provide tips on crafting malicious attacks. To create the chatbot, the developer says they used an older but open-source large language model from 2021 called GPT-J. The model was then trained on data concerning malware creation, resulting in WormGPT.

WormGPT interface (Credit: Hacking forum)

When SlashNext tried out WormGPT, the company tested whether the bot could write a convincing email for a business email compromise (BEC) scheme—a type of phishing attack.

“The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks,” SlashNext said.

A screenshot showing WormGPT answering. (Credit: SlashNext)

Indeed, the bot crafted a message using professional language that urged the intended victim to wire some money. WormGPT also wrote the email without any spelling or grammar mistakes—red flags that can indicate a phishing email attack.

“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations,” SlashNext said. “This experiment underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals.”

Fortunately, WormGPT isn’t cheap. The developer is selling access to the bot for 60 Euros per month or 550 Euros per year. One buyer has also complained that the program is “not worth any dime,” citing weak performance. Still, WormGPT is an ominous sign about how generative AI programs could fuel cybercrime, especially as the programs mature. source


Introduction
In this article, we will discuss WormGPT and its potential implications for users. It is crucial to note that WormGPT is a malicious chatbot designed to assist cybercriminals in carrying out illegal activities, so it is not recommended to use WormGPT for any purpose. By understanding the risks associated with WormGPT and its potential consequences, we can better appreciate the importance of using technology ethically and responsibly. Let’s explore the features of WormGPT, how it differs from other GPT models, and the risks involved.

WormGPT and its Malicious Nature

WormGPT is specifically designed for malicious activities, such as malware coding and exploits. Unlike ChatGPT or Google’s Bard, WormGPT lacks the ethical boundaries or limitations that prevent those models from responding to malicious requests. It utilizes an AI module based on the open-source GPT-J language model, developed in 2021, and boasts features like quick responses, unlimited characters, a privacy focus, and a choice of AI models.
Risks and Consequences of Using WormGPT
Using WormGPT for malicious purposes entails significant risks and consequences. The potential outcomes include:
  • Violation of Laws: Using WormGPT for cybercriminal activities goes against laws governing hacking, data theft, and other illegal activities.
  • Creation of Malware and Phishing Attacks: WormGPT can be used to develop malware and phishing attacks that harm individuals and organizations.
  • Sophisticated Cyberattacks: WormGPT empowers cybercriminals to launch advanced cyberattacks, causing substantial damage to computer systems and networks.
  • Facilitation of Illegal Activities: WormGPT enables cybercriminals to carry out illegal activities with ease, putting innocent individuals and organizations at risk.
  • Legal Consequences: Individuals engaging in cybercriminal activities using WormGPT may face legal repercussions and potential criminal charges.

How To Use WormGPT

Even after reading the information above, if you still want to access and use WormGPT, you can visit the WormGPT page here at your own risk.
FAQs
Q: Can I use WormGPT for non-malicious purposes?
Yes, WormGPT can technically be used for non-malicious purposes. However, it is crucial to remember that the development and distribution of WormGPT were intended for malicious activities. Using it for any purpose raises ethical concerns and legal risks.
Q: Are there any ethical boundaries in WormGPT?
No, unlike other GPT models such as ChatGPT or Google’s Bard, WormGPT does not possess ethical boundaries or limitations to prevent it from responding to malicious requests.
Q: What are the differences between WormGPT and other GPT models?
WormGPT is specifically designed for malicious activities like malware coding and exploits. It differs from other GPT models in terms of its intended purpose and lack of ethical boundaries.
Q: Can WormGPT be used to launch cyberattacks?
Yes, WormGPT can be utilized to launch sophisticated cyberattacks that can cause significant damage to computer systems and networks.
Q: What are the risks of using WormGPT?
The risks associated with using WormGPT include violating laws related to hacking, data theft, and other illegal activities, creating malware and phishing attacks, empowering cybercriminals, and potentially facing legal consequences and criminal charges.
Q: How should we use technology responsibly?
To use technology responsibly, it is essential to avoid using tools like WormGPT that can cause harm to others. Ethical and responsible use of technology is crucial to maintain a safe and secure online environment.
Conclusion
In conclusion, WormGPT is a malicious chatbot specifically designed for cybercriminal activities. Using WormGPT for any purpose is not recommended, as it violates laws related to hacking, data theft, and other illegal activities. It is crucial to use technology ethically and responsibly, ensuring that our actions do not cause harm to others. By being aware of the risks associated with WormGPT and similar tools, we can contribute to a safer and more secure digital landscape. source
