Key Points:
- Cracked versions of legitimate generative pre-trained transformer (GPT) models began surfacing on dark web forums and popular hacker Telegram channels in mid-2023, giving threat actors a way to use generative artificial intelligence (AI) for nefarious purposes, free of the ethical boundaries and safeguards built into legitimate models.
- So far, the actual performance of these tools has fallen short of their advertised capabilities, often producing outdated, insufficient, or irrelevant outputs.
- Cybercriminals with limited language proficiency will likely be the most avid users of dark AI, as these models allow them to craft business email compromise (BEC) and phishing lures without the telltale grammatical errors that targeted individuals often catch. There will also likely be an influx of novice criminals using these tools to commit small-scale crimes, lowering the barrier to entry into the cybercriminal world.
Summary:
Technological advancements are plentiful in the world of cybersecurity and are often seen as a double-edged sword, a sentiment that holds true for the rise of generative AI. As AI gains traction for its versatility and power, nefarious counterparts are rising in the shadows. The introduction of useful tools like ChatGPT has been accompanied by rogue creations like WormGPT and FraudGPT, platforms that exist in the dark corners of the internet and have the potential to alter the cybersecurity landscape.
The legitimate generative AI models currently available to the public are built with extensive safeguards that prevent users from producing harmful, illegal, dangerous, or unethical content. However, threat actors are constantly pursuing new means of circumventing these protections, and cybercriminals have been observed selling AI-enabled digital tools that remove such barriers entirely, giving buyers the ability to generate whatever malicious content they desire. The proliferation of these unauthorized models, stripped of ethical safeguards, grants threat actors access to sophisticated tools and the ability to execute cybercrimes with unprecedented ease.
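To illustrate the kind of safeguard layer that legitimate services place in front of their models, below is a minimal sketch of a pre-generation safety gate using OpenAI's moderation endpoint. This is an illustrative assumption about how such a gate can be wired, not a description of any vendor's internal pipeline; it assumes the openai Python package (v1.x), an API key in the OPENAI_API_KEY environment variable, and a hypothetical prompt string.

```python
# Minimal sketch of a pre-generation safety gate of the kind legitimate
# AI services place in front of their models. Assumes the openai Python
# package (v1.x) and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def is_prompt_allowed(prompt: str) -> bool:
    """Screen a user prompt with a moderation model before generation."""
    result = client.moderations.create(input=prompt)
    # Flagged prompts are refused rather than passed to the text model.
    return not result.results[0].flagged

# Hypothetical user input; a cracked model would skip this check entirely.
user_prompt = "Summarize best practices for spotting phishing emails."
print("allowed" if is_prompt_allowed(user_prompt) else "refused")
```

Cracked or rogue models remove exactly this layer, which is what makes them attractive to threat actors despite their technical shortcomings.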
The Advent of Cracked GPTs in the Wild
Three significant players have emerged that offer malicious alternatives to GPT models: WormGPT, FraudGPT, and DarkBERT.
WormGPT was launched on July 22, 2023[1], and is based on an open-source large language model (LLM) called GPT-J 6B, developed by EleutherAI[2] in 2021[3]. WormGPT is advertised as offering unlimited character support, chat memory retention, and code formatting capabilities, and was allegedly trained on a broad spectrum of data sources, primarily malware-related data[3].
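For context on how accessible the underlying model is, below is a minimal sketch of loading the openly released GPT-J 6B weights with Hugging Face's transformers library. It shows only the public base model, not WormGPT's alleged malware-focused fine-tuning, and assumes the transformers and torch packages plus roughly 12 GB of memory for half-precision weights.

```python
# Minimal sketch: loading EleutherAI's openly released GPT-J 6B base model.
# Assumes the transformers and torch packages; half-precision weights need
# roughly 12 GB of RAM/VRAM. This is the public base model only, not the
# malware-focused fine-tune WormGPT's seller claims to have built on it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    torch_dtype=torch.float16,  # halve the memory footprint
)

inputs = tokenizer("Generative AI is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The low barrier to obtaining such base weights is part of what makes claims like WormGPT's plausible, even when the finished product underdelivers.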
FraudGPT emerged days later, on July 25, 2023[4], with a slew of other dark versions of legitimate GPT models, including DarkBERT, following in its wake[5]. The exact LLM used to develop FraudGPT is currently unknown, but its author has claimed that it is based on Google’s Bard. The developer of the malicious DarkBERT claims to have had access to the LLM of a legitimate tool also called DarkBERT, and to have tuned the bot into a malicious version for use by nefarious actors[4].
The original DarkBERT was developed by South Korean researchers who trained it on dark web data with the intent of using the tool to fight cybercrime[4]. Its true authors suggest that the seller of the illicit DarkBERT is likely leveraging their concept for promotional purposes, as exploiting their tool in the manner the threat actor alleges would be extremely difficult, if not impossible[4].
Cost and Capabilities
Dark versions of AI that mimic ChatGPT and other generative AI tools can be accessed through underground forums on the dark web or via underground Telegram channels. WormGPT, which was discontinued in August 2023, had a monthly “subscription” starting at $100, an annual usage cost of $500, and a “private setup” service for $5,000. FraudGPT offers a monthly subscription for $90 with yearly access priced at $700, while DarkBERT was spotted offering lifetime privileges for $1,250[5].
These tools are advertised as illicit versions of recognized GPT models, built for fraudsters, hackers, scammers, and spammers. They are specifically designed for malicious activities, touted for launching sophisticated social engineering and BEC attacks[2], writing malicious code, creating undetectable malware, and performing network reconnaissance such as finding leaks and exploitable vulnerabilities within a target’s environment[6].
While their sellers make such promises, closer inspection suggests these tools deliver substantially less than advertised. These malevolent “starter kits for cyber attackers” have generally been found to write only basic code that must be adapted into a functional script. Upon further testing, researchers noted that the generated code often failed to run properly due to formatting issues or incompatibility with the target environment, making the tools less reliable than originally advertised[7]. Additional shortcomings include the likely use of older model architectures, reliance on outdated information, and opaque training data, all of which produce irrelevant, unreliable, and obsolete output in response to prompts[7].
Practical Use Cases
One of the main uses for malicious generative AI tools is in sophisticated phishing and BEC attacks: crafting highly automated fake emails that can be personalized for targeted recipients[2]. The ability to produce an email with exemplary grammar gives attackers who lack fluency in a target language an advantage, rendering malicious emails created with these generative AI tools more difficult to spot[3].
Malicious AI models are best positioned to help unsophisticated cybercriminals conduct targeted cyberattacks at a scale they would not otherwise be capable of[2]. This ultimately grants a wide spectrum of cybercriminals more advanced capabilities, increasing the potential for a greater volume of cyberattacks as malicious AI models continue to evolve[2].
© Copyright 2024. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.
[1] https://cybernews.com/security/chatgpt-badboy-brothers-dark-web/
[2] https://thehackernews.com/2023/07/wormgpt-new-ai-tool-allows.html
[3] https://www.csoonline.com/article/646441/wormgpt-a-generative-ai-tool-to-compromise-business-emails.html
[4] https://www.bleepingcomputer.com/news/security/cybercriminals-train-ai-chatbots-for-phishing-malware-attacks/
[5] https://cybernews.com/security/chatgpt-badboy-brothers-dark-web/
[6] https://thehackernews.com/2023/07/new-ai-tool-fraudgpt-emerges-tailored.html
[7] https://thehackernews.com/2023/10/exploring-realm-of-malicious-generative.html