Generative AI Threats
ChatGPT’s safeguards are designed to prevent misuse, but jailbreak techniques let bad actors bypass them to create more believable phishing emails
Key Points
- Large language model tools like ChatGPT can be used to generate convincing phishing emails at high volume.
- These emails can be harder to spot than traditional phishing attempts, which are often littered with spelling and grammatical errors.
- ChatGPT and similar tools have safeguards in place to stop misuse, but jailbreak prompts can bypass these protections.
Tools based on large language models, such as ChatGPT, can be used to generate convincing, well-worded, grammatically correct phishing emails at high volume. They can also be used to perform research on targets, both people and organizations.
Business email compromise (BEC) attacks often involve a conversation between the attacker and the victim before the attacker achieves their objective, such as receiving a wire transfer or sensitive data. Beyond crafting the initial message, these services can also help attackers create relevant, accurate replies to their victims’ responses.
ChatGPT Safeguards and Jailbreaking
ChatGPT does have safeguards and restrictions in place to prevent misuse and the creation of harmful content. However, various websites and forums share ‘jailbreaks’ for ChatGPT: specific prompts that bypass these protections and make ChatGPT follow almost any command the user enters. The results can include content that reveals sensitive information or is outright malicious, such as malware code and phishing emails.
Four ChatGPT Variants Used by Hackers
Jailbreaking ChatGPT is not necessarily required for cybercriminals to abuse this technology. There are now many variants for sale on the dark web that essentially behave like an unrestricted version of the tool and are supposedly fine-tuned for malicious purposes. They typically cost several hundred dollars per year, with monthly subscriptions also available:
- WormGPT is a ChatGPT-style tool based on the GPT-J language model that helps attackers craft malicious code and phishing emails. It was reportedly trained on vast amounts of data, including malware-related datasets.
- XXXGPT is another variant, sold with an expert support package, that is designed to help attackers develop code for botnets, trojans, keyloggers, infostealers, and other types of malware.
- WolfGPT is a tool built in Python with a focus on confidentiality. Its areas of specialty include creating cryptographic malware, such as crypto miners, and advanced phishing attacks.
- FraudGPT can create bank-themed phishing emails: the user simply inputs the target bank’s name and inserts phishing links into placeholders in the output. Beyond email generation, it can also code login pages designed to steal credentials. Its creators claim it can write malicious code, build undetectable malware, find vulnerabilities, and identify targets. It is sold on the dark web and Telegram, with updates released every few weeks.
The Bottom Line
Organizations need to be aware that tools like ChatGPT will be part of their cybersecurity landscape for the foreseeable future. Employees will be using these tools, but so will attackers. Bad actors will work hard to stay ahead of the safeguards meant to stop them from using large language models to create polished phishing emails and malicious web pages that no longer show the tell-tale signs of spelling errors and poor grammar.