
    How ChatGPT is changing the game for hackers and CISOs

    The hugely popular ChatGPT is already being used for malware and phishing emails. Yet some experts claim it could make the world more cybersecure.

    by Vinh Nguyen

    ChatGPT has had quite the launch. Released in November 2022, it reached a million users in just five days. Adopters have marvelled at the chatbot’s research and writing skills.

    Google “freaked out” over its ability to outmuscle its flagship search engine, professors acknowledged that it would pass Wharton’s prestigious MBA course and fans sent its lyrics to singer Nick Cave.

    Not everyone is impressed (Cave said the lyrics were a “grotesque mockery of what it is to be human”), but ChatGPT is an AI blockbuster with serious implications for cybersecurity. Phishing campaigns and malware could be transformed, with Australian Computer Society Cyber Security Committee chair Louay Ghashash describing it as a potential “nuclear weapon of cyber warfare”. But ChatGPT shouldn’t be judged merely by the threat it poses: the platform can also help cybersecurity professionals improve their knowledge and scale up their operations.

    ChatGPT is a huge leap forward, but it’s far from perfect 

    ChatGPT (the GPT stands for Generative Pre-trained Transformer) is built on OpenAI’s GPT-3 family of language models, fine-tuned with supervised and reinforcement learning. Like OpenAI’s image generator, DALL-E, it responds to a text prompt from a user. The application was released as a free research preview to gather feedback, though its runaway success means access is limited, with new users funnelled through a waiting list.
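
    To make the interaction concrete, here is a minimal sketch of querying a GPT-3-family model through OpenAI’s Python library as it existed in this era; the model name, prompt and parameters below are illustrative assumptions rather than anything reported in this article.

        # Minimal sketch: querying a GPT-3-family model via OpenAI's
        # pre-2023 Python library. Assumes the `openai` package is installed
        # and an API key is set in the OPENAI_API_KEY environment variable.
        import os
        import openai

        openai.api_key = os.environ["OPENAI_API_KEY"]

        response = openai.Completion.create(
            model="text-davinci-003",  # illustrative GPT-3-family model
            prompt="Summarise what phishing is in two sentences.",
            max_tokens=100,            # cap the length of the reply
            temperature=0.7,           # higher values give more varied text
        )

        print(response.choices[0].text.strip())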

    The chatbot is still in its early stages and has some big issues to solve. It’s currently largely English-only, it’s better at short responses than extended answers, it struggles to supply sources for the material it uses, and its accuracy cannot be relied on. But ChatGPT is better than anything we’ve seen before: it can craft coherent answers to complex questions and even suggest workable code in response to programming questions. And hackers are already looking for ways to exploit its abilities.

    ChatGPT has protections – but criminals are already working around them 

    ChatGPT isn’t a free-for-all when it comes to cyber risk. If you ask it to write a phishing email or malware, its default response is to refuse. Access is gated by IP address, payment card and phone number, and the service is currently unavailable in some countries, including Russia and China. However, hackers and journalists have already found ways to bypass its protections, with some boasting of their exploits on dark-web forums. And even within its restrictions, ChatGPT can easily be used, for instance, to craft large numbers of personalised, apparently innocent emails that can carry malware.

    Why ChatGPT could be a phishing game changer 

    Worryingly, the US Center for Security and Emerging Technology notes that AI that fuses private data with convincing writing can “combine the scale of… spray-and-pray phishing campaigns with the effectiveness of spear phishing”.

    By allowing customisation at scale, ChatGPT could be a phishing game changer that helps criminals convince even wary targets to click on dangerous links or give away their credentials on spoofed websites. It’s ideal for hackers whose limited English might otherwise give them away in phishing attacks. Used alongside deep-learning models such as Codex, meanwhile, it could produce even more sophisticated messaging, such as near-human dialogue and speech that makes instant messages or deepfaked videos seem authentic. 

    Criminals can use ChatGPT to produce malware 

    ChatGPT’s ability to provide working code and instructions as part of its response is one of its most impressive features. Cybersecurity experts have already observed criminals showing off infostealers, image-based targeting and encryption tools allegedly built using ChatGPT. The platform’s anti-malware restrictions appear – for the moment at least – to be bypassable by rewriting the prompt and adding additional constraints.

    Meanwhile, security researchers have used the platform to create polymorphic malware that could “easily evade security products and make mitigation cumbersome with very little effort or investment by the adversary”. While ChatGPT’s ability to create sophisticated malware such as ransomware appears limited, it offers a leg-up for less experienced criminals – one user shared Python encryption code produced with the chatbot that they claimed was the first script they’d ever developed.

    More advanced groups are sure to find uses for the new technology, especially if they develop their own AI models. OpenAI’s tool could be used to build fake websites and bots, or to make dynamic changes to code, thereby evading antivirus checks. Similar AI tools could also be used to scan for vulnerabilities. 

    But ChatGPT also brings good news for CISOs 

    At this point, it might sound like ChatGPT is a cybersecurity nightmare. But many experts see light in the darkness. ChatGPT can be a major learning tool for security professionals, demystifying terminology for new staff or less technically minded colleagues. It can offer solutions and explanations for pen testers, blue teams and developers. It can help make code more legible, and help teams investigate and reverse engineer malware or find potential exploits. Analysts can use it to check their own findings, scale their efforts and generate reports that are easier to understand. Indeed, US security expert Kyle Hanslovan believes that, overall, ChatGPT gives defenders “a little bit better of an upper hand” than attackers in the battle for cybersecurity.
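
    As a hedged illustration of that defensive use, the sketch below asks the model to explain a suspicious script in plain English, the kind of triage aid an analyst might build. The helper function, prompt wording and parameters are hypothetical, using the same library conventions as the earlier example.

        # Illustrative sketch: a triage helper that asks a GPT-3-family model
        # to explain what a suspicious script does. The function name, prompt
        # and parameters are hypothetical examples, not a documented workflow.
        import os
        import openai

        openai.api_key = os.environ["OPENAI_API_KEY"]

        def explain_snippet(snippet: str) -> str:
            """Return a plain-English summary of what a code snippet does."""
            prompt = (
                "You are assisting a security analyst. Explain step by step "
                "what the following code does and flag anything suspicious:\n\n"
                + snippet
            )
            response = openai.Completion.create(
                model="text-davinci-003",
                prompt=prompt,
                max_tokens=300,
                temperature=0.2,  # low temperature keeps the summary focused
            )
            return response.choices[0].text.strip()

        # Example: summarise a harmless one-liner pulled from an investigation.
        print(explain_snippet("import base64; print(base64.b64decode('aGVsbG8='))"))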

    What’s certain, though, is that better controls are needed. Criminals have already worked their way around ChatGPT’s restrictions, and new controls from OpenAI and new legislation may be required if this emerging landscape is to be effectively regulated, particularly as the tool and its rivals grow more advanced.

    ChatGPT is a threat and an opportunity for CISOs 

    Criminals are always looking for new ways to attack. ChatGPT looks likely to allow cyberattackers to mount campaigns that are larger, more persuasive and harder to identify. Used right, it can help security professionals too, by facilitating learning and code analysis. These are early days, but ChatGPT looks set to be both an opportunity and a challenge for CISOs. Whatever you do, don’t ignore it. The robots aren’t on their way: they’re already here. 
