Generative AI Opens New Front in Cyber Arms Race
By accelerating and improving the development of content and software code, generative AI can be used for both good and ill in the field of cybersecurity.
- Generative AI recently exploded on the scene as a widely accessible tool that can accelerate and improve the development of content and code.
- Cyberattackers are using it too, to amp up their phishing campaigns and their ability to exploit gaps in companies’ security defenses.
- This sudden leap forward in machine learning will severely test cybersecurity teams.
The ChatGPT headlines have come fast and furious, exploring how this popular generative artificial intelligence (AI) chatbot, capable of producing realistic content on demand, will impact our lives. For cybersecurity professionals, the risks and opportunities are particularly profound. The suddenly surging use of generative AI opens a new front in the long-running cybersecurity arms race between attackers and defenders.
Generative AI models are being adopted on both sides of the cyber battlefield. The central question is which side will gain first-mover advantage, with the odds favoring the attackers.
How Generative AI Develops Content and Coding
Generative AI can dramatically scale and accelerate the work of content developers and coders — for good and for bad. I say this from experience, since I became a beta user last year of the Copilot generative AI coding assistant (now generally available). My productivity as a machine learning engineer and Principal Data Scientist at Mimecast is so much higher: I can code in one day what used to take me a week. Forthcoming iterations of these coding tools will be even more powerful.
Until now, much of AI and its subset, machine learning, has involved predictive analytics: applying techniques such as pattern recognition to collections of text, code, or images to suggest what might come next. Generative AI, a technique that has been emerging for a few years, goes a step further: It can mine the same data stores to automatically generate the text, code, or images that should come next. And it can be invoked on demand, as a conversational chatbot that answers users’ requests (for example, “write an essay on war that sounds like Hemingway”) with full-fledged answers.
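To make the predict-then-generate distinction concrete, here is a deliberately tiny Python sketch: a bigram model that “predicts” the next word from counts of what followed each word in training text, then generates a passage by applying that prediction repeatedly. This is a toy illustration of the idea only; real generative models use neural networks trained on vastly larger corpora.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record, for each word, every word that follows it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=8, seed=0):
    """Generation as iterated prediction: repeatedly sample a plausible next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # no known continuation; stop generating
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the model predicts the next word and the next word follows the pattern"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Every word the sketch emits was seen in training; the “creativity” comes only from recombining observed continuations, which is why larger and more varied training data yields richer output.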
ChatGPT Puts Generative AI in the Spotlight
This next-level AI had its breakout moment in November with the free public release and viral uptake of ChatGPT, a natural language processing chatbot that generates content. In the subsequent months, everything has changed. Generative AI, which had previously been developing under the radar, was suddenly widely accessible and gaining momentum.
Remember, though, that these are still early days. ChatGPT works on only a portion of Internet content, dating up to 2021. However, technologists continue to work on compressing Internet content so that ever more of it can be processed. Ultimately, developers will be able to train their AI models on the entire Internet.
So far, no direct connections have been made between the technology and any individual cyberattack, according to a new report from a Finnish government agency. “It is difficult to find evidence for such attacks — investigators rarely gain access to attackers’ backend systems where their AI-based logic is likely deployed,” writes Finland’s Transport and Communications Agency (Traficom). However, both attackers and defenders are working with these next-level AI tools, with reports of malicious development efforts on the dark web. University researchers have also uncovered how coding assistants such as Copilot could be “poisoned” into suggesting malicious code.
As generative AI rapidly evolves, two critical aspects of cybersecurity — email security and vulnerability management — provide prime examples of its powers in the hands of attackers and defenders.
Generative AI Can Scale Up Phishing
Phishing has been an intractable problem for years. Email is cyberattackers’ preferred starting point for most exploits, whether the goal is credential theft, financial fraud, malware drops, or ransomware.
Generative AI can level up the quality and scale of phishing, enabling attackers to socially engineer ever more convincing emails in real time. Asking an AI chatbot to generate an email in the voice of a particular CEO, for example, could deliver a more realistic request for the withdrawal of funds from an account, drawing on intelligence gathered from across the Internet. If the email recipient responds, the chatbot can automatically handle follow-up questions as well — not with a template, but by generating an in-character response.
The speed, quality, and range of the technology could let cybercriminals increase the number of CEO scams or other business email compromises (BEC) they run simultaneously to unprecedented levels. Because these generative AI tools are easy to use, the skills required would be minimal.
Generative AI Can Help Breach Software Vulnerabilities
Another cybersecurity issue that perennially exposes businesses is internal: that is, a lax approach to updating software and patching its vulnerabilities. Today, it takes significant time and resources for cybercriminals to understand where these gaps may be in a company’s network in order to exploit them. But generative AI can write code based on English language instructions. Using it, attackers can easily reverse-engineer companies’ systems, simply asking the AI model to explain a system’s coding and where it is most vulnerable.
There’s more. Generative AI can help rogue programs learn from and mimic the behavior of systems they’ve compromised, making them more difficult to detect. If a program loiters, in the form of an advanced persistent threat (APT), its new-found autonomy can further increase its stealth. That’s because there would be less communication between the malware and the cybercriminal’s command-and-control center.
Using Generative AI to Defend Against Attacks
The good news is that generative AI can accelerate most any coding task for the good guys, too. I use Copilot every day, with the productivity increases I describe above speeding my development of Mimecast security solutions. How well does generative AI work? I’ve found that it delivers the right answer about 50% of the time. Yet even when it’s wrong, it’s “almost right.” Because of my programming background, I can identify which part of the code is wrong and quickly tweak it. And while I know that the tool is currently only trained on a subset of all known code, I see its range and accuracy increasing from month to month.
Still, this AI vs. AI arms race could be an uphill battle for cyber defenders. “AI will enable completely new attack techniques which will be more challenging to cope with, and which will require the creation of new security solutions,” according to the Traficom report. One of these solutions could be an increase in the use of autonomous decision-making mechanisms, to speed detection that today can take days.
Here at Mimecast, we’re working on detection techniques to differentiate between human- and machine-generated phishing emails. As Traficom says, though, “There is no effective solution to counter them yet.”
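The details of production detectors are proprietary, but the general shape of the problem can be illustrated with a toy stylometric heuristic. The sketch below is purely hypothetical — it is not Mimecast’s actual technique, and the `looks_machine_generated` name and the 2.0 threshold are invented for illustration. It flags email text whose sentence lengths are suspiciously uniform, one weak signal among the many a real classifier would combine:

```python
import re
import statistics

def sentence_length_variance(text):
    """Variance of sentence lengths, measured in words. Machine-generated text
    often reads unusually even; human writing tends to vary more."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pvariance(lengths)

def looks_machine_generated(text, threshold=2.0):
    """Flag text whose sentence rhythm is suspiciously uniform.
    (Toy heuristic only; the threshold is arbitrary.)"""
    return sentence_length_variance(text) < threshold

human_draft = ("Hey! Can you wire the funds today? Sorry for the rush, "
               "the auditors surprised us this morning and I am stuck in "
               "back-to-back meetings until five.")
machine_draft = ("Please review the attached invoice. Kindly process the "
                 "payment promptly. Contact me with any further questions.")

print(looks_machine_generated(human_draft))    # varied sentence rhythm
print(looks_machine_generated(machine_draft))  # uniform sentence rhythm
```

A single stylometric feature like this is trivially evaded; that is precisely why, as Traficom notes, no effective counter exists yet.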
Businesses May Lag Behind Cyber Gangs
Unfortunately, cybercriminals are in a position to move faster than vendors and other businesses in leveraging generative AI. The reason is mainly cultural.
Cybercriminals are not as risk averse as businesses. Seeing huge profit-making potential in these new tools, they are more likely to pivot to reap the rewards. Legitimate businesses, on the other hand, have a less agile process for allocating time, money, and people to functions like cybersecurity that are viewed as cost centers. As often happens in this field, companies may not rise to the occasion unless and until the risks become more tangible.
Looking Beyond Current Limitations
Meanwhile, billions of dollars are being invested in the many companies working to evolve generative AI beyond the current state of the art, and the use of open source techniques will accelerate this. “We know that many limitations remain,” says OpenAI, the creator of ChatGPT, “and we plan to make regular model updates to improve in such areas.”
The shortcomings of generative AI include:
- Limited Training Data: Though current models have access to billions of data points, they nevertheless train on a limited set of information, not currently linked to the “live” Internet.
- Mistakes: OpenAI acknowledges that ChatGPT may write “plausible sounding but incorrect or nonsensical answers”.
- Intellectual Property (IP) Issues: Lawsuits that could slow progress have already been filed in cases alleging that generative AI has violated IP rights.
- Cost: Developing defenses against this new level of assault requires expensive equipment (graphics processing units, or GPUs) and IT skills that are currently scarce.
The Bottom Line
Generative AI has recently become widely accessible, escalating its use by both cyberattackers and cyber defenders to accelerate and improve their development of content and code. The full implications for cybersecurity teams, expected to be significant, will only come to light in the coming months. Read how Mimecast is already using different types of artificial intelligence in its cybersecurity solutions.
 “GitHub Copilot Is Generally Available to All Developers,” GitHub
 “ChatGPT Made Me Question What It Means to Be a Creative Human,” Vanity Fair
 “The Security Threat of AI-enabled Cyberattacks,” Traficom
 “Trojan Puzzle Attack Trains AI Assistants into Suggesting Malicious Code,” Bleeping Computer
 “ChatGPT: Optimizing Language Models for Dialogue,” OpenAI
 “Security Risks of ChatGPT and Other AI Text Generators,” CyberRisk Alliance
 “A Tracker of Generative AI-related Lawsuits,” Emerging Tech Brew