Rise of the ROSE Bots
Cybercriminal gangs are supercharging their social engineering attacks with AI, then superspreading their innovations across the underworld by selling AI-as-a-service.
- Artificial intelligence and AI-as-a-service (AIaaS) are giving cybercriminals new capabilities to generate personalized spear phishing emails at massive scale, sometimes more effectively than human authors.
- This is leading to an “AI arms race” between attackers and defenders.
- It is critical to make employees aware of these threats and train them in the measures that keep them from becoming victims.
The transition to working from home brought a massive increase in the number of remote online social engineering (ROSE) attacks by cybercriminals. Innovations such as artificial intelligence (AI) and AI-as-a-service are expected to continue driving this growth.
AI and machine learning technologies enable improvements in automation and targeting capabilities, reducing the time and effort cybercriminals need to devote to each attack while increasing their success rate. Once perfected, cyber gangs tend to monetize their latest exploits by converting them into “as a service” business models, as they’ve done with “ransomware-as-a-service”. These software packages are then sold on to many more, unskilled cybercriminals. The combination could lead to more effective, personalized ROSE attacks at massive scale.
“Offensive AI methods will likely be taken up as tools of the trade for powering and scaling cyberattacks,” Microsoft Chief Scientific Officer Eric Horvitz told the Senate Armed Services Subcommittee on Cybersecurity in May. “We expect that uses of AI in cyberattacks will start with sophisticated actors but will rapidly expand to the broader ecosystem via increasing levels of cooperation and commercialization of their tools,” he said in testimony at a hearing devoted to AI and cybersecurity.
In parallel, security vendors such as Mimecast are also using AI to improve cyber defenses.
Social Engineering: A Cycle of Personal Attack
Sophisticated social engineering attacks typically follow a similar cycle of first gathering information, then establishing a connection and finally exploiting that connection and executing the attack. In the information gathering phase, attackers need to identify a target who will allow them to accomplish their goal (transferring account funds, accessing a system, providing desired information), select an attack vector (email, phone, text) and plan their pretext (impersonating a bank, tech support, colleague or manager). This information will then be used to establish a connection in the next phase.
Establishing the connection is often the most challenging phase of a ROSE attack, particularly with email-based attacks. In a phishing attack, the social engineer needs to have identified a message that will resonate with the target, while appearing to be legitimate. For example, an email claiming to be from a bank should appear to come from a bank that the target does business with, and it should look similar to other emails coming from that bank. If it doesn’t, the connection won’t be established.
Even if the message is relevant and appears legitimate, that does not guarantee success. The attacker still needs to convince the target to engage with the email quickly, without critically evaluating the attempt. If the target stops to think, the odds of success fall rapidly.
This need for speed is the reason many phishing emails warn of an impending catastrophe such as an account lockout or shutdown. Urgency strengthens the message’s “hook”, or call to action, exploiting the target’s fear response and compelling them to act. In the case of phishing emails, this hook needs to resonate with the recipient in milliseconds, meaning that it is critical for the attacker to have made a correct assessment while gathering information and establishing a connection.
Supercharging Social Engineering
Accomplishing each of the phases described above takes time and effort, one individual attack at a time, which has limited the number of attacks that could be generated…until now. Recent advances in language processing and generation are rapidly enhancing attackers’ capabilities. The same AI technologies that allow legitimate organizations to employ chatbots to efficiently answer customer questions may also be used by malicious actors to generate spear phishing attacks against thousands of employees using just a few lines of code.
Creating effective ROSE attacks can be a complicated process but automated tools, some using AI, are reinforcing each phase of the attack cycle. Take the information gathering phase, during which the attacker must identify potential messaging that will resonate with the target. One example of an effective approach to this task, developed by a researcher, is the Social Engineering eXposure Index (SEXI). The SEXI uses a combination of demographics, working environment, experiences, and other open source personal information to generate an index of a target’s vulnerabilities. The SEXI represents a precursor to the type of automated toolkits we may observe in the near future.
While this approach offers some ability to assess an individual’s potential vulnerability, it does not necessarily provide the information needed to create a message that will resonate with the target. To address this second aspect of information gathering, a different group of researchers took another approach, creating a software bot that followed a target’s Twitter posts and fed them into a machine learning model to generate a spear phishing attack based on the target’s topics of interest.
A more advanced example leveraged the GPT-3 (Generative Pre-trained Transformer-3rd generation) “davinci-instruct” machine learning model to generate spear phishing emails that, in some instances, were more effective in terms of click rates than messages created by human authors. Beyond the obvious implications of this use case, the researchers suggested that AIaaS may disrupt the current state of security by greatly reducing the barriers to entry for threat actors to utilize such an approach to launch attacks.
In fact, this model was showcased at Black Hat USA as a prototype end-to-end service. The researchers used a semi-automated version of Humantic AI (a tool used by HR departments, e.g., for assessing job candidates) to build a profile of the target from open source information, then combined that profile with the GPT-3 model to develop personalized spear phishing emails.
AI Comes to the Defense
Because attackers are beginning to use AI technologies to improve their tactics, organizations’ security teams should likewise be looking for ways to incorporate AI tools to keep users safe. For example, security vendors such as Mimecast increasingly use AI and machine learning to detect anomalous behaviors that could indicate malicious emails and to alert users accordingly. As attacks are confirmed, that information also feeds back into machine learning models that analyze and respond to crowdsourced threat intelligence.
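The intuition behind this kind of anomaly detection can be shown with a toy sketch. The example below is purely illustrative (it is not Mimecast’s method, and every name, message, and number in it is an assumption): it builds a token-frequency baseline from a handful of known-legitimate emails, then scores a new message by how unlikely its wording is under that baseline, so an out-of-character, urgency-laden message scores higher than routine mail.

```python
from collections import Counter
import math
import re

def tokenize(text):
    # Lowercase word tokens; a real system would use far richer features
    return re.findall(r"[a-z']+", text.lower())

def build_baseline(emails):
    # Token frequencies across a user's known-legitimate mail
    counts = Counter()
    for email in emails:
        counts.update(tokenize(email))
    return counts, sum(counts.values())

def anomaly_score(email, counts, total, vocab_size=10_000):
    # Average negative log-probability per token (Laplace-smoothed):
    # higher means the message reads less like the baseline corpus
    tokens = tokenize(email)
    if not tokens:
        return 0.0
    nll = 0.0
    for t in tokens:
        p = (counts[t] + 1) / (total + vocab_size)
        nll -= math.log(p)
    return nll / len(tokens)

# Hypothetical baseline of routine, legitimate messages
baseline = [
    "Hi team, attached is the quarterly report for review.",
    "Reminder: project sync moved to 3pm tomorrow.",
    "Thanks for the feedback on the draft, I'll revise it today.",
]
counts, total = build_baseline(baseline)

normal = "Can we move the project sync to tomorrow?"
phish = "URGENT: your account will be locked, verify your password now!"
print(anomaly_score(normal, counts, total) < anomaly_score(phish, counts, total))
```

A production system would combine many stronger signals (sender reputation, header and URL analysis, learned models trained on confirmed attacks) rather than a hand-rolled word-frequency score, but the principle is the same: flag messages that deviate from what a user normally receives.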
In addition to new AI innovations and more conventional security solutions such as email scanning, it will be increasingly important for organizations to ensure that their employees are aware of the latest threats with up-to-date training. Users need to understand that these emerging technologies can leverage openly available information on social media to generate highly sophisticated and personalized attacks.
The Bottom Line
“We must prepare ourselves for adversaries who will exploit AI methods to increase the coverage of attacks, the speed of attacks, and the likelihood of successful outcomes,” the Senate Cybersecurity Subcommittee was told in May. As cybercriminals supercharge social engineering with AI and then superspread the capabilities by selling AI-as-a-service on the Dark Web, organizations’ security teams are finding themselves in an AI arms race. Find out how Mimecast deploys AI for cyber defense.
- “Artificial Intelligence and Cybersecurity: Rising Challenges and Promising Directions,” testimony before the U.S. Senate Armed Services Subcommittee on Cybersecurity
- Kennesaw State University
- “Generative models for spear phishing posts on social media,” Cornell University