How Attackers Use Machine Learning to Create Sophisticated Attacks

    ML models are at the forefront of both cybersecurity and sophisticated attack creation – sometimes it’s ML model vs. ML model.

    by Jose Lopez

    Key Points

    • Cybersecurity vendors have been relying on machine learning for some time now, and their models have become relatively sophisticated at stopping threats.
    • Cyberattackers have also turned to machine learning, creating their own ML models that assist in generating phishing emails and exploiting cybersecurity weaknesses.
    • With machine learning being used by bad actors, organizations need to ensure their own ML models are up for the fight.

    Machine Learning and Security

    Machine learning (ML) can be used to find and exploit flaws in any kind of detection system based on rules, signatures, pattern matching, or any other fixed, traceable protocol. ML models can be trained to conduct these searches from different angles and at scales from tiny to huge, but always in a systematic, tireless way.
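
    To make this concrete, here is a toy sketch in Python of what such a systematic probe might look like against a simple signature-based detector. Everything in it is hypothetical – the signatures, the mutations, and the payload are illustrative stand-ins, not real attack code.

    ```python
    import re

    # A stand-in "signature" detector: flags anything matching these patterns.
    SIGNATURES = [re.compile(p, re.IGNORECASE) for p in [r"free money", r"click here now"]]

    def detected(text: str) -> bool:
        return any(sig.search(text) for sig in SIGNATURES)

    # Simple, systematic mutations an automated search might try, one by one.
    MUTATIONS = [
        lambda s: s,                         # the original payload
        lambda s: s.replace("e", "3"),       # character substitution
        lambda s: " ".join(s),               # letter spacing
        lambda s: s.replace(" ", "\u00a0"),  # non-breaking spaces
    ]

    payload = "free money: click here now"
    for mutate in MUTATIONS:
        candidate = mutate(payload)
        if not detected(candidate):
            print("Variant that evades the signatures:", repr(candidate))
            break
    ```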

    Combining Two Strategies

    One of the most powerful aspects of ML is that models can be taught using creative techniques, and can even learn by practicing them. Techniques like these forge a complex solution out of tiny actions (or smaller parts of possible solutions) and learn throughout the process. For example, an autonomous car that moves its own steering wheel and performs other driving actions can learn to drive in a simulator. This technique is called the Reinforcement Learning strategy.
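
    As a rough illustration of the Reinforcement Learning idea, the minimal Python sketch below uses tabular Q-learning to learn which tiny actions (steps left or right along a number line) compose a path to a goal. The environment is a toy stand-in, not a driving simulator or an attack scenario.

    ```python
    import random

    N_STATES = 5                 # states 0..4; state 4 is the goal
    ACTIONS = [0, 1]             # 0 = step left, 1 = step right
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    alpha, gamma, epsilon = 0.5, 0.9, 0.2

    def pick_action(state):
        if random.random() < epsilon:   # explore occasionally
            return random.choice(ACTIONS)
        best = max(Q[state])            # otherwise exploit, breaking ties randomly
        return random.choice([a for a in ACTIONS if Q[state][a] == best])

    for _ in range(300):                # episodes of practice
        state = 0
        while state != N_STATES - 1:
            action = pick_action(state)
            next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
            reward = 1.0 if next_state == N_STATES - 1 else 0.0
            # Learn the value of each tiny action from trial and error.
            Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
            state = next_state

    print(["right" if Q[s][1] >= Q[s][0] else "left" for s in range(N_STATES - 1)])
    ```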

    Another popular strategy in ML is called Adversarial Machine Learning. In this strategy, a system called a “generator” attempts to achieve a goal while another system called a “discriminator” attempts to obstruct it, usually by detecting the generator’s actions.
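
    The generator-versus-discriminator dynamic can be sketched in a few lines of Python. This toy version is purely conceptual: the “generator” is a single number trying to mimic a target distribution and the “discriminator” is a moving threshold, whereas real systems use neural networks on both sides.

    ```python
    import random

    real = lambda: random.gauss(10.0, 1.0)   # "real" data cluster around 10
    offset = 0.0                             # the generator's only parameter
    threshold = 5.0                          # the discriminator's only parameter

    for step in range(400):
        fake = random.gauss(offset, 1.0)     # the generator produces a sample
        # Discriminator: move the threshold to sit between real and fake samples.
        threshold += 0.05 * ((real() + fake) / 2.0 - threshold)
        # Generator: whenever it gets caught, shift its output toward "real".
        if fake < threshold:
            offset += 0.1

    print(f"generator mean ~ {offset:.1f}, discriminator threshold ~ {threshold:.1f}")
    ```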

    These two strategies alone give bad actors two powerful tools for developing attacks. Combined, however, they are even more powerful. In this hybrid strategy, the Reinforcement Learning part of the system assembles a complex attack from smaller, simpler attacks, while the Adversarial Machine Learning part uses feedback from the victim’s defenses to get steadily better at finding and exploiting security vulnerabilities.
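
    Here is a conceptual sketch of that hybrid in Python: an agent scores small attack steps by the reward it receives from a stand-in detector, bandit-style. The step names and the detector’s rule are invented purely for illustration.

    ```python
    import random

    STEPS = ["recon", "spoof_sender", "obfuscate_url", "delay_send"]

    def detector_flags(plan):               # stand-in for the victim's defenses
        return "obfuscate_url" not in plan  # toy rule: undisguised URLs get caught

    scores = {s: 0.0 for s in STEPS}
    for episode in range(300):
        # Sample a small plan uniformly; a real agent would favor high-value steps.
        plan = random.sample(STEPS, k=2)
        reward = 0.0 if detector_flags(plan) else 1.0
        for s in plan:
            scores[s] += 0.1 * (reward - scores[s])  # bandit-style value update

    print(max(scores, key=scores.get))      # typically prints "obfuscate_url"
    ```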

    Seeing the Victim as a Black Box

    For many organizations, a good defense against these strategies is built using a multi-layered approach. The idea behind this approach is aptly called “the Swiss cheese model”: while every slice of Swiss cheese has holes, stacking multiple slices can ensure there is never one continuous pathway through the whole block.

    Security professionals deploy multiple layers of security tools, accepting that while some attacks will slip through some of the holes, the combined layers should ensure that no attack ever makes it through the entire block of security tools.

    The problem with this approach, however, is that today’s sophisticated cybercriminals design attacks that don’t even attempt to go through the layers of Swiss cheese but instead avoid the block of cheese altogether. These attack tools are built not to attack a specific security layer, but to treat the victim like a black box – an unknown set of detection tools that they need to bypass.

    For example, an amateur attacker will generate thousands of phishing emails using different “From” addresses, subject lines, and email content, then send them to recipients throughout the victim’s organization all at once. These brute-force attacks are usually easily detected and shut down by a layered security approach.

    A sophisticated attacker, however, may generate many phishing emails but then select only the ones with the very best chance of succeeding. They will carefully time the sending of these emails and target only the people within the organization they believe offer the best success rate. This much subtler attack method can circumvent the layered security approach.
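
    A hedged sketch of this “generate many, send few” approach: produce candidate lures, score each with a success-prediction model, and queue only the top few. The subjects, senders, recipients, and the scoring stub below are all placeholders.

    ```python
    import random

    SUBJECTS = ["Invoice overdue", "Password expiring", "Delivery update"]
    SENDERS = ["billing@example.com", "it-support@example.com"]
    RECIPIENTS = ["alice", "bob", "carol"]

    def predicted_success(subject, sender, recipient):
        # Stand-in for a trained model that estimates click likelihood.
        return random.random()

    candidates = [(s, f, r) for s in SUBJECTS for f in SENDERS for r in RECIPIENTS]
    ranked = sorted(candidates, key=lambda c: predicted_success(*c), reverse=True)
    for subject, sender, recipient in ranked[:3]:  # send only the most promising few
        print(f"queue: {recipient} <- {sender}: {subject}")
    ```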

    Feedback (or a Lack of Feedback)

    In addition, with just a small number of phishing emails, the attacker can look at what works and what doesn’t, then use that feedback to craft even better malicious emails. This process can be repeated until the attacker is caught, allowing them to continually improve the phishing emails they send to a particular victim. In this process, feedback is of paramount importance.

    Attackers treat feedback as an active piece of information, but they also treat a lack of feedback as feedback in itself.

    A lack of feedback is important to bad actors and can sometimes be the most valuable piece of information of all. When they get no response, they know an attack was stopped and can adjust their tactics accordingly. With patience, it is only a matter of time before they find tactics that work.

    Indirect Feedback

    In addition to feedback and a lack of feedback, there is also “indirect feedback” – something more benign, like a recipient mousing over a link in the email. The recipient may have been suspicious, hovered over the link to see where it would take them, and then decided it was malicious and did not click it after all. This is valuable indirect feedback: it tells the attacker that the victim’s team members are on the lookout for malicious URLs. The attacker can then generate a new batch of emails that takes this new knowledge into account, deploying URLs disguised to look more like legitimate ones.
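
    One way to picture how all three signals can feed a model: map each outcome – a click, silence, or a hover without a click – to a numeric training signal. The outcome labels and weights below are illustrative assumptions, not observed values.

    ```python
    # Mapping outcomes (including silence) to a training signal.
    FEEDBACK_WEIGHT = {
        "clicked":      +1.0,  # direct positive feedback
        "bounced":      -0.5,  # direct negative feedback
        "no_response":  -0.2,  # a lack of feedback: likely filtered, still informative
        "hovered_only": -0.1,  # indirect feedback: inspected but distrusted the URL
    }

    def update_score(score, outcome, lr=0.3):
        """Nudge a template's estimated value toward the observed signal."""
        return score + lr * (FEEDBACK_WEIGHT[outcome] - score)

    score = 0.0
    for outcome in ["no_response", "no_response", "hovered_only", "clicked"]:
        score = update_score(score, outcome)
    print(f"template value after feedback: {score:.2f}")
    ```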

    Automated Systems

    Now, imagine how quickly the generation of these malicious emails can be sped up if the attacker uses an automated system to analyze all three types of feedback and generate new and better malicious emails. An adversarial ML system like this can be built on a generative adversarial network and can be a very effective tool for attackers.
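
    A conceptual end-to-end loop might look like the sketch below: generate candidate lures, observe the feedback, and fold what worked into the next round. Every function here is a stub – there is no real sending, model, or data.

    ```python
    import random

    def generate(lessons):
        # Stub generator; a real system would condition on everything learned so far.
        topics = ["invoice", "password reset", "delivery notice"]
        return [f"{topic} (style v{len(lessons)})" for topic in topics]

    def send_and_observe(batch):
        # Stand-in for real-world outcomes: clicked / hovered_only / no_response.
        return {msg: random.choice(["clicked", "hovered_only", "no_response"]) for msg in batch}

    lessons = []
    for round_number in range(3):
        outcomes = send_and_observe(generate(lessons))
        # Keep what the feedback says worked and fold it into the next round.
        lessons.extend(msg for msg, result in outcomes.items() if result == "clicked")
        print(f"round {round_number}: {len(lessons)} successful lures learned so far")
    ```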

    What Can Organizations Do?

    Organizations looking to defend against these types of sophisticated attacks can best do so by deploying ML systems of their own. While it may take time to fine-tune a defense, a scenario where an organization’s discriminator system beats the bad guys’ generator system every time is possible. To reach that point, the discriminator system needs to be very well trained.

    Organizations can (and should) train their ML models not only on known previous attacks (emails labeled as dangerous), but should also build their own adversarial systems to keep training and improving those models.
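
    As a sketch of what that training might look like, the example below fits a simple scikit-learn text classifier on labeled mail and augments the dangerous examples with adversarially mutated copies so the model also catches evasive variants. The emails and the mutation are toy stand-ins for a real corpus and a real adversarial generator.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    dangerous = ["free money click here now", "your password is expiring act now"]
    benign = ["meeting notes attached", "quarterly report draft for review"]

    # Adversarial-style augmentation: a simple evasion attackers actually use.
    mutate = lambda s: s.replace("e", "3")
    dangerous_aug = dangerous + [mutate(s) for s in dangerous]

    X = dangerous_aug + benign
    y = [1] * len(dangerous_aug) + [0] * len(benign)

    # Character n-grams help the model generalize across small obfuscations.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
        LogisticRegression(),
    )
    model.fit(X, y)
    print(model.predict(["fr33 mon3y click h3r3"]))  # should flag the variant: [1]
    ```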

    The Bottom Line

    ML has been an important part of cybersecurity for some time now and will continue to be just as imperative moving forward. There are many challenges to consider when developing an ML model, but the payoff is definitely worth the time and effort needed to overcome those challenges.

    ML is a big part of how Mimecast keeps its customers safe from email-borne attacks. Learn more or start a free trial at Mimecast.com.
