
    Bad Guys with Good Algorithms: 5 Ways Cybercriminals Can Exploit AI

    The bad news is cybercriminals are leveraging AI to supercharge their attacks. Learn how they do it so you can build better cybersecurity defenses.

    by Stephanie Overby

    Key Points

    • AI is emerging as a driver of new, more advanced cybersecurity threats.
    • Bad actors can harness AI to deliver more intelligent, adaptable malware, clone data for smarter business email compromise or deep fake their way past biometric controls.
    • Understanding how cyber enemies might use AI to supercharge their efforts will enable cybersecurity organizations to build a case for their own AI-enabled cyber defenses.

    Even as cybersecurity professionals increasingly deploy artificial intelligence (AI) to enhance their cyber resilience, cybercriminals appear to be way ahead of them — using AI as a “force multiplier” to accelerate and supercharge their attacks. As Forrester Research put it: “This new era of offensive AI leverages various forms of machine learning to supercharge cyberattacks, resulting in unpredictable, contextualized, speedier and stealthier assaults that can cripple unprotected organizations.”[1]

    Machine learning can make malware more intelligent, more adaptable and harder to detect. Attackers can leverage AI to take business email compromise (BEC) to a whole new level by cloning audio, video or even word choices to fortify their efforts to manipulate human targets. Even biometric solutions — considered among “the most powerful weapons for fighting fraudsters in payment cards and the e-commerce industry” — are vulnerable to malicious AI; cyber thieves could exploit neural networks to simulate bio authentication data.[2]

    “It’s hard to know exactly who’s doing what because cyber criminals are not particularly public about their methods,” says Dr. Herbert Roitblat, Principal Data Scientist for Mimecast and a recognized AI expert. “But we know they are using a lot of machine learning and other AI.”

    While cybercriminals may not advertise their AI expertise, there is evidence of it. Following are five of the most likely, effective and dangerous ways hackers can weaponize AI to supercharge BEC, malware, phishing and more.

    Creating Deep Fake Data

    “Security systems are based on data,” explains Mike Elgan in Security Intelligence. “Passwords are data. Biometrics are data. Photos and videos are data — and new AI is coming online that can generate fake data that passes as the real thing.”[3]

    The advanced algorithms behind deep fake videos that misrepresent politicians — generative adversarial networks (GANs) — can be used to generate a phone call or video from a company’s CEO or CSO, or to create fingerprints or facial images capable of fooling biometric systems. A GAN works by pitting two neural networks against each other: one network (the generator) simulates data to fool the other (the discriminator), and both get better over time. But the real winner may be “the large amount of fake data produced by the generator that can pass as the real thing,” Elgan explains.
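    The adversarial loop described above can be sketched in a few lines. The toy below is purely illustrative (it has nothing to do with any real exploit): a one-parameter "generator" learns to produce numbers that a logistic "discriminator" can no longer distinguish from "real" data clustered around 4.0. All names and values are made up for the sketch.

```python
import numpy as np

# Toy 1-D GAN: the generator's only parameter is the mean of its output.
# Real data ~ N(4, 1); the generator starts far away at 0.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

real_mu, real_sigma = 4.0, 1.0   # "real" data distribution
g_mu = 0.0                        # generator parameter
w, b = 0.0, 0.0                   # discriminator: D(x) = sigmoid(w*x + b)
lr, batch = 0.05, 64

for step in range(3000):
    real = rng.normal(real_mu, real_sigma, batch)
    fake = g_mu + rng.normal(0.0, 1.0, batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: shift g_mu so the discriminator scores fakes as real
    d_fake = sigmoid(w * fake + b)
    g_mu += lr * np.mean((1.0 - d_fake) * w)

print(f"generator mean after training: {g_mu:.2f} (real mean: {real_mu})")
```

    After training, the generator's output distribution sits close to the real one — the same dynamic that, at vastly larger scale, lets GANs produce faces, voices and fingerprints that pass as genuine.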

    Last year saw the first verified use of a deep fake exploit for AI-enabled voice fraud. Criminals used commercial AI software to impersonate the voice of an energy company CEO, demanding that a manager transfer $243,000.

    These efforts work on words as well. AI can “replicate a user’s writing style, crafting messages that appear highly credible,” according to an article by William Dixon of the World Economic Forum and Nicole Eagan of Darktrace. “Messages written by AI malware will therefore be almost impossible to distinguish from genuine communications.”[4]

    Building Better Malware

    AI-powered malicious apps can increase the speed, adaptability, agility, coordination and even sophistication of attacks on networks and devices. Whereas sophisticated attacks used to require targeted research performed by humans, “in tomorrow’s world, an offensive AI will be able to achieve the same level of sophistication in a fraction of the time, and at many times the scale,” the Dixon-Eagan article explains.

    Malware developers could utilize AI to:[5]

    • generate hard-to-detect malware variants
    • monitor the behavior of nodes or endpoints to create patterns resembling legitimate network traffic
    • combine various attack techniques to find the most effective options
    • automatically adjust malware features or focus based on the environment
    • alter malware behavior if it encounters a virtual machine or sandbox
    • learn and share information via multiple nodes
    • increase the speed of their attacks

    2020 may be the year we see the first malware using AI models to evade sandboxes, DarkReading noted recently. “Instead of using rules to determine whether the ‘features’ and ‘processes’ indicate the sample is in a sandbox, malware authors will instead use AI, effectively creating malware that can more accurately analyze its environment to determine if it is running in a sandbox, making it more effective at evasion,” according to the DarkReading article.[6]

    Stealth Attacks

    AI can power more effective stealth attacks, as well, enabling malware to easily blend into the background of an organization’s security environment. Using supervised and unsupervised learning, malicious programs can hide within a system, learning how and when to attack or evade defensive measures.

    As explained in CISO Mag: “[AI-enabled malware] automatically learns the computation environment of the organization, patch update lifecycle and preferred communication protocols. The malicious app remains silent for years without detection as hackers wait to strike when the systems are most vulnerable. Hackers then execute the undetectable attacks when no one expects. Hackers can also predefine an application feature as an AI trigger for executing attacks at a specific time.”[7]

    Cracking CAPTCHA Keys — and More

    CAPTCHA is widely used by websites and networks to weed out bots and other automated programs seeking unauthorized access. However, computer vision and deep learning can enable hackers to bypass this common backstop.

    In 2014, Google’s machine learning algorithms were able to solve the most distorted text CAPTCHAs 99.8% of the time.[8] In 2017, researchers used machine learning to successfully get past Google’s reCAPTCHA protections with 98% accuracy.

    While the CAPTCHA-defeating capabilities of AI are now well known, machine learning can also be used to perform other repetitive tasks such as password-guessing, brute-forcing and stealing, according to Erik Zouave, an analyst with the Swedish Defense Research Agency FOI, writing in DarkReading. He said some password brute-forcing and password-stealing experiments have had success rates of more than 50% and 90%, respectively.[9]

    AI-Powered Personalization for Email Phishing

    The Dixon-Eagan article anticipates that AI capabilities will be added to the increasingly dangerous Emotet trojan, use of which soared this year in a multitude of ransomware attacks, including attacks on U.S. healthcare systems, according to the FBI.

    In 2019, Emotet authors added a module that steals email information from victims and later uses that data to contextualize phishing emails at scale. “This means it can automatically insert itself into pre-existing email threads… [giving] the phishing email more context, thereby making it appear more legitimate.” If the cybercriminals were to further leverage natural language processing to learn and replicate the language of the existing emails, they could supercharge the phishing attacks, the article explains. “As the AI arms race continues, we can only expect this circle of innovation to escalate.”

    This is particularly worrisome given the availability of Emotet-as-a-service.
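    To make the style-replication idea concrete, here is a deliberately crude sketch — a word-level Markov chain, not Emotet's actual code and not any real attack tool. Trained on a handful of invented sentences standing in for a victim's stolen outbox, it strings together phrases that echo the victim's habitual wording; real attackers would use far more capable language models.

```python
import random
from collections import defaultdict

# Invented stand-in for text harvested from a victim's sent mail
corpus = (
    "please find the attached invoice for review. "
    "please review the attached report and confirm receipt. "
    "kindly confirm receipt of the attached invoice."
)

# Build a word-level Markov chain: each word maps to the words
# that followed it in the training text
words = corpus.split()
model = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    model[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit text in the 'victim's' style by walking the chain."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("please", 8))
```

    Every word the sketch emits was lifted from the training text, which is exactly why such output blends into an existing thread: it is built from the victim's own phrasing.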

    The Bottom Line

    As AI capabilities continue to advance and become more widely available, their use by cyberattackers is certain to grow in the coming years. “The velocity of new security problems is growing in ways that are hard to deal with,” says Mimecast’s Roitblat. Unless cybersecurity organizations understand the weaponization of AI, they will be unable to counter it with their own AI cyber defenses.


    [1] “The Emergence Of Offensive AI: How Companies Are Protecting Themselves Against Malicious Applications Of AI,” Forrester Consulting for Darktrace

    [2] “How AI Is Capable of Defeating Biometric Authentication Systems,” DZone

    [3] “AI May Soon Defeat Biometric Security, Even Facial Recognition Software,” SecurityIntelligence

    [4] “3 ways AI will change the nature of cyber attacks,” World Economic Forum

    [5] “Can Artificial Intelligence Power Future Malware?” ESET

    [6] “How AI and Cybersecurity Will Intersect in 2020,” DarkReading

    [7] “Artificial Intelligence as Security Solution and Weaponization by Hackers,” CISO Mag

    [8] “Why CAPTCHAs have gotten so difficult,” The Verge

    [9] “Malicious Use of AI Poses a Real Cybersecurity Threat,” DarkReading
