
    AI vs. AI: Now, AI Is Required for Your Business’ Cyber Resilience
     

    Cybercriminals are using AI to boost ransomware, email phishing scams and other attacks. Cybersecurity leaders must also deploy AI to enhance their cyber resilience.
     

    by Stephanie Overby

    Key Points

    • A range of bad actors are incorporating AI capabilities into their development of technology for attacks in cyberspace.
    • Traditional approaches to cyber defense are inadequate to fend off the variety, volume, and velocity of threats enabled by AI.
    • Security organizations must incorporate AI-enabled capabilities into their cyber strategies to keep pace with their similarly armed adversaries.

     

    Artificial intelligence (AI) and machine learning are at the top of the agenda in many boardrooms and executive suites around the world — whether we’re talking about executives of legitimate businesses or global criminal organizations. Malicious cyber attackers are harnessing advanced intelligent capabilities to supercharge their ever more effective ransomware, business email compromise, brand exploitation and other nefarious efforts.

    “It’s hard to say specifically who exactly is doing what because bad actors are not particularly public about their strategies, but it’s apparent they’re using a lot of machine learning and other AI,” says Dr. Herbert Roitblat, Principal Data Scientist for Mimecast. “Everyone is these days.”

    AI in Criminals’ Hands Threatens Organizations’ Cyber Resilience

    Nefarious applications of AI are fueling increasing velocity, volume and variety of attacks on modern businesses, rendering many traditional methods of mitigating cybersecurity threats inadequate. Deepfake scams, for example — which Forrester predicts will cost organizations in excess of $250 million in 2020 — employ AI to create convincing audio and video to fool or coerce users.[1] Deepfake technology is already being used in business email compromise attacks, which the FBI has said caused more financial losses than any other type of cyberattack in 2019.

    Roitblat says organizations cannot hire enough analysts — and even if they could, those analysts couldn’t work fast enough — to provide effective cybersecurity protection without the intelligent automation of AI, particularly machine learning. “Organizations can no longer afford to bring knives to gunfights,” Roitblat says. “They just can’t keep up.”

    A 2020 report from the UK’s Royal United Services Institute for Defence and Security Studies highlighted the need to incorporate AI into cyber resilience strategies to “proactively detect and mitigate threats” that “require a speed of response far greater than human decision-making allows.”[2]

    The market for AI cybersecurity technologies is expected to grow at a compound annual growth rate of 23.6% through 2027, when it will reach $46.3 billion, according to Meticulous Research.[3] The increasing frequency and complexity of cyber threats, growing demand for advanced cybersecurity solutions and the emergence of disruptive digital technologies across industries are driving AI adoption in cybersecurity, the research firm reports. It also notes that the need to keep remote access systems resilient and secure during COVID-19 lockdowns is only compounding the demand for AI-enabled automation.

    What AI in Cybersecurity Is – And Isn’t

    Gartner called out AI in cybersecurity as one of its top nine security and risk trends for 2020, noting that security organizations will need to address three key challenges: leveraging AI with packaged security products to enhance cybersecurity defense, anticipating the nefarious use of AI by attackers, and protecting AI-powered digital business systems.[4]

    It’s abundantly clear that small and midsize organizations can no longer provide adequate cybersecurity on their own. Indeed, few Fortune 500 enterprises can. They must adopt more automated and intelligent tools provided by strategic partners with AI expertise. Many security professionals believe that incorporating AI-enabled tools and approaches into the cybersecurity arsenal is far more expensive or difficult than conventional approaches, but that is not necessarily the case. For example, AI training data may be costly to acquire, but in many cases organizations already have the data required to feed into appropriate AI models.

    It’s also important to understand that AI is neither a black box nor a magic bullet. “People have this romantic view of AI,” says Roitblat, a longtime AI expert whose latest book, Algorithms Are Not Enough: Creating General Artificial Intelligence, became available on Oct. 13. Stripped of that fantasy, AI is really just good, advanced engineering. “The reason it works is because some human has figured out how to map a problem to some numbers,” says Roitblat. “It’s not perfect. It can make mistakes. But that’s true of every cybersecurity solution. Locks don’t prevent break-ins, but they do make them more difficult.”

    AI Applications in Cyber Protection

    There are seemingly endless use cases for AI in cybersecurity, but the most common applications involve tasks like recognizing faces, understanding speech, identifying spam or phishing messages, and detecting malware. There are more than 200 different supervised machine learning methods that can be applied to cybersecurity problems. Deep learning (a kind of machine learning modeled after the human brain and capable of uncovering complex patterns in data) is increasingly being deployed for cyber defense as well.
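
    To make the supervised learning case concrete, here is a minimal sketch of a text classifier that maps email text to numbers (TF-IDF features) and fits a simple probabilistic model to separate suspicious messages from legitimate ones. The tiny training set, feature choice and Naive Bayes model are illustrative assumptions, not a description of any vendor’s production pipeline.

        # Minimal sketch of supervised spam/phishing classification.
        # The tiny training set and Naive Bayes model are illustrative only.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Labeled examples: 1 = spam/phishing, 0 = legitimate.
        messages = [
            "Your account is locked, verify your password here immediately",
            "Wire the payment today, the CEO needs it before noon",
            "Meeting notes attached from yesterday's project review",
            "Lunch on Thursday? The usual place works for me",
        ]
        labels = [1, 1, 0, 0]

        # Map text to numbers (TF-IDF features), then fit a simple
        # probabilistic classifier on those numbers.
        model = make_pipeline(TfidfVectorizer(), MultinomialNB())
        model.fit(messages, labels)

        # Score a new message; a real deployment would train on millions of
        # labeled emails and combine the score with many other signals.
        suspect = ["Urgent: confirm your password to avoid account suspension"]
        print(model.predict(suspect))        # e.g. [1] -> flagged as suspicious
        print(model.predict_proba(suspect))  # class probabilities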

    A few examples of how these technologies are being used include:

    • Deep learning to identify not-safe-for-work and other images.
    • Machine learning models to detect anomalous patterns in email.
    • Finite automata — a very simple pattern-recognition technology — to identify personal information that should be protected under GDPR and other regulations (a simplified sketch appears after this list).
    • Supervised machine learning to categorize websites and identify high-risk sites.
    • Unsupervised machine learning to identify near-duplicates when analyzing newly submitted phishing and spam emails.
    • Neural network models to identify spam and malware.
    • Machine learning models to understand network usage patterns.
    • Machine learning to identify phishing emails, impersonation attempts and other “human layer” attacks.
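
    As one concrete illustration of the finite automata item above, the sketch below uses regular expressions (which compile down to finite automata) to flag a few common categories of personal data in text. The categories and patterns are simplified assumptions for illustration, not a complete or compliant GDPR detection rule set.

        # Pattern-based detection of personal data: regular expressions
        # compile down to finite automata. Simplified illustration only,
        # not a complete or compliant GDPR rule set.
        import re

        PATTERNS = {
            "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
            "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
            "payment_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
        }

        def find_personal_data(text):
            """Return the matches for each pattern that appears in the text."""
            return {
                name: pattern.findall(text)
                for name, pattern in PATTERNS.items()
                if pattern.search(text)
            }

        sample = ("Contact jane.doe@example.com about invoice 44. "
                  "SSN 123-45-6789, card 4111 1111 1111 1111.")
        print(find_personal_data(sample))
        # {'email_address': ['jane.doe@example.com'],
        #  'us_ssn': ['123-45-6789'],
        #  'payment_card': ['4111 1111 1111 1111']}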

    The Bottom Line

    Incorporating machine learning and other AI capabilities into cybersecurity defenses is now an imperative for organizations of all sizes. AI is neither a panacea nor sufficient on its own to safeguard the organization; the strongest security posture incorporates a variety of analytical tools and approaches. But AI has become a necessary tool in the cybersecurity toolbox if organizations expect to match the capabilities of a wide range of AI-savvy cyber adversaries.

    [1] “Predictions 2020: This Time, Cyberattacks Get Personal,” Forrester

    [2] “Artificial Intelligence and UK National Security: Policy Considerations,” Royal United Services Institute for Defence and Security Studies

    [3] “Artificial Intelligence (AI) in Cybersecurity Market Worth $46.3 Billion by 2027 - Exclusive Report Covering Pre and Post COVID-19 Market Estimates by Meticulous Research,” GlobeNewswire

    [4] “Gartner Top 9 Security and Risk Trends for 2020,” Gartner

     

