
    AI and Cybersecurity: The Need for a New Mindset
     

    As the use of advanced AI in cyberattacks increases, cybersecurity organizations must get more creative to outsmart the bad guys.
     

    by Stephanie Overby

    Key Points

    • Cyber adversaries are increasingly adept and creative in the application of AI to fuel their attacks.
    • Incorporating new AI capabilities into cybersecurity operations and threat intelligence is necessary but not sufficient.
    • Cybersecurity organizations must think outside the box: fully embrace intelligent capabilities, acquire new skills and roles, and deploy machine learning models creatively to fortify their defenses for the AI age.

     

    "Nobody ever got fired for buying IBM,” the old IT saw goes. Buying and implementing what is comfortable and seemingly reliable often feels the safest bet. Indeed, many cybersecurity professionals today tend to revert to tried-and-true tools and approaches.

    Yet the cyberthreat environment is advancing so rapidly that what worked to protect an organization’s digital assets yesterday may no longer be effective today. Implementing known cybersecurity approaches and responding to alerts is “neither thoughtful nor creative,” says Dr. Herbert Roitblat, Principal Data Scientist for Mimecast and a recognized AI expert.

    What’s worse: It’s predictable. At a time when cyberattackers are beginning to take advantage of AI and machine learning to supercharge their efforts, repeatability is the enemy. If your defense is too predictable, the bad guys with good AI will keep winning.

    And guess what? A cybersecurity leader could get fired for overseeing a failed response to a major cybersecurity incident. What’s needed is a new cybersecurity mindset for the AI age.

    A Mandate to Innovate with Cybersecurity AI

    As we mentioned in an earlier post, cybercriminals are using AI to boost ransomware, email phishing scams and other attacks. Cybercrime damages are predicted to reach $6 trillion annually by 2021.[1] Organizations cannot properly protect themselves without the aid of intelligent automation. “While many cyberattackers are throwing the same old stuff out there to see where they can find a hole, others are becoming incredibly creative in getting around cyber defenses,” says Roitblat. “And you must have automation in place to keep up with them.”

    “In a rapidly transforming threat landscape, cyber defense solutions must be both innovative and flexible to harden organizational security against ever-evolving adversarial attacks,” Security magazine wrote earlier this year. “While current signature detection techniques effectively combat known attack structures, they are inherently reactive and require significant time to respond to sophisticated attacks.”[2]

    The old stand-by approaches can’t match the speed and intelligence of an AI-fueled cyber assault. “Traditional defenses that rely on prior assumptions will be outmatched against supercharged AI attacks. Organizations are aware of the need for speed-to-response; however, we found that they are slow to respond when they’re triaging an incident,” according to a February 2020 report from Forrester.[3]

    That need for speed is one reason why the AI-focused cybersecurity market is estimated at $9 billion today and projected to grow to $38 billion over the next six years.[4] As new threats emerge every day, cybersecurity leaders must embrace new solutions.

    However, there’s a difference between deploying AI in cybersecurity and deploying it effectively. After all, AI itself is vulnerable to adversarial attacks. And, as the Security article points out, the “wide demand for AI exceeds most organizations’ ability to develop and operationalize AI-based solutions.”

    To use AI effectively, cybersecurity leaders must quickly work through an organization’s fear and doubt surrounding AI capabilities, get serious about acquiring more analyst and data science talent and, most importantly, get more innovative in their application of AI in cybersecurity.

    Getting Past Fear and Doubt

    Fears and doubts about the deployment of AI capabilities are persistent. At one end of the spectrum, there are Terminator-style worries about unruly robots rising up. At the other, there are concerns about the reliability of AI. Neither anxiety is particularly well-founded.

    Rumors of the arrival of our AI overlords have been greatly exaggerated. “A lot of the fear comes from misunderstanding what the technology is and what it can do,” cybersecurity expert Mikko Hypponen wrote in VentureBeat this year. “For example, we’re decades away from seeing anything like artificial general intelligence — a machine or system that can learn to do any task a human can — let alone a sentient AI.”[5] Instead, machine learning in cybersecurity is a complement to human talent, taking on tasks that humans can’t do alone.

    When it comes to efficacy, a number of AI-enabled cybersecurity tools are already approaching 100% effectiveness, says Mimecast’s Roitblat. Gmail, for example, blocks more than 99% of spam emails from reaching users’ inboxes with the help of machine learning.[6] “There’s still a lot of distrust [about AI],” Roitblat says. “But what people may not realize is that the alternative approaches are even less accurate.”
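
    To make that concrete, the short sketch below shows the kind of text-classification pipeline that ML-based spam filters are built on. It is a minimal, hypothetical example using scikit-learn on a toy in-line dataset; the model choice (TF-IDF features plus Naive Bayes) is an assumption for illustration, not a description of Gmail’s or Mimecast’s actual systems.

    ```python
    # Minimal sketch of ML-based spam filtering (illustrative only).
    # The tiny dataset and the TF-IDF + Naive Bayes model are assumptions,
    # not any vendor's production pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    emails = [
        "Verify your account now to avoid suspension",
        "You have won a free prize, click here to claim",
        "Quarterly budget review meeting moved to Thursday",
        "Please find the attached invoice for last month",
    ]
    labels = [1, 1, 0, 0]  # 1 = spam/phishing, 0 = legitimate

    # Convert email text to TF-IDF features, then fit a simple classifier.
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(emails, labels)

    # Score a new message; the spam probability can be thresholded to block or allow.
    print(model.predict_proba(["Click here to claim your free prize account"]))
    ```

    In production, the same basic idea is applied to vastly larger labeled corpora and far richer features (headers, URLs, sender reputation and more), which is what pushes block rates so high.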

    In narrow applications (spam filters, antivirus tools, intrusion detection solutions) “computers are already a million times better than humans,” Hypponen said. “And while people versus machine comparisons carry a certain amount of drama, interactions between the two are actually business as usual in many domains, including cybersecurity… And these AI-based defenses win more fights than they lose.” Encouraging human cybersecurity professionals to embrace machine intelligence as an augmentation of their own skills is step one in applying AI to cyber defense more effectively.

    Getting the Right Talent

    The shortage of cybersecurity professionals is well documented. Two-thirds of security professionals say the cybersecurity skills gap has led to an increased workload for existing staff, according to a survey conducted by IT research and advisory firm ESG.[7] Defensive AI can help to close that gap with the application of a variety of analytics techniques.

    But in order to integrate AI-enhanced tools and processes into security operations and threat intelligence, cybersecurity organizations must invest in AI-savvy professionals. “There is a lack of analysts,” Roitblat says.

    The situation may be poised to get worse, as dependencies among the people, hardware, and software in AI cybersecurity defenses grow more complex. Cybersecurity leaders increasingly will need to hire or retrain for new skillsets and welcome new members to the cybersecurity team, such as AI engineers who can harness intelligent agents for cyber defense and threat intelligence applications, and machine learning experts to oversee supervised and unsupervised learning, hands-on modeling, and more.

    Getting Creative

    As Security magazine rightly points out: “Commoditized defenses will only stop commoditized attackers, not persistent attacks commonly seen with nation-states or other sophisticated adversaries.”[8]

    When it comes to AI, the path of least resistance may be using the most popular AI models, but those are also the most easily subverted. “There is a lack of imagination on the part of a good number of cybersecurity professionals: pick up a package, deploy it, and you’re done. It’s not very thoughtful and it’s very predictable and that’s exactly what the problem is,” says Roitblat. “They think if they use this well-known model and it works — well nobody’s going to get fired for that.”

    The problem with widely used models is that adversaries also know them well. “But you don’t have to use those canned models; there are lots more available,” says Roitblat. “If you just go back a generation or two, there are lots of machine learning models that are just as good but less exploited. It may not be as ‘cool’ as the hot, new model. But it doesn’t do you any good to have the most fashionable model in the world if everyone is using it. It’s like having a lock with a widely available key. You want something that requires a different key.”
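
    As a concrete illustration of that “different key” idea, the hedged sketch below compares two interchangeable scikit-learn model families on synthetic data. The specific models and data are assumptions chosen for illustration, not Mimecast’s approach; the point is that trading the most popular model for a less commonly deployed one is often a small code change with comparable accuracy.

    ```python
    # Illustrative only: swapping the default, widely used classifier for a
    # different model family is often a one-line change. The models and data
    # here are assumptions, not a recommendation of any specific algorithm.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for real email or telemetry features.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    candidates = {
        "widely used default": LogisticRegression(max_iter=1000),
        "less commonly deployed alternative": GradientBoostingClassifier(),
    }

    # Comparable accuracy is the point: the less fashionable model gives up
    # little performance, but it is not the model every attacker has already probed.
    for name, clf in candidates.items():
        score = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: mean CV accuracy {score:.3f}")
    ```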

    The Bottom Line

    As the cyberthreat environment continues to advance rapidly, fueled by the growing availability of AI capabilities, commoditized security approaches are bound to fail. Today’s cybersecurity organizations must think and work in new ways to creatively apply their own AI models and stay ahead of adversaries.

     

    [1] “Global Cybercrime Damages Predicted To Reach $6 Trillion Annually By 2021,” Cybersecurity Ventures

    [2] “The 5-Step Guide to Making AI Work within your Cybersecurity Strategy,” Security magazine

    [3] “The Emergence Of Offensive AI: How Companies Are Protecting Themselves Against Malicious Applications Of AI,” Forrester

    [4] “Artificial Intelligence in Cybersecurity Market by Offering (Hardware, Software, and Service), Deployment Type, Security Type, Technology (ML, NLP, and Context-Aware), Application (IAM, DLP, and UTM), End User, and Geography - Global Forecast to 2026,” MarketsandMarkets

    [5] “AI can be an ally in cybersecurity,” VentureBeat

    [6] “Did AI kill off spam and we just didn’t notice?,” Engadget

    [7] “Is the Cybersecurity Skills Shortage Getting Worse?,” ESG

    [8] “The 5-Step Guide to Making AI Work within your Cybersecurity Strategy,” Security magazine

     
