
    Artificial Intelligence and Cybersecurity: Separating Fact from Fiction
     

    With AI-driven threats on the rise, companies need to shore up their security strategies to match. Learn how to use AI to your advantage and avoid AI’s pitfalls. 
     

    by Daniel Argintaru

    Key Points

    • AI-based cyberattacks have put businesses on edge, with security teams understaffed and in need of more powerful defenses. 
    • AI delivers its greatest value against these threats when deployed as part of a complete and integrated security infrastructure.
    • AI also must be combined with the judgment of experienced SOC analysts for the most effective defense.

     

    Artificial intelligence (AI) represents a double-edged sword for cybersecurity professionals. On one hand, it empowers hackers to roll out complex attacks at scale that can overwhelm traditional cyber defenses. On the other, it provides businesses with a formidable defense against widespread and growing cyberthreats. 

    A mid-June Congressional hearing on the state of AI and cybersecurity in the United States highlighted the sense of urgency felt by businesses and governments alike.[1] Attendees from Google, Microsoft, and Georgetown University all came to a telling conclusion: Security leaders are aware that AI-based attacks pose a mounting threat, but they need more practical knowledge and workable solutions to fend them off. 

    In addition, they need to cut through the noise and understand what AI can — and cannot — do for their cyber defenses. Mimecast’s new paper, AI and Cybersecurity: the Promise and the Truth of AI Security, separates fact from fiction and breaks down the promise of AI in cybersecurity. This article discusses the main themes from the paper, which combines industrywide research with expertise from Mimecast’s own AI experts. 

    AI and Cybersecurity: The Benefits and Pitfalls 

    With security teams short-staffed and facing more complex attacks from all angles, it’s easy to see why many businesses view AI as a lifeline that will level the playing field for their security operations center (SOC) analysts. But despite what many security providers claim, AI is not a panacea for modern cybersecurity challenges. Yes, AI is essential to a modern cyber defense strategy, but it is a tool like any other, with distinct benefits and limitations.

    Let’s start with the benefits: 

    • AI processes enormous volumes of data. And it does so at a scale that would be unimaginable for a human mind.
    • AI algorithms work quickly. Not only can they process huge volumes of data, they can do so in near real time, helping security teams stay on top of large and evolving threats.
    • AI gets “smarter” over time. AI is designed to get better at spotting and managing cyberthreats as it collects more data, which means your business is better protected each day. 

    In short, AI can help companies manage large, complex threats at scale and make their cyber defenses increasingly effective. It’s worth noting that most AI applications still fall into the category of machine learning (ML), a subset of AI in which algorithms learn from past data and improve automatically, without explicit programming. However, true AI — whereby software can learn and address complex problems as nimbly as the human mind — is still years away.
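
    To make the “learn from past data” idea concrete, here is a minimal sketch of a toy email classifier, assuming scikit-learn is available. The messages, labels, and pipeline are purely illustrative assumptions and are not a description of Mimecast’s (or any vendor’s) production models.

```python
# Minimal sketch of how an ML-based filter "learns from past data."
# The toy dataset, features, and pipeline are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Historical, labeled messages: 1 = phishing, 0 = legitimate.
emails = [
    "Verify your account now or it will be suspended",
    "Quarterly report attached for your review",
    "Click here to claim your prize before midnight",
    "Lunch meeting moved to 1pm tomorrow",
]
labels = [1, 0, 1, 0]

# The model picks up statistical patterns from past data rather than
# following explicitly programmed rules.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Retraining on a growing, labeled history is what makes such a
# defense "smarter" over time.
print(model.predict(["Urgent: confirm your password to avoid suspension"]))
```

    In practice, the same idea runs over billions of signals rather than four sample messages, which is why data volume and quality matter so much.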

    This brings us to the limitations of AI for cybersecurity: 

    • AI algorithms are only as good as the data that feeds them. They require huge volumes of high-quality information to work well — more than most companies collect on their own and higher quality than most companies can muster. 
    • AI algorithms can generate many false positives. This means they flag emails or websites as dangerous even when they don’t pose a genuine threat. Not only does this waste security teams’ time, it means important messages may never reach their destinations (the sketch after this list illustrates the tradeoff).
    • AI lacks transparency. AI security vendors rarely reveal how their technologies work. They expect businesses to trust that their algorithms are robust and tailored to their specific needs.
    • AI can be “poisoned.” Hackers have begun to compromise the data companies use to train their AI algorithms, rendering them ineffective.[2]
    • AI models can be reverse-engineered. Motivated hackers can mimic a company’s AI algorithms to uncover a way through them.
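
    To see the false-positive tradeoff from the list above in miniature, the sketch below filters the same hypothetical risk scores at two different thresholds. Every subject line and number is invented for illustration.

```python
# Illustration of the false-positive tradeoff: the same model scores,
# filtered at two different thresholds. All values are made up.
from typing import List, Tuple

def flag_messages(scored: List[Tuple[str, float]], threshold: float) -> List[str]:
    """Return subjects whose risk score meets or exceeds the threshold."""
    return [subject for subject, score in scored if score >= threshold]

scored_mail = [
    ("Invoice #8841 overdue - remit payment", 0.93),  # genuine phish
    ("Reset your VPN password", 0.58),                # legitimate IT notice
    ("Team offsite agenda", 0.12),                    # legitimate
]

# A low threshold quarantines the legitimate IT notice (a false positive);
# a high threshold would let a borderline phish slip through instead.
print("Aggressive:", flag_messages(scored_mail, 0.5))
print("Conservative:", flag_messages(scored_mail, 0.9))
```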

    AI Cybersecurity Best Practices

    Security teams today must protect an immense attack surface, especially with employees sending more email than ever and collaborating remotely through cloud-based tools like Microsoft Teams and Slack. AI-based defenses are instrumental in protecting all of this data. But given their limitations, it’s helpful to keep the following best practices in mind: 

    1. Implement AI as part of a multilayered defense strategy. That means deploying AI in security tools that play to its strengths in tackling general threats at speed and scale, and backstopping those tools with more specialized defenses against targeted attacks. 
    2. Don’t use AI blindly. Rather, use it when it can deliver a measurable advantage over a simpler solution. For instance, AI is excellent at identifying and responding to common threats at scale. 
    3. Fuel your AI algorithms with sufficient, high-quality data. It takes billions of data points to properly train an AI algorithm, and the learning never ends.  
    4. Choose AI when speed is of the essence. For example, AI can help you to detect intrusions in your network quickly, before a bad actor has a chance to exfiltrate your company’s data. 
    5. Combine AI with real human brainpower. Advanced algorithms can take pressure off your security team and make them more productive, but nothing can replace the judgment of an experienced SOC analyst.

    This final point is crucial because it’s precisely in the gray area between obvious threats and the unknown that attacks are caught or missed, and it takes more than AI to make that distinction. It takes proven security technologies backed by years of human expertise.
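
    One common way to put that division of labor into practice is a triage step that lets the model handle clear-cut cases automatically and routes the gray area to an analyst queue. The sketch below assumes hypothetical thresholds and names; it is not a description of any specific product’s workflow.

```python
# Hypothetical human-in-the-loop triage: auto-handle confident verdicts,
# escalate the ambiguous middle to a SOC analyst. Thresholds are assumptions.
from enum import Enum

class Verdict(Enum):
    BLOCK = "block"          # confident threat: remediate automatically
    ALLOW = "allow"          # confident benign: deliver normally
    ESCALATE = "escalate"    # gray area: queue for analyst review

def triage(risk_score: float, block_at: float = 0.9, allow_below: float = 0.2) -> Verdict:
    """Map an AI risk score to an action, reserving ambiguous cases for humans."""
    if risk_score >= block_at:
        return Verdict.BLOCK
    if risk_score < allow_below:
        return Verdict.ALLOW
    return Verdict.ESCALATE

for score in (0.97, 0.55, 0.05):
    print(score, "->", triage(score).value)
```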

    The Bottom Line

    While we’re nowhere near the point of fully automated cyber battles between AI hackers and AI security solutions, AI-based attacks do pose a significant and immediate threat to businesses everywhere. With the average cost of a data breach hitting $4.24 million in 2021, the stakes are too high to let widespread threats go unchecked.[3]

    To tackle this challenge, companies first need to gain a realistic picture of where they stand. Read Mimecast’s paper, AI and Cybersecurity: the Promise and the Truth of AI Security, to learn more about the current threat landscape and how Mimecast’s AI-based defenses, as part of a comprehensive security strategy, can help protect your employees and your data.


     

    [1] “Congressional hearings focus on AI, machine learning challenges in cybersecurity,” CSO Online

    [2] “‘Deep learning is a completely terrible idea for security,’ says cybersecurity expert,” Fortune

    [3] Cost of a Data Breach 2021, IBM

     
