
    Understanding and Mitigating the Risk of AI Bias in Cybersecurity

    As more cybersecurity teams rely on AI to help mitigate cyber risk, AI bias is a growing concern. But it can be managed.

    by Stephanie Overby

    Key Points

    • Bias is a concern with the creation and use of any artificial intelligence (AI) application.
    • As cybersecurity organizations and vendors incorporate more AI into their defenses, they must be vigilant about limiting AI bias.
    • Humans can introduce bias into AI models in a number of ways, but there are steps organizations can take to mitigate that.

    AI, in a way, is inherently biased. As a veteran AI researcher recently explained: “AI refers to the process by which machines learn how to do certain things, driven by data. Whenever you do that, you have a particular dataset. And any dataset, by definition, is biased, because there is no such thing as a complete dataset.”[1]

    The fault, dear reader, is not in our tools but in ourselves.

    Human Bias Naturally Leads to AI Bias

    Students of brain science know bias is one of the shortcuts the human brain developed over the course of millions of years of evolution. In the wild, so to speak, bias gives us the ability to react swiftly without conscious thought, probably averting countless tragedies. But in the modern world, biases we aren’t even aware of can lead to discrimination, favoritism and other social ills.

    Now, IT teams, including in cybersecurity, are ceding human control to some Terminator-esque overlords in the form of machine learning models. Without the proper insights and controls in the development process, teams are likely to build the same conscious or unconscious prejudices and presumptions that influence human behavior into the capabilities of the resulting AI software. As a result, an AI-enabled cybersecurity solution is only as good as the people who develop it.[2]

    And AI bias in cybersecurity can be downright dangerous. Once cybercriminals recognize a flaw in a biased AI system, they can exploit it by tricking the automation tool into focusing on non-critical threats and overlooking real ones. An AI model built on false assumptions or bias will not only threaten a company’s security posture, it can also impact the business, as IBM Security vice president Aarti Borkar wrote.[3] And because a biased AI-powered solution may appear to work as well as an unbiased one, there are no red flags to alert the organization to its malfunctioning until it is too late.

    Thus, as cybersecurity leaders and vendors integrate more AI capabilities into their security functions, they must understand the types of bias that can be inadvertently (or even intentionally) baked into solutions and work to prevent or correct harmful predispositions.

    The Main Drivers of AI Bias

    Bias can penetrate the process at various points in an AI implementation cycle. The main drivers of potential AI bias include:

    The data. Data is the most logical place to start when talking about bias in AI.[4] When source data lacks diversity or completeness, a machine learning algorithm will still perform, but its decision-making will be skewed; a biased spam detection tool, for example, can produce false positives and block legitimate email. Some experts advise that training data for cybersecurity applications should be largely untouched and uncategorized, and note that organizations should take care when using third-party data that may not be relevant to their specific cybersecurity needs. Open source toolkits can help: Aequitas can measure bias in uploaded datasets, and Themis-ml can reduce data bias using bias-mitigation algorithms.[5]
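    A first pass at spotting that kind of skew doesn’t require a specialized toolkit. The minimal sketch below, which assumes a hypothetical spam-detection training set with illustrative sender_domain and is_spam columns, simply reports how each group of senders is represented before any model is trained:

```python
# Minimal, hypothetical sketch: check a spam-detection training set for
# representation and label skew before training. The column names
# (sender_domain, is_spam) are illustrative assumptions, not a real schema.
import pandas as pd

def report_data_skew(df: pd.DataFrame, group_col: str = "sender_domain",
                     label_col: str = "is_spam", min_share: float = 0.05) -> pd.DataFrame:
    """Summarize how each group is represented and how its labels are distributed."""
    summary = (
        df.groupby(group_col)[label_col]
          .agg(rows="count", spam_rate="mean")      # examples per group, share labeled spam
          .assign(share=lambda s: s["rows"] / len(df))
          .sort_values("share")
    )
    # Groups far below min_share give the model too few examples to learn
    # their behavior, so flag them as candidates for more data collection.
    summary["under_represented"] = summary["share"] < min_share
    return summary

# Example usage with a hypothetical training file:
# df = pd.read_csv("training_emails.csv")
# print(report_data_skew(df))
```

    Groups flagged as under-represented, or with extreme spam rates driven by a handful of examples, are exactly the kind of gaps that toolkits like Aequitas and Themis-ml are designed to measure and correct at scale.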

    The algorithm. We’ve all heard a lot about algorithmic bias: data scientists building AI models influenced by their own unconscious ideas or experiences. It’s critical that security experts work hand in hand with data scientists to design algorithms in the context of the business need.[6] There are processes organizations can put in place to inhibit the development of skewed models, such as ensuring that the team has deep domain and cybersecurity knowledge and experience, and conducting third-party code reviews. Again, there are tools that can help, like the IBM-developed AI Fairness 360, which combines a number of bias-mitigating algorithms to detect problems in machine learning models.[7]
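    As a hedged illustration of the AI Fairness 360 (aif360) workflow mentioned above, the sketch below measures disparate impact on a small labeled dataset and then applies the toolkit’s Reweighing pre-processing algorithm. The toy dataframe, its column names and the privileged/unprivileged group definitions are assumptions invented for the example, not part of any particular security product.

```python
# Illustrative sketch of an AI Fairness 360 (aif360) check: measure a fairness
# metric on training labels, then apply the Reweighing pre-processing algorithm.
# The data and group definitions below are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy training labels: are messages from external senders flagged malicious
# far more often than internal ones, regardless of other features?
df = pd.DataFrame({
    "flagged_malicious": [1, 0, 1, 1, 0, 1, 0, 1],   # label assigned in training data
    "external_sender":   [1, 1, 1, 1, 0, 1, 0, 0],   # attribute being audited
    "attachment_count":  [2, 0, 1, 3, 0, 1, 0, 2],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["flagged_malicious"],
    protected_attribute_names=["external_sender"],
)

groups = dict(unprivileged_groups=[{"external_sender": 1}],
              privileged_groups=[{"external_sender": 0}])

metric = BinaryLabelDatasetMetric(dataset, **groups)
print("Disparate impact before mitigation:", metric.disparate_impact())

# Reweighing adjusts instance weights so the labels are less correlated with
# the audited attribute; downstream training can use the adjusted weights.
dataset_fair = Reweighing(**groups).fit_transform(dataset)
metric_fair = BinaryLabelDatasetMetric(dataset_fair, **groups)
print("Disparate impact after reweighing:", metric_fair.disparate_impact())
```

    A disparate impact close to 1.0 after reweighing indicates the training labels are no longer skewed toward the audited attribute; the adjusted instance weights can then be passed to whatever classifier the team trains downstream.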

    The cyber AI team. The team developing the models and tools should have strong security experience, current understanding of the threat landscape and business knowledge — but also diverse backgrounds and mindsets. Cognitive diversity and a variety of backgrounds and experiences are essential to creating more well-rounded AI cybersecurity systems capable of understanding a wide range of behavioral patterns and threats.

    The Bottom Line

    In an era of growing cyber risk, AI is becoming invaluable to cybersecurity defenses and threat intelligence. Understanding the risks of AI bias and remaining vigilant in preventing the introduction of bias into AI-enabled security solutions will be critical in ensuring that these tools are not only functional, but impartial and effective.

    [1] “AI bias is an ongoing problem, but there's hope for a minimally biased future,” TechRepublic

    [2] “What Is Biased AI, and How Does It Apply To Cybersecurity?,” Technology.org

    [3] “AI is changing cybersecurity—but when it’s biased, it’s dangerous,” Fast Company

    [4] “Engineering Bias Out of AI,” IEEE Spectrum

    [5] Ibid.

    [6] “AI is changing cybersecurity—but when it’s biased, it’s dangerous,” Fast Company

    [7] “Engineering Bias Out of AI,” IEEE Spectrum
