What if machine learning tracked every interaction a person experienced and allowed “peers” to rate individuals on each one? Those ratings would then be averaged into a score that ultimately decides the subject’s trust level. That’s the premise of “Nosedive,” the season-three premiere of Netflix’s Black Mirror.
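The premise reduces to simple arithmetic: every peer rating feeds a running average that becomes the subject’s trust level. A minimal sketch, assuming a 1.0–5.0 star scale (`TrustProfile` and its methods are illustrative names, not anything from the episode or any real system):

```python
# Hypothetical "Nosedive"-style scoring: each interaction yields a peer
# rating, and the plain average of all ratings is the trust score.
from dataclasses import dataclass, field

@dataclass
class TrustProfile:
    ratings: list = field(default_factory=list)

    def rate(self, stars: float) -> None:
        """Record one peer rating (assumed 1.0-5.0 stars)."""
        self.ratings.append(stars)

    @property
    def score(self) -> float:
        """Trust level: the average of all ratings so far (0.0 if none)."""
        if not self.ratings:
            return 0.0
        return sum(self.ratings) / len(self.ratings)

profile = TrustProfile()
for stars in (5.0, 4.0, 3.0):
    profile.rate(stars)
print(profile.score)  # 4.0
```

A real deployment would of course weight raters, decay old ratings, and gate privileges on thresholds; the unweighted mean is only the core idea.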

    “Nosedive is already upon us,” said Taylor Lehmann, founder and partner at SideChannel and Cyber Resilience Think Tank member.

    In October, Lehmann and a dozen other members of the Cyber Resilience Think Tank, an independent group of cybersecurity leaders sponsored by Mimecast, met to discuss artificial intelligence (AI) and machine learning’s impact on cybersecurity, including pondering the question:

    How do you balance the trade-off between privacy and security?

    Lehmann was right – that episode’s portrayal of AI bias is all too familiar. In June, YouTube was sued for discrimination on the basis that “AI, algorithms and other sophisticated filtering tools profile and target users for censorship based ‘wholly or in part’ on race.”


    The Chinese government has gone a step further, creating a nationwide social credit system. The system, established in late 2013, gives businesses a “unified social credit code” and citizens an “identity number.” Citizens’ (and organizations’) behavior is then monitored in all aspects of life.

    In the case of journalist Liu Hu, arrests and fines levied as a result of his work caused the system to deem him “not qualified” to purchase a plane ticket or buy property.

    “There was no file, no police warrant, no official advance notification,” Hu told The Globe and Mail. “They just cut me off from the things I was once entitled to.”



    Then there’s the stigma. The primeval threat.

    “There is this misunderstanding that AI is a rogue element,” said Peter Tran, head of cyber and product security solutions at InferSight, “like having an anonymous cyber-cop in your system.” Cue “Black Mirror” theme.

    So, while “Nosedive” hasn’t totally manifested, AI and machine learning are here. And CISOs need to consider machine learning strategies, along with the risk that accompanies those strategies, whether they’re using a vendor to manage them or bringing the “robot” in-house.

    From a cybersecurity standpoint, there are strong benefits in following the Cyber Resilience Think Tank’s recommendations, provided your organization can successfully integrate AI and machine learning capabilities with its broader systems to achieve ROI.

    As with any new technology, it becomes the responsibility of the user – and, in an organization, of tech and security leaders – to maintain a standard of privacy that stands in balance with security benefits and organizational gains. It’s ultimately those leaders’ job to ensure that, for an organization’s customers, employees, partners and other contacts, AI “does no harm.”
