
    Reining in the Cyber Risks of Workplace AI Adoption

    AI capabilities have the potential to transform the workplace, but their unmitigated use by employees can open up organizations to a world of risk.

    by Stephanie Overby

    Key Points

    • A majority of employees have used generative AI tools at work, but only a quarter of organizations have an AI policy.
    • Unsanctioned or unsupervised use of new AI platforms can lead to unintentional data exposure and breaches.
    • There are steps companies can take to put guardrails around AI usage.

    Interest in artificial intelligence (AI) has skyrocketed over the last year as generative AI tools like ChatGPT leapt onto the scene with potential applications for a growing array of business use cases. But there’s more than just idle fascination with these capabilities: Workplace adoption of generative AI is growing rapidly. More than half (56%) of U.S. employees say they use generative AI tools on the job at least occasionally and nearly one-third (31%) use them on a regular basis, according to a recent Conference Board survey.[1]

    AI capabilities in general — and generative AI tools specifically — have the potential to transform the workplace. Productivity improvements resulting from generative AI could add the equivalent of $6.1 trillion to $7.9 trillion annually to the global economy, according to a recent report from McKinsey.[2] At the same time, though, McKinsey and many others have documented the big potential risks that generative AI brings into enterprises along with that anticipated productivity boost.

    And that duality makes it a big problem — particularly for those concerned with cybersecurity — that so much of today’s workplace AI adoption is undocumented, ungoverned, or even unknown. Only around one-quarter (26%) of respondents to the Conference Board survey said their organizations have an AI policy. Shadow AI — AI tools or systems being used or developed without organizational approval or oversight — puts organizations at risk. 

    In marketing, for example, there’s a demonstrated need for help parsing tremendous volumes of data, and AI tools can help marketers find the signal among the noise. Mimecast CMO Norman Guadagno has a unique perspective on this as someone whose team can benefit from advances in AI but who also has a front-row seat to the evolution of the cyber threat landscape. “It’s really easy for everyone to jump in and start using all sorts of AI tools without giving a second thought to the security implications and potential security leaks to their organization,” Guadagno explained on a recent episode of the CMO Insights podcast hosted by Jeff Pedowitz. “But we’re going to see bad actors use AI as a way to try to penetrate organizations.”

    That’s why Guadagno conducted a survey of the AI marketplace to understand what was out there and then developed an AI policy that requires marketing team members to get approval before using new AI tools. “Every company of any size should have a centralized approach to how it’s going to use and test AI tools,” Guadagno told Pedowitz. “If you don’t have a centralized perspective on this, including [input from] legal or compliance teams, you’re potentially putting your business at risk.”

    How Employee AI Usage Expands the Attack Surface

    There’s no denying the opportunities that AI capabilities will create in terms of productivity, innovation, and growth. The most common uses for generative AI today, according to the Conference Board survey, are drafting written content (68%), brainstorming ideas (60%), and conducting background research (50%). The McKinsey report noted that the majority of generative AI’s value will come from use cases in the areas of customer operations, marketing and sales, software engineering, and research and development.

    But what happens to the information shared in queries submitted to a public generative AI tool, for example? One major electronics manufacturer earlier this year discovered three separate instances of employees inadvertently leaking a variety of sensitive company information — a confidential business process, internal meeting notes, and source code — through the use of generative AI tools on the job.[3] Data shared with the major public generative AI tools is used for ongoing training of these large language model (LLM) platforms so they perform better over time. But do you trust your trade secrets or pricing models to a third party with no contractual obligation to protect such important data?

    That’s just one of the data protection issues that emerges when employees experiment with AI. Unbridled use of AI tools can introduce a range of potential cyber risks for companies, including:

    • Data exposure: AI systems depend on enormous volumes of data, some of it highly sensitive or personal. Without proper protections in place — data encryption, access controls, and secure data storage — a company risks exposing that data to unauthorized parties. As Neil Thacker, CISO for EMEA and Latin America at Mimecast partner Netskope, explained to ComputerWeekly, the increased use of generative AI tools in the workplace makes businesses vulnerable to serious data leaks.[4]
    • Data breaches: Cybercriminals follow the money. And money comes from high-value data. As more organizations use generative AI tools, bad actors will seek out ways to intercept any high-value data being shared via these interfaces. “Analogous to account takeover (ATO), where a hacker gains access to an online account with malicious intent, hackers seek to gain access to trained AI models to manipulate the system and access unauthorized transactions or PII [personally identifiable information],” Jackie Shoback, co-founder of a venture capital firm that invests in digital identity startups, recently wrote in Forbes. “As the complexity of AI solutions increases, more vulnerability points across models, training environments, production environments and data sets will undoubtedly proliferate.”[5]
    • Adversarial attacks: Another known threat is the intentional manipulation of AI models with bogus input. Experts have long warned of the risk of adversarial attacks designed to corrupt the models and outputs of AI systems and platforms. Even so, researchers recently revealed an exploit capable of sending all of the major generative AI platforms off the rails.[6]
    • Insider threats: As AI becomes more integrated into business operations, there are more opportunities for those inside an organization to use their access to tinker with algorithms or models for malicious purposes or monetary gain. 

    New Controls for New Threats

    The only surefire way to eliminate the cyber risks associated with AI is to ban its use. Some companies have imposed a moratorium on specific types of AI, such as generative AI, for now, but that’s probably not a sustainable solution for most. Instead, business leaders can craft an approach to business AI adoption and usage that aligns with their own risk profiles and appetites by taking the following steps:

    • Create an AI steering committee. Establishing a group with representation from IT, cybersecurity, data and analytics, and key business stakeholders is an essential first step. This committee can review the organization’s AI practices and policies, including tool usage, data sharing, and data storage and deletion parameters, and align them with the organization’s enterprise risk profile and tolerances.
    • Conduct a baseline AI risk assessment. Next, it’s important to find out what types of tools and systems have already been adopted in the organization and the specific vulnerabilities this usage could create. Company leaders can prioritize the mitigation or elimination of these risks based on a risk-reward calculation. The AI Risk Management Framework developed by the National Institute of Standards and Technology (NIST) can help company leaders think through the cybersecurity and privacy risks associated with the use of AI systems.[7]
    • Develop a company-wide policy for AI usage. While 46% of generative AI users in the Conference Board survey said their management was fully aware of their AI use, 34% said their organization had no AI policy (and another 17% didn’t know whether one existed). It’s important that companies create — and communicate — enterprise-level rules on the use of AI technologies by the workforce, including which tools are sanctioned, what data can be shared when using public tools, and what disclosures employees must make about any materials produced with the help of AI. For example, a company may prohibit the entry of PII, intellectual property, and systems code into generative AI tools and bar employees from using any such tools until they have been trained on their use and the risks involved (a minimal illustrative sketch of such a guardrail appears after this list).
    • Set cyber standards for AI tools. When considering the adoption of new AI tools or platforms, companies should fully vet the vendor’s cybersecurity controls and practices. Because AI tools ingest so much data, they are high-value targets for cybercriminals seeking to exploit their vulnerabilities. So, it’s important to know, for example, whether an AI platform is secure by design and what vulnerabilities it might have. “People are going and sharing information with AI tools without an understanding of what will happen to that information,” said Mimecast’s Guadagno. “If you’re entrusting [sensitive data] to systems you don’t actually trust, you’re putting yourself at risk. If you’re logging into systems and you don’t know what the login protocols are, you’re potentially opening your organization to malicious attacks.”
    • Communicate and educate. Companies should be explicit in sharing their policies regarding AI use in the workplace with all employees (and contractors) and educate them about the associated cyber risks. Integrating the subject into regular cybersecurity awareness training modules ensures that everyone remains up to speed on emerging threats and best practices. 
    • Monitor access and user behavior. As ever, CISOs should enforce access controls and look out for anomalies in user behavior to minimize the risks of insider threats.
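
    For organizations that want to back such a policy with a technical guardrail, the sketch below illustrates one possible approach: a lightweight check that scans a prompt for obvious PII and credential patterns before it is submitted to a public generative AI tool. The patterns and the check_prompt/is_prompt_safe helpers are illustrative assumptions rather than any specific product’s API; a production control would typically rely on a vetted data loss prevention (DLP) solution rather than a handful of regular expressions.

        import re

        # Illustrative, hypothetical patterns only -- a production control would use a
        # vetted DLP engine, not a handful of regular expressions.
        SENSITIVE_PATTERNS = {
            "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
            "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
            "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
            "API key or secret": re.compile(r"(?:api[_-]?key|secret|token)\s*[:=]\s*\S+", re.IGNORECASE),
        }

        def check_prompt(prompt: str) -> list[str]:
            """Return the labels of any sensitive-data patterns found in the prompt."""
            return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

        def is_prompt_safe(prompt: str) -> bool:
            """Report findings and block submission if anything sensitive was detected."""
            findings = check_prompt(prompt)
            for label in findings:
                print(f"Blocked: prompt appears to contain a {label}.")
            return not findings

        if __name__ == "__main__":
            # This prompt would be stopped before it ever reaches a public AI service.
            prompt = "Draft a renewal email for jane.doe@example.com, SSN 123-45-6789."
            if is_prompt_safe(prompt):
                print("Prompt cleared for submission.")

    A check like this only catches the obvious cases; it is meant to complement, not replace, the policy, training, and vendor vetting described above.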

    The Bottom Line

    There is extraordinary potential for companies to harness generative AI to boost productivity, but that potential is not risk-free. Businesses must proactively assess and address the risks that come along with advanced AI. Armed with greater clarity, an organization and its workforce can more confidently adopt these new capabilities while maintaining the security of company and customer data. Read more about how Mimecast is using different types of artificial intelligence in its own cybersecurity solutions.



    [1] “Majority of US Workers Are Already Using Generative AI Tools—But Company Policies Trail Behind,” The Conference Board

    [2] “The economic potential of generative AI: The next productivity frontier,” McKinsey Digital

    [3] “Samsung bans use of generative AI tools like ChatGPT after April internal data leak,” TechCrunch

    [4] “ChatGPT is creating a legal and compliance headache for business,” ComputerWeekly

    [5] “Managing Privacy And Cybersecurity Risks In An AI Business World,” Forbes

    [6] “A New Attack Impacts Major AI Chatbots—and No One Knows How to Stop It,” Wired

    [7] “AI Risk Management Framework,” National Institute of Standards and Technology
