
    Building Trust Through Action (Part 1)

    My Year on the Mimecast Responsible AI Council 

    by Brandon Reid

    Key Points

    • With attackers and defenders alike making growing use of AI, Mimecast customers had many questions about how we were using AI and how we were protecting their privacy.
    • Mimecast created a Responsible AI Council and later an AI Governance Committee to address these issues.
    • This blog post, the first of two, details the benefits and challenges of properly addressing customer concerns about AI.

    The Customer Problem That Started It All

    A year ago, Mimecast faced a challenge that many organizations grappling with AI adoption encounter: our customers and prospects were asking pointed questions we couldn't answer consistently. They wanted to understand not just what our AI and ML features could do, but how we were protecting their privacy, ensuring compliance, and maintaining the trust they'd placed in us with their most sensitive data: their email communications.

    These weren't theoretical concerns. In sales conversations and customer meetings, I was hearing CISOs and security leaders ask: "How does your AI handle our data? Can we opt out? What measures do you have in place?" We had good answers scattered across different teams, but no unified framework or governance structure to point to. For an organization like Mimecast, where trust is our currency, this gap represented both a risk and an opportunity.

    From Problem to Purpose: Creating the Council

    In October 2024, Mimecast established the Responsible AI Council with a clear mission: create and uphold standards for the secure and ethical development and deployment of AI technologies across our services. But the charter wasn't just about checking compliance boxes. It was about turning AI governance into a strategic differentiator.

    The original Council brought together representatives from AI Data Science, Engineering & Architecture, Product, Marketing, Legal, GCO, Security, GTM, and Sales Enablement. The cross-functional composition was deliberate. AI governance isn't a technical problem or a legal problem or a sales problem. It's all of those simultaneously, and solving it requires perspectives from every angle.

    The Evolution: From Council to Committee

    One of our most important lessons came early: bigger isn't always better. The Council initially grew organically as teams recognized the importance of AI governance, but we quickly found ourselves with too many members and inconsistent attendance from certain groups. This created a paradox. The more people we included to ensure comprehensive input, the harder it became to make decisions and drive action.

    By October 2025, we had refined the structure significantly. The Responsible AI Council evolved into the AI Governance Committee, with a more focused membership and clearer accountability. We moved from monthly ELT updates to a quarterly cadence, signaling maturity in our processes. Most importantly, we expanded our scope beyond AI in our Services to include AI use throughout the entire organization. We recognized that we needed to practice what we preached.

    The charter evolution tells the story: we moved from "facilitating responsible AI in Services" to "executing strategic direction and acting as an ultimate approval body for responsible AI across Mimecast." This wasn't just semantic. It reflected a fundamental shift in how we thought about AI governance, from reactive oversight to proactive leadership.

    Cross-Functional Impact: Where the Magic Happens

    The cross-functional nature of the Committee has been transformative in ways I didn't initially anticipate. Some examples:

    The Vendor POC Process

    When teams wanted to trial new AI tools, we used to have ad-hoc evaluations that could drag on for months. Now we have a two-week SLA process that brings together Procurement, Legal, Security, IT, and AI teams. We've evaluated multiple AI coding assistants, documentation tools, sales enablement platforms, and enterprise AI solutions using this framework. It's fast, thorough, and ensures we're making decisions that align with both innovation goals and governance requirements.
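
    To make that cadence concrete, here is a minimal Python sketch of how a two-week review pipeline like ours could be modeled. The stage names, owners, and day allocations are illustrative assumptions for this post, not our actual workflow definitions.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical model of a two-week AI vendor POC review. Stage names,
# owners, and day budgets are assumptions, not Mimecast's real workflow.

@dataclass
class ReviewStage:
    name: str
    owner: str        # accountable team
    sla_days: int     # business days allotted to this stage
    approved: bool = False

@dataclass
class VendorPOC:
    vendor: str
    requested: date
    stages: list[ReviewStage] = field(default_factory=lambda: [
        ReviewStage("Procurement intake", "Procurement", 1),
        ReviewStage("Legal & privacy review", "Legal", 3),
        ReviewStage("Security assessment", "Security", 3),
        ReviewStage("IT integration check", "IT", 1),
        ReviewStage("AI governance sign-off", "AI team", 2),
    ])

    @property
    def deadline(self) -> date:
        # Two-week SLA: the stage budgets must fit in 10 business days.
        assert sum(s.sla_days for s in self.stages) <= 10
        return self.requested + timedelta(days=14)

    def is_approved(self) -> bool:
        return all(s.approved for s in self.stages)

poc = VendorPOC(vendor="ExampleAICodingAssistant", requested=date(2025, 3, 3))
print(poc.deadline)  # decision due within the two-week SLA
```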

    Practicing What We Preach: Using Mimecast Incydr for AI Governance

    One of the most powerful realizations from our cross-functional discussions came when our Security team proposed using our own Insider Risk Management solution, Mimecast Incydr, to monitor and govern AI tool usage across the organization. This wasn't just about dogfooding our own product. It was about applying the same rigor to our internal AI governance that we recommend to our customers.

    Mimecast Incydr gave us visibility into employee interactions with AI tools that we simply didn't have before. We configured it to detect when employees were uploading sensitive data, code, or customer information to unsanctioned AI platforms. But rather than simply blocking these activities, we took a more nuanced approach that reflected our culture of trust and enablement.
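
    Incydr itself is configured through its own console, so purely as a hypothetical illustration of the kind of rule logic involved, here is a Python sketch. The domain lists, event fields, and sensitivity markers are invented stand-ins, not Incydr's actual API or our production policy.

```python
# Hypothetical triage logic for file-upload events seen on an endpoint.
# All domains and markers below are invented for illustration.

SANCTIONED_AI_DOMAINS = {"approved-ai.example.com"}         # assumption
UNSANCTIONED_AI_DOMAINS = {"chat.unvetted-ai.example.com"}  # assumption
SENSITIVE_MARKERS = ("customer_", "api_key", "BEGIN RSA PRIVATE KEY")

def classify_upload(destination: str, filename: str, content_sample: str) -> str:
    """Return a triage verdict for a single upload event."""
    sensitive = filename.startswith(SENSITIVE_MARKERS) or any(
        marker in content_sample for marker in SENSITIVE_MARKERS
    )
    if destination in UNSANCTIONED_AI_DOMAINS:
        return "alert: unsanctioned AI destination"
    if sensitive and destination in SANCTIONED_AI_DOMAINS:
        return "alert: sensitive data sent to an approved AI tool"
    return "allow"

print(classify_upload("chat.unvetted-ai.example.com", "notes.txt", "hello"))
```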

    We implemented what we call "AI nudges": real-time interventions that educate employees about risk without immediately blocking their work. When Incydr detects someone attempting to use an unsanctioned AI tool or uploading sensitive information to an approved tool, it provides immediate feedback about why this creates risk and directs them to approved alternatives. The results exceeded our expectations: a 31% reduction in unsanctioned AI usage incidents in a single month, from 117 to 81 incidents month-over-month.
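
    Building on the classification sketch above, here is a small, hypothetical version of that nudge flow. The alternatives mapping and the message wording are assumptions for illustration.

```python
# Hypothetical nudge flow: educate in real time rather than block.
# The alternatives mapping and message text are assumptions.

APPROVED_ALTERNATIVES = {
    "code assistance": "the sanctioned AI coding assistant",  # assumption
}

def nudge(user: str, verdict: str, task: str) -> None:
    """Surface an educational prompt when a risky AI action is detected."""
    if not verdict.startswith("alert"):
        return  # nothing risky detected; stay out of the way
    reason = verdict.split(": ", 1)[1]
    alternative = APPROVED_ALTERNATIVES.get(task, "an approved tool from the AI catalog")
    print(
        f"[nudge -> {user}] This action creates risk ({reason}). "
        f"Consider {alternative} instead. Your work has not been blocked."
    )

nudge("alex", "alert: unsanctioned AI destination", "code assistance")
```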

    What makes this approach powerful is that it's not punitive. We're not creating a culture of surveillance. We're creating a culture of informed decision-making. Employees understand why certain AI tools are approved and others aren't. They see the governance framework as protecting both them and our customers, not restricting their ability to innovate.

    We're now implementing "speedbumping", a more sophisticated layer where Incydr can pause potentially risky actions and require explicit acknowledgment of the risk before proceeding. This preserves employee autonomy while ensuring that high-risk actions are taken with full awareness of the implications. It's the digital equivalent of a surgeon's timeout before a procedure: a moment to verify that everyone understands what's about to happen and why.
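
    A minimal sketch of that speedbump pattern follows; the prompt wording and acknowledgment phrase are assumptions, not the product's actual behavior.

```python
# Hypothetical speedbump: pause a high-risk action until the user
# explicitly acknowledges the risk before it proceeds.

def speedbump(user: str, action: str, risk: str) -> bool:
    """Return True only if the user knowingly accepts the risk."""
    print(f"[speedbump] Paused '{action}': {risk}")
    answer = input(f"{user}, type 'I accept the risk' to continue: ")
    accepted = answer.strip().lower() == "i accept the risk"
    if accepted:
        print(f"[audit] {user} acknowledged the risk and proceeded.")
    else:
        print(f"[audit] {user} backed out; '{action}' remains paused.")
    return accepted
```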

    The use of our own Insider Risk Management technology for AI governance has become a compelling proof point in customer conversations. When CISOs ask how we manage AI risk internally, I can walk them through our exact implementation: the policies we've configured in Incydr, the balance we've struck between security and productivity, and the measurable outcomes we've achieved. This isn't theoretical guidance. It's a working example of how to govern AI at scale while maintaining a culture of innovation.

    Customer-Facing Transparency

    The combination of Legal, Product, and GTM perspectives led us to develop internal FAQs for AI use in select Services and create a clear process for handling customer opt-out requests. Our Marketing team worked with Legal to update our Trust Center with detailed information about our use of Generative AI in product development. These weren't mandated by regulation. They were driven by our collective understanding of what customers needed to maintain confidence in our solutions.

    Measuring What Matters: Our Wins

    In February 2025, we conducted a baseline AI knowledge survey. The results became our north star for measuring progress:

    AI Fluency: From Awareness to Mastery

    We discovered that 88% of employees were familiar with AI tools, but adoption was uneven and competency varied widely. We launched a comprehensive, multi-tiered training program with clear certification pathways that recognized different levels of expertise. The results have been remarkable:

    • 72% of Mimecast employees are now certified in GenAI Fundamentals, establishing baseline literacy across the organization
    • 30% have achieved Proficient certification, demonstrating advanced capability to leverage AI for complex workflows
    • 20% have reached Mastery level, capable of building sophisticated AI workflows and mentoring others
    • Over 400 new hires have completed AI induction as part of onboarding, ensuring AI fluency is embedded from day one

    What makes these numbers particularly meaningful is their progression. We're not just training people to check a box. We're building a ladder of competency that allows employees to grow their AI capabilities over time. The 20% at Mastery level are becoming internal champions who develop use cases, mentor colleagues, and push the boundaries of what's possible with our sanctioned AI tools.

    Universal Adoption Across All Departments

    Perhaps our most significant achievement has been true organizational buy-in. We now have:

    • 95% of employees actively using GenAI tools, representing nearly universal adoption
    • 92% active users on a trailing three-month basis, demonstrating sustained, habitual usage rather than just initial experimentation
    • 100% department participation, with every single department having developed and deployed AI use cases

    That last statistic is particularly telling. AI adoption isn't concentrated in Engineering or Product. It's genuinely enterprise-wide. Finance is using AI for analysis and financial modeling. HR is leveraging it for candidate screening and employee communications. Sales is using it for proposal generation and competitive intelligence. Marketing is applying it to content creation and campaign optimization.

    This universal adoption validates our governance approach. By providing clear guardrails, approved tools, and comprehensive training, we've made it safe and easy for every department to innovate with AI. The governance framework didn't slow adoption. It enabled it.

    Productivity Gains at Scale

    Our 92% sustained active usage rate is translating to three to 10 hours per week in productivity gains per user. We're tracking toward approximately 100,000 hours saved through GenAI use by year-end. These aren't just efficiency metrics. They represent real capacity freed up for higher-value work. When you multiply those hours across an organization with 95% adoption, the compound effect on productivity is substantial.
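
    As a back-of-envelope illustration of how those savings compound: the headcount and weeks below are hypothetical assumptions, while the adoption rate and the low end of the hours range come from the metrics above.

```python
# Back-of-envelope check on how per-user savings compound.
employees = 1_400        # assumption, for illustration only
adoption = 0.95          # reported active-usage rate
hours_per_week = 3       # low end of the reported 3-to-10-hour range
weeks = 25               # assumption, for illustration only

total_hours = employees * adoption * hours_per_week * weeks
print(f"~{total_hours:,.0f} hours")  # ~99,750 with these assumptions
```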

    The Sanctioned AI Tool Win

    Our most tangible accomplishment was securing approval for enterprise-wide GenAI tools with proper governance frameworks. The measurable increase in sanctioned tool usage (95% active usage) paired with the dramatic decrease in unsanctioned tools (31% reduction in incidents) demonstrates that employees want to innovate responsibly. They just need clear paths to do so.

    Industry Recognition

    In January 2025, Mimecast became the first cybersecurity company to achieve ISO 42001 certification for AI Management Systems. This wasn't just a badge to display. It validated our framework and gave our sales teams concrete evidence of our commitment to responsible AI governance. We're currently undergoing audit and recertification, demonstrating that this is an ongoing commitment, not a one-time achievement.

    The Council as a Trust Builder

    Beyond the internal operational improvements, one of the most significant outcomes has been the impact on customer conversations. The mere existence of the AI Governance Committee, and our willingness to discuss its work transparently, has become a powerful talking point that helps customers feel comfortable partnering with Mimecast and adopting our AI-powered solutions.

    In customer meetings, when prospects express concerns about AI governance, I can now describe our Committee's composition, charter, and decision-making process. I can walk them through our ISO 42001 certification journey, explain how we use our own Insider Risk Management platform to govern AI internally, and share metrics that demonstrate our commitment: 72% fundamentals certification, 20% at mastery level, 100% department participation, and 95% active tool usage. This isn't marketing collateral. It's a genuine operational framework that customers can evaluate and trust.

    The transparency itself is the differentiator. When customers see that we've invested in cross-functional governance, that we're willing to discuss both our successes and our lessons learned, and that we hold ourselves to the same standards we recommend for them, it fundamentally changes the conversation. We're not just vendors selling AI-powered security solutions. We're partners who understand the governance challenges because we're solving them ourselves.

    This trust-building effect has been particularly valuable with highly regulated industries like financial services and healthcare, where AI governance isn't optional. It's a prerequisite for adoption. Being able to point to our formal governance structure, our measurable outcomes, and our industry-leading certification gives these risk-conscious organizations the confidence they need to move forward with our AI-powered solutions.

    Stay Tuned for Part 2

    In this first part, I've given you the history behind why we created the entity that would eventually become Mimecast's AI Governance Committee, the challenges we addressed, and some of our stellar outcomes. In the second part, we'll delve into the questions CISOs should ask their AI security vendors, the lessons we learned and what I'd do differently, and a look forward to the next chapter.

    Subscribe to Cyber Resilience Insights for more articles like these.

    Get all the latest cybersecurity news and industry analysis delivered straight to your inbox.
