Building Trust Through Action (Part 2)
My Year on the Mimecast Responsible AI Council
Key Points
- In the face of the growing use of AI on both sides of cybersecurity, Mimecast customers had many questions about how we were using AI and how we were protecting their privacy.
- Mimecast created a Responsible AI Council and later an AI Governance Committee to address these issues.
- This blog, part two of two, covers the questions CISOs should be asking their security vendors, what we would do differently in developing the committee now that we've been through the process, and what the future of the committee looks like.
In the first part of this two-part blog, I outlined why we created Mimecast's AI Governance Committee: to help answer the many questions Mimecast customers had about how we were using AI and how we were protecting their privacy. I also delved into the challenges we faced, and the stellar outcomes we achieved, in creating the committee.
Now, in the second part, let's look at some very important questions CISOs should be asking their security vendors, the lessons we learned through this entire process, what we would go back and do differently, and what the next chapter for the committee may look like.
Questions CISOs Should Ask Their AI Security Vendors
Throughout this journey, I've learned that not all AI governance is created equal. As CISOs evaluate AI-powered security solutions, they should be asking tough questions that separate genuine governance from compliance theater. Here are the questions I now ask and answer in customer conversations:
- Do you have a formal AI governance body with executive accountability? Look beyond the existence of a policy document. Who chairs it? How often does it meet? Does it have decision-making authority or is it advisory? Does leadership receive regular updates? A governance framework without teeth is just documentation.
- Which functions are represented in your AI governance structure? AI governance that lives solely in Engineering or Legal misses critical perspectives. Ask if Product, Security, Legal, Compliance, Sales, and Finance have seats at the table. If key stakeholders aren't involved in governance decisions, those decisions won't reflect real-world constraints and customer needs.
- Can you share measurable outcomes from your AI governance program? Governance without measurement is aspiration. Ask for specific metrics: reduction in risky AI usage, employee training completion rates, time-to-decision for AI tool approvals, or audit findings. If they can't quantify their governance effectiveness, they're not managing it seriously.
- How do you use your own security products to govern AI internally? This question reveals whether vendors practice what they preach. If they're selling you insider risk management or DLP solutions for AI governance but not using them internally, ask why. The best proof of a product's effectiveness is the vendor's willingness to bet their own security posture on it.
- What's your process for evaluating and approving new AI tools or capabilities? Ad-hoc evaluation processes that take months signal governance immaturity. Ask about SLAs, cross-functional review processes, and decision criteria. Organizations with mature AI governance can move quickly because they've built repeatable frameworks, not because they're cutting corners.
- Have you achieved any third-party AI governance certifications? Certifications like ISO 42001 aren't just badges. They represent external validation of your governance framework. Ask when they achieved certification, what the audit process revealed, and when they're due for recertification. One-time certifications without ongoing validation suggest governance has stagnated.
- How do you balance AI innovation with governance guardrails? This question exposes whether governance is viewed as enablement or obstruction. Organizations with mature governance can articulate how their frameworks actually accelerate safe AI adoption. If the answer focuses solely on restrictions and controls without mentioning speed or innovation enablement, governance is likely seen as a bottleneck internally.
- What AI fluency training do you provide to your employees, and what are your completion rates? AI governance fails without AI literacy. Ask about training programs, certification pathways, completion rates across different competency levels, and how they measure effectiveness. If employees don't understand AI capabilities, limitations, and risks, even the best governance policies won't prevent dangerous behaviors. Look for organizations that can show progression from fundamentals through proficiency to mastery, not just one-size-fits-all training.
- Can you demonstrate AI adoption across all departments, not just technical teams? Many organizations have pockets of AI excellence but lack enterprise-wide adoption. Ask what percentage of departments have deployed AI use cases and what the active usage rates are. Universal adoption suggests governance is enabling innovation rather than creating bottlenecks. Siloed adoption suggests governance hasn't addressed the needs of all stakeholders.
- How has your AI governance framework evolved over the past year? Static governance in a rapidly evolving technology landscape is a red flag. Ask about charter revisions, membership changes, new policies implemented, and lessons learned. Organizations learning and adapting their governance demonstrate they're taking it seriously, not just checking a box.
- Can you walk me through a recent governance decision your committee made? This question cuts through prepared talking points. Ask for a specific example: What was the decision? Which stakeholders were involved? What factors were considered? How long did it take? What was the outcome? Vendors with real governance can tell detailed stories. Those with governance theater will struggle to provide specifics.
These questions aren't just evaluation criteria. They're a framework for differentiating vendors who take AI governance seriously from those treating it as a checkbox. The vendors who can answer these questions with specificity, metrics, and real examples are the ones who've done the hard work of building governance into their organizational DNA.
Lessons Learned: What I'd Do Differently
Start Smaller, Scale Intentionally
Our initial enthusiasm led to an unwieldy group. The painful process of right-sizing taught us that governance requires both broad input and decisive action. The tension between inclusivity and effectiveness is real, and there's no perfect answer. But being explicit about decision-making authority from day one would have helped.
Measure from Day One
Our February baseline survey was enlightening, but I wish we'd started measuring earlier. You can't manage what you don't measure, and having metrics from the outset would have helped us demonstrate impact more clearly to skeptical stakeholders.
Bridge the Technical-Business Gap Constantly
The most valuable moments in our Committee meetings happen when our Data Science team translates technical constraints into business implications, or when Sales shares customer objections that inform Product decisions. Facilitating this translation requires intentional effort. Don't assume it will happen naturally.
Governance Enables Innovation
Early on, some teams viewed the Council as a potential bottleneck. Reframing governance as an enabler rather than a barrier was crucial. Our two-week vendor POC process actually accelerates adoption by providing clear guardrails and streamlining approvals.
Use Your Own Products
The decision to use Mimecast Incydr for internal AI governance wasn't just operationally smart. It forced us to experience our own product from the customer's perspective. We discovered configuration nuances, identified opportunities for improvement, and gained authentic stories to share with prospects. If you're a security vendor, govern yourself with your own tools. The credibility this creates is impossible to overstate.
Plan for the External Value
While we created the Council to solve internal governance challenges, we didn't fully anticipate how valuable it would become as an external trust signal. If I were starting over, I'd build the external communication strategy alongside the internal governance framework. The stories, metrics, and learnings from your AI governance journey are assets that can differentiate you in the market. Plan to capture and share them strategically from the beginning.
Looking Forward: The Next Chapter
As we move into our second year, we're shifting focus from foundation-building to maturity and optimization. We're evaluating agentic AI platforms, developing Model Context Protocol (MCP) strategies, and refining our measurement frameworks to capture not just time savings but concrete business outcomes.
The Responsible AI Governance Committee has evolved from a reactive response to customer concerns into a proactive driver of competitive differentiation. Our governance framework isn't just about managing risk. It's about building the trust that allows us to innovate faster than competitors who treat AI governance as an afterthought.
For organizations considering similar initiatives, my advice is simple: start with the customer problem, assemble diverse perspectives, measure relentlessly, be willing to evolve, use your own products to govern your AI adoption, and recognize that your governance journey itself is a differentiator worth sharing. AI governance isn't a destination. It's a continuous practice of building trust through action.
The questions we couldn't answer a year ago? Today, I can point to a comprehensive framework, measurable outcomes, industry-leading certification, and our own Insider Risk Management platform actively protecting our AI usage. More importantly, I can point to a cross-functional team that's making those answers meaningful in every customer conversation and customers who are choosing Mimecast because they trust how we govern AI, not just what our AI can do.