Is the security efficacy of secure email gateways basically the same?
This is a question email administrators and security professionals have asked for years. Many of them presume the answer is “yes.” How do they come to this conclusion?
Without rigorous, independent testing, this view has generally rested on personal experience, anecdotes, word-of-mouth, or simple assumption. None of these are very scientific methods for determining anything! But what can you do without a proper comparative test?
Mimecast has been very aware of this efficacy conundrum for secure email gateways (SEGs) for a long time. How does anyone, including Mimecast, know if a security service is effective, either absolutely or relatively? For the past 18 months we have been pushing on this issue through the execution, analysis, and publishing of our quarterly ESRA reports and infographics.
What are Email Security Risk Assessments?
Email Security Risk Assessments (ESRAs) are security tests run with organizations whose incumbent email security system is not Mimecast. In an ESRA, the Mimecast service passively reinspects mail that the incumbent system has already inspected and delivered. The Mimecast analysis then characterizes each delivered email as clean, spam, carrying malicious files, or a risky impersonation email.
The ESRA tests have shed a lot of light on this conundrum from the point of view of the Mimecast service, but they are certainly not as definitive as a test that compares multiple SEGs against the same set of email-borne threats at the same time.
This type of simultaneous, comparative test is exactly what UK-based SE Labs has conducted and recently released! In my opinion, it is as close to a definitive test of SEG efficacy as one can devise. While there is no such thing as a perfect test, this one closely simulated reality: it used a range of email-borne threats, including targeted attacks and widely seen public attacks, and it incorporated many legitimate messages.
I encourage you to read the report and consider whether you agree with the scoring system (such as how it weighs false positives against false negatives) and other aspects of the testing program.
While I can quibble (and I did) with some aspects of the test, overall I think it very fairly answers the question posed at the start of this blog: “Is the security efficacy of secure email gateways basically the same?” The answer: no, there is wide variance!
To learn more about how to combat email-borne threats like this and others, connect with us at Black Hat USA 2018 in Las Vegas this week. Here’s how.