If there’s one thing we can be sure about, it’s that, at some point in the future, almost nobody will manage mailboxes on premises.  The dominant players look set to be Microsoft with Office 365 and Google with Google Apps, though of course others may emerge.

Not surprisingly, then, pretty much every CIO in the world has taken a look at these platforms and adopted a stance.  The stance may involve proactive planning now with a rapid migration in mind, or it might be a case of keeping things as they are until the technology matures further.  Or there might be any number of interim steps that will make a migration easier at some point in the future.  I would wager that there is no CIO that hasn’t started thinking about migrating email, in its entirety, to the cloud.

The Road to Office 365 – It’s Not ‘If’ but ‘When’ and ‘How’

For the last few years Mimecast has positioned itself as a companion technology to Microsoft Exchange, optimizing our cloud services to deliver maximum value to on-premises or hosted Exchange customers.  And now, of course, we’re also providing services for Office 365 customers, in both cloud-only and hybrid environments.  Of our 9,000 or so customers, almost all of whom are on some form of Exchange, we are seeing a growing number using Mimecast and Office 365 together.  With Office 365, we support very clear use cases that address specific customer needs that can’t be met by Office 365 on its own.  It could be a particular compliance or eDiscovery need, or a desire for a ‘cloud-on-cloud’ High Availability solution to protect against downtime.

Office 365 may be the eventual destination for most businesses, but that doesn’t mean there is a crazy rush to migrate there or indeed that it’s the only short- to mid-term option.  For example, we’re seeing the Managed Service Provider (MSP) market booming, as smaller businesses offload their Exchange infrastructures and move to hosted Exchange suppliers.  At the other end of the scale, Exchange 2013 is an attractive option for companies who want to keep their mailboxes on-site.  And we’re seeing a fair amount of hybrid deployment, with IT moving a subset of users to the cloud, while an independent archive like Mimecast’s gives them the flexibility to toggle mailboxes back and forth between on premises and cloud as they see fit.

But let’s not kid ourselves.  These are all interim measures, albeit interim measures that will be very profitable for those organizations operating in the space for some years to come.

The point, I guess, is that we’re all preparing for an Office 365 world.  At Mimecast, we are building out and optimizing our Office 365-specific portfolio so the use cases are crystal clear.  It’s not simply a question of offering alternative tools to those that Microsoft includes with its Office 365 SKUs, but showing how we offer additional layers of functionality that support specific customer needs.  That way, over time, we actually see ourselves becoming an accelerator, or enabler for Office 365 adoption, since we effectively remove short-term barriers to adoption.

Naturally, Microsoft is working hard to add functionality of its own and make Office 365 as robust and feature rich as possible.  Many of the ‘gaps’ that Michael Osterman calls out in his paper, Office 365 for the Enterprise: How to Strengthen Security, Compliance and Control, will be filled by Microsoft over the coming years.  So does that mean third parties will find it hard to build businesses within this ecosystem?  No.  In fact, as the platform matures, more use cases will emerge just as happened with Exchange many years ago.

Microsoft will certainly want to make sure that the common elements of customer need are properly served by Office 365 off the shelf, but this is a company, unlike Google, that has always been committed to its partners, and to the creation of a vibrant community of ISVs around its core platforms. Office 365 will be no different, and there will be plenty of room for third parties who can help customers not only see over the short-term hurdles, but enjoy a first-class, zero-compromise cloud experience in the longer term.


Microsoft has changed the way Offline Address Book (OAB) distribution works compared with previous versions of the product, removing a single point of failure in the Exchange 2007/2010 OAB generation design.  While this new method of generating and distributing the Offline Address Book has its advantages, it also has a disadvantage which can result in a breach of privacy, especially in multi-tenant environments.  In this article we will look at how OAB generation worked in the past as opposed to how it works now, highlighting both the good and the bad.

Back in May 2009, I published an article entitled “How OAB Distribution Works” which has received a large number of visits and can be found on my personal blog.  That article explains in detail the process behind OAB generation in Exchange 2007 and 2010, and I highly recommend it to anyone who is not familiar with OAB generation in previous releases of the product.

If you have not read the above article, let’s quickly summarise.  In Exchange 2007/2010, every OAB has a mailbox server responsible for OAB generation.  That mailbox server generates the OAB according to a schedule and places it on an SMB share under \\mailboxservername\ExchangeOAB.  The Exchange 2007/2010 CAS servers responsible for distributing this Offline Address Book then download the OAB from this share to a folder advertised through Internet Information Services (IIS).  Outlook clients discover the path of the IIS website through Autodiscover and download the files located under the OAB IIS folder over HTTP or HTTPS.  If you need to gain a more in-depth understanding of this process, again I encourage you to read the blog post above.
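The discovery step above can be sketched in a few lines: Outlook asks Autodiscover for the OAB virtual directory URL, then fetches the manifest (oab.xml) and the data files it lists.  The snippet below is only an illustration of that flow, not Exchange code – the XML is heavily simplified (the real Autodiscover response is namespaced and much larger) and the hostname and OAB GUID are hypothetical.

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative Autodiscover response. The real payload is
# namespaced and carries many more elements; hostname and GUID are made up.
autodiscover_response = """
<Autodiscover>
  <Response>
    <Protocol>
      <Type>EXCH</Type>
      <OABUrl>https://mail.contoso.local/OAB/aa111111-2222-3333-4444-555566667777/</OABUrl>
    </Protocol>
  </Response>
</Autodiscover>
"""

def oab_manifest_url(autodiscover_xml: str) -> str:
    """Extract the OAB virtual directory URL and point at the manifest.

    Outlook first fetches oab.xml (the manifest), then downloads the
    data files listed in it over HTTP or HTTPS.
    """
    root = ET.fromstring(autodiscover_xml)
    oab_url = root.findtext(".//OABUrl")
    if oab_url is None:
        raise ValueError("no OABUrl in autodiscover response")
    return oab_url.rstrip("/") + "/oab.xml"

print(oab_manifest_url(autodiscover_response))
```

The same two-step shape (discover the virtual directory, then pull files over HTTP/S) applies whether the CAS is serving the files itself, as in 2007/2010, or proxying the request, as we will see for 2013.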

Now the problem with the above design is that every OAB has one mailbox server hard-coded as the server responsible for performing OAB generation, and this presents a single point of failure.  The whole point of Exchange Database Availability Groups is to allow mailbox servers to fail and have their databases fail over to other mailbox servers that are members of the same Database Availability Group.  Yet in the event the server responsible for generating the OAB were to fail, the OAB generation process would not fail over to another server, as the OAB is hard-coded to use that specific mailbox server as the OAB generation server.  This means that until an administrator brings back the mailbox server which failed, or moves the OAB generation process for the specific OAB to another mailbox server, the OAB in question will never get updated.

To fix this in the development of Exchange 2013, Microsoft needed a method to allow any mailbox server to fail without disrupting the OAB generation process; after all, this was the whole idea behind Database Availability Groups – the ability to allow mailbox servers to fail.  Instead of spending development time putting together a failover technology around OAB generation, Microsoft decided to incorporate the OAB generation process into Database Availability Groups.  This means that instead of having one mailbox server generate the OAB and share it out via SMB, the Exchange 2013 server hosting the active mailbox database containing the Organization Mailbox is now the server responsible for generating the OAB.  In fact, in Exchange 2013 the OAB is now stored in an Organization Mailbox, so in the event a mailbox server fails or a database failover occurs, the OAB moves along with it.  This architecture change has removed the OAB generation single point of failure which caused problems for organisations in previous releases of the product.

Whilst Microsoft removed the single point of failure from the generation process of the OAB, they introduced a problem with the distribution process.  In previous releases there was a service running on CAS servers known as the Exchange File Distribution Service, which downloaded a copy of the OABs from the various mailbox servers performing the OAB generation task and placed them in a web folder available for clients to download.  This allowed companies running multiple OABs to set NTFS permissions on the OAB folders to restrict who was allowed to download each OAB.  This is especially useful in Exchange multi-tenant environments, to ensure each tenant is only allowed to download the address book applicable to their organisation.

In Exchange 2013, the Exchange File Distribution Service has been removed from Client Access Servers, and the Exchange 2013 CAS now proxies any OAB download request to the Exchange 2013 mailbox server holding the active Organization Mailbox containing the requested OAB. The Exchange 2013 CAS finds out which mailbox server this is by sending a query to Active Manager.  As the Exchange 2013 CAS no longer stores each OAB in a folder under the IIS OAB directory, companies can no longer set NTFS permissions on those folders to restrict who has permission to download each respective OAB. It is also important to note that there is no means provided for organisations to lock down, through access control lists, who can download the OAB inside each Organization Mailbox.  This introduces privacy issues for companies who offer hosted Exchange services.  Someone who knew what they were doing and had a mailbox within the Exchange environment could download OABs belonging to other organisations and, as a result, gather a full list of employee contacts for data-mining purposes.  Microsoft’s response to this threat, documented in the multi-tenant guidance for Exchange 2013, is for hosting companies to “monitor the OAB download traffic” – in other words, there is no real solution to prevent this from happening.
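To make concrete what was lost, the NTFS permissions on the 2007/2010 CAS OAB folders effectively gave hosters a per-tenant access check like the one sketched below.  This is purely a conceptual model – all names are hypothetical and none of this is Exchange code – but it is exactly the kind of gate the 2013 proxy-to-mailbox design no longer offers a place to enforce.

```python
# Conceptual model of the per-tenant access check that NTFS permissions
# on the CAS OAB folders effectively provided in Exchange 2007/2010.
# Tenant and OAB identifiers are hypothetical.
TENANT_OABS = {
    "fabrikam": {"oab-fabrikam"},
    "contoso": {"oab-contoso"},
}

def may_download(tenant: str, oab_id: str) -> bool:
    """Allow a download only if the requested OAB belongs to the caller's tenant."""
    return oab_id in TENANT_OABS.get(tenant, set())

# A Fabrikam user can fetch their own OAB but not Contoso's:
print(may_download("fabrikam", "oab-fabrikam"))  # True
print(may_download("fabrikam", "oab-contoso"))   # False
```

In Exchange 2013 there is simply no equivalent checkpoint between an authenticated mailbox user and another tenant’s OAB, which is the privacy gap the article describes.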

For more information about the Exchange 2013 OAB distribution process I strongly recommend the following article published by the Exchange Product Team.  

Clint Boessen is a Microsoft Exchange MVP located in Perth, Western Australia. Boessen has over 10 years of experience designing, implementing and maintaining Microsoft Exchange Server for a wide range of customers including small- to medium-sized businesses, government, and also enterprise and carrier-grade environments. Boessen works for Avantgarde Technologies Pty Ltd, an IT consulting company specializing in Microsoft technologies. He also maintains a personal blog which can be found at


In chapter one of “Microsoft Exchange 2013: Design, Deploy and Deliver an Enterprise Messaging Solution”, we talk about constraints that may be forced upon us when designing Exchange. One of these constraints may be that we must use either existing hardware or the incumbent virtualization solution. Existing hardware can be a bear of an issue: if the sizing doesn’t fit the hardware, then you don’t really have an Exchange deployment project anymore.

However, virtualization carries with it the promise of overcommitting memory, disk and CPU resources – features deployed by most customers taking advantage of virtualization technologies. Note that overcommitting anything in your virtualization platform when deploying Exchange is not only a bad idea, it’s an outage waiting to happen.

Virtualization is not free when it comes to the conversion of physical hardware to emulated virtual hardware. The figures vary between vendors; however, you may be looking at a net loss in the range of 5-12 percent across the entire guest’s performance. Coming back to constraints, let us assume your customer – or your company – requires you to virtualize and to use VMware as the chosen hypervisor.
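To see what that 5-12 percent tax means for sizing, here is a small back-of-the-envelope sketch. The overhead figure is an assumption that varies by vendor and workload, so treat the numbers as illustrative rather than a sizing formula.

```python
import math

def physical_cores_needed(sized_cores: int, overhead: float) -> int:
    """Cores to budget so that, after the hypervisor's overhead,
    the guest still gets the compute the sizing exercise called for.

    overhead is the fractional performance loss, e.g. 0.10 for 10%.
    """
    return math.ceil(sized_cores / (1.0 - overhead))

# Sizing said four cores; at a 10% virtualization tax you should
# budget five physical cores' worth of compute for the guest.
print(physical_cores_needed(4, 0.10))  # 5
```

The point is not the exact arithmetic but the direction: virtualization makes the hardware requirement slightly bigger, not smaller, before any overcommitment is even considered.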

Once you’ve taken into account that you’re virtualizing, you then need to size your guest as if you were sizing its real-world equivalent. Let’s assume, for argument’s sake, that you end up requiring four cores per server, but you allocated eight, since more cores never hurt anyone, right?

You read the prevailing guidance carefully, so you decide to use an existing blade with eight cores and allocate another eight cores to Exchange, bearing in mind that you’ve already allocated eight cores to two other applications on the same blade. No fuss, you may think: the guidance states that you should allocate no more than two virtual cores per physical core. Since you’re a conscientious SysAdmin, you’ve benchmarked CPU usage on the VMware host and decided that the values are acceptable.

Now it turns out that, for some reason, Exchange seems to run non-optimally. You decide to move Exchange to another blade with more CPUs and to double the core count within the guest from eight to 16 vCPUs, since more CPUs never hurt anyone, right?

Turns out that the expectation for a linear increase in performance is not fulfilled…where do you turn to next?

A good place to start might have been the vendor’s specific guidance pertaining to Exchange; in this case VMware supplies the Exchange 2010 best practices guide, which states:

Consequently, VMware recommends the following practices:

  • Only allocate multiple vCPUs to a virtual machine if the anticipated Exchange workload can truly take advantage of all the vCPUs.
  • If the exact workload is not known, size the virtual machine with a smaller number of vCPUs initially and increase the number later if necessary.
  • For performance-critical Exchange virtual machines (production systems), the total number of vCPUs assigned to all the virtual machines should be equal to or less than the total number of cores on the ESXi host machine.

While larger virtual machines are possible in vSphere, VMware recommends reducing the number of virtual CPUs if monitoring of the actual workload shows that the Exchange application is not benefitting from the increased virtual CPUs. For more background information, see the “ESXi CPU Considerations” section in the white paper Performance Best Practices for VMware vSphere 5.
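The third bullet above can be checked mechanically before a single VM is powered on. Here is a minimal sketch of that rule of thumb; the VM line-up in the example mirrors the blade scenario described earlier and is otherwise hypothetical.

```python
def vcpu_allocation_ok(vcpus_per_vm: list, host_cores: int) -> bool:
    """VMware's guidance for performance-critical Exchange VMs:
    the total vCPUs assigned to all VMs on a host should be equal
    to or less than the host's physical core count."""
    return sum(vcpus_per_vm) <= host_cores

# The scenario above: 8 vCPUs for Exchange plus 8 more across two other
# applications on an 8-core blade is 2:1 overcommitted and fails the check.
print(vcpu_allocation_ok([8, 4, 4], 8))  # False
print(vcpu_allocation_ok([4, 2, 2], 8))  # True
```

Had this check been applied to the eight-core blade in the story, the 16-vCPU total would have failed it immediately.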

Before “consequently”, the guide briefly introduces VMware’s Virtual Symmetric Multi-Processing model, and details a wait state known as “ready time”. Ready time is the metric that reveals why your Exchange workloads in VMware are not benefiting from more processors – assuming “ready time” is consistently high (more than 5%).

The consequence of throwing more vCPUs at a guest than it requires is that the guest spends more time in “ready time” than necessary, as the hypervisor waits for ALL the underlying cores it believes are available to the guest to become available to execute instructions. In other words, the guest OS is ready to process instructions on the processor, but the hypervisor forces the guest to wait until all the physical cores are available. This state becomes much worse as the ratio of vCPUs to physical CPUs increases.
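In vSphere’s performance charts, ready time is reported per vCPU as a summation in milliseconds accumulated over one sampling interval, so turning it into the percentage the 5% threshold refers to takes one line of arithmetic. The sketch below assumes the real-time chart’s default 20-second interval; the sample value is illustrative.

```python
def ready_percent(summation_ms: float, interval_s: float = 20.0) -> float:
    """Convert a vSphere CPU 'ready' summation (milliseconds accumulated
    over one sampling interval) into a percentage of that interval.
    20 s is the default interval for the real-time chart."""
    return summation_ms / (interval_s * 1000.0) * 100.0

# 2,000 ms of ready time inside a 20 s sample is 10% -- well past the
# ~5% level the text treats as consistently high.
print(ready_percent(2000))  # 10.0
```

A guest sitting at that level is spending a tenth of its time ready to run but waiting on the scheduler, which is exactly the symptom the oversized Exchange VM in the story exhibits.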

In several chapters we make reference to the Windows Server Virtualization Validation Program (SVVP) and guide you to make sure that your chosen virtualization platform is listed and supported.

VMware is listed as supported for multiple versions of Exchange and Windows operating systems; however, your server performance is still bad. Does that mean it’s a bad hypervisor?

Well, no.

The point is that VMware is not a bad hypervisor, but failing to understand how VMware allocates CPU resources, and not following VMware’s guidance, will result in poor performance for your Exchange servers.

Had you followed the guidance, you would have started with fewer vCPUs (you needed four) instead of more and, following VMware’s recommendation to reduce the count where the workload isn’t benefiting, you would have ended up allocating one vCPU per physical CPU – leading to gratifyingly low ready times.

This is another stark reminder of the fact that we need to read relevant documentation as part of our planning process. All relevant documentation, not just that of the new software we are planning to use.

Nicolas Blank has more than 15 years of experience with various versions of Exchange, and is the founder of and Messaging Architect at NBConsult. A recipient of the MVP award for Exchange since 2007, Nicolas is a Microsoft Certified Master in Exchange and presents regularly at conferences in the U.S., Europe, and Africa.

Nicolas will be running a two-day ‘Mimecast Exchange’ training event on the 31st of October and the 1st of November at Microsoft’s Cardinal Place in London. For your opportunity to win a place, please read this blog post about the event



Friday marked the 14th annual Systems Administrators Day.

Rather than writing another blog post to mark the day, we decided we’d like to do something that really gives back to the people who make our lives a little better every day.

We’re doing this by hosting a free two-day Microsoft Exchange 2013 training session for a group of our UK-based customers’ Systems Administrators. As with any event, seats are limited, but we think this is such a cool training session that we’ve made ten seats available to be won from today! (If you’re a Mimecast UK customer wanting to know more, please contact your dedicated Customer Account Manager or email the team at

As you’d expect, we are big supporters of the Microsoft Exchange ecosystem, which is why we’ve decided to invite three gentlemen whose credentials read like a who’s who of Microsoft Exchange experience to host the event.

In fact, these three gents have just released an insightful new book titled “Microsoft Exchange 2013: Design, Deploy and Deliver an Enterprise Messaging Solution”.

They are Nathan Winters, Nicolas Blank and Neil Johnson.

They'll be running a two day training event we’re calling the ‘Mimecast Exchange’ with a view to providing a great foundation for SysAdmins looking to migrate to Exchange 2013 from a previous version.

The first day will focus mainly on theory, understanding what you’re moving to, what the benefits are, the changes in architecture and what the migration processes will entail. It will also cover the planning and design elements of a migration.

The second day will be a much more practical day with Nathan, Nic and Neil working through specific scenarios to maximize the relevance of the knowledge for you.

Each attendee will be given a signed copy of their book to make notes in, so you can not only keep track of what is being discussed but also keep a much richer set of context for the notes you’re taking.

So, a free training session delivered by three of the world’s top Exchange experts with a free copy of a great book relevant to any SysAdmin working with Exchange! You’re probably thinking ‘That sounds great, how do I get my seat?’

Basically, we thought it’d be cool to offer some of the seats via a competition. There are a few ways to get into the running for a seat:

    • Twitter: Simply follow @Mimecast and tweet about entering the competition using the hashtag #MimecastExchange
    • LinkedIn Group: Simply request to join the group “Microsoft Exchange 2013: Design, Deploy and Deliver” and post a discussion in the group telling us a bit about the versions of Exchange you have and something about your architecture. Please note: Discussions with the most detail will be selected by the three authors to become the focus scenarios for the migration workshops on the second day of the training session. Please remember to use your discretion about potentially sensitive information.
    • LinkedIn Follow: Simply follow the Mimecast Company Page and share a post that has the term “Mimecast Exchange” in it.
    • Blogging: Publish a blog post about your environment and why you would like to attend Mimecast’s training session. Please Tweet the blog post using the #MimecastExchange hashtag so we can register the entry. If you aren’t on Twitter, then please share the link via LinkedIn and mention @Mimecast so we can register the entry. We are also looking for scenarios to choose from in blog posts, so let us know about your posts! Remember, the same caution about sensitive information applies here too!

Each entry type counts as one entry, meaning that someone who really wants to attend can have as many as four entries in total.

Please note, full terms and conditions are published here

The event is being held on 31st of October and the 1st of November at Microsoft’s Cardinal Place in London and will start at 9:00 AM both days and finish around 5:00 PM. Registration will open at 8:00 AM.

To complement the event, we hope the LinkedIn Group we’ve created will prove a valuable ongoing resource for you. All three authors are moderators of the group, so you’ll have an opportunity to share and learn from them and other participants whether you attend the event or not. As this is a SysAdmin-specific group, we intend to keep it an active and thriving resource for people to find quick and relevant answers to questions about Exchange 2013 and the work required to migrate to it, and we won’t allow the group to be used for promotional or marketing purposes.

So all in all you can see we really are building something to give value back to the community to mark the 14th annual Systems Administrators Day. With your help, next year’s celebrations will mark a real step change in the way our community is working together through events like Mimecast Exchange and forums like the new LinkedIn Group.

I look forward to meeting many of you in October!