Nicolas Blank

In Chapter 1 of “Microsoft Exchange 2013: Design, Deploy and Deliver an Enterprise Messaging Solution”, we talk about constraints that may be forced upon us when designing Exchange. One of these constraints may be that we must use either existing hardware or the incumbent virtualization solution. Existing hardware can be a bear of an issue: if the sizing doesn't fit the hardware, then you don’t really have an Exchange deployment project anymore.

However, virtualization carries with it the promise of overcommitting memory, disk, and CPU resources, features that most customers taking advantage of virtualization technologies deploy. Note that overcommitting anything in your virtualization platform when deploying Exchange is not only a bad idea, it’s an outage waiting to happen.

Virtualization is not free when it comes to the conversion of physical hardware to emulated virtual hardware. The figures vary between vendors, but you may be looking at a net performance loss in the range of 5-12 percent across the entire guest. Coming back to constraints, let us assume your customer, or your company, requires you to virtualize and use VMware as the chosen hypervisor.

Once you've taken into account that you’re virtualizing, you then need to size your guest as if you’re sizing the real-world equivalent of the server. Let’s assume that, for argument's sake, you end up requiring four cores per server, but you allocated eight, since more cores never hurt anyone, right?

You read the prevailing guidance carefully, so you decide to use an existing blade with eight cores and allocate another eight cores to Exchange, bearing in mind that you've already allocated eight cores to two other applications on the same blade. No fuss, you may think: the guidance states that you should allocate no more than two virtual cores per physical core. Since you’re a conscientious SysAdmin, you've benchmarked CPU usage on the VMware host and decided that the values are acceptable.
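If your hosts are managed with VMware PowerCLI, that ratio is easy to sanity-check for yourself. The following is a minimal sketch rather than official guidance; the vCenter and host names are placeholders, and it simply totals the vCPUs of the powered-on guests on a host and divides by the host's physical core count.

# Connect to vCenter first (placeholder server name)
Connect-VIServer -Server vcenter.contoso.local

# Placeholder host name used for illustration
$vmHost = Get-VMHost -Name "esx01.contoso.local"

# Total the vCPUs of all powered-on guests on that host
$vCpus = (Get-VM -Location $vmHost |
          Where-Object { $_.PowerState -eq "PoweredOn" } |
          Measure-Object -Property NumCpu -Sum).Sum

# Compare against the host's physical core count
$ratio = $vCpus / $vmHost.NumCpu
"{0} vCPUs on {1} physical cores = {2:N2}:1" -f $vCpus, $vmHost.NumCpu, $ratio

In the scenario above, the blade would report sixteen vCPUs on eight physical cores, a 2:1 ratio.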

Now it turns out that, for some reason, Exchange seems to run non-optimally. You then decide to move Exchange to another blade with more CPUs and double the core count within the guest from eight to 16, since more CPUs never hurt anyone, right?

It turns out that the expectation of a linear increase in performance is not fulfilled… so where do you turn next?

A good place to start may have been the vendor's specific guidance pertaining to Exchange. In this case, VMware supplies an Exchange 2010 best practices guide, which states (emphasis added):

Consequently, VMware recommends the following practices:

  • Only allocate multiple vCPUs to a virtual machine if the anticipated Exchange workload can truly take advantage of all the vCPUs.
  • If the exact workload is not known, size the virtual machine with a smaller number of vCPUs initially and increase the number later if necessary.
  • For performance-critical Exchange virtual machines (production systems), the total number of vCPUs assigned to all the virtual machines should be equal to or less than the total number of cores on the ESXi host machine.

While larger virtual machines are possible in vSphere, VMware recommends reducing the number of virtual CPUs if monitoring of the actual workload shows that the Exchange application is not benefitting from the increased virtual CPUs. For more background information, see the “ESXi CPU Considerations” section in the white paper Performance Best Practices for VMware vSphere 5.

Just before “Consequently”, the guide briefly introduces VMware's Virtual Symmetric Multi-Processing model and details a wait state known as “ready time”. Ready time is the metric that reveals why your Exchange workloads in VMware are not benefiting from more processors, assuming that ready time is consistently high (more than 5%).

The consequence of throwing more vCPUs at a guest than it requires is that the guest spends more time in ready time than it should, as the hypervisor waits for all of the underlying cores it believes are available to the guest to become available to execute instructions. In other words, the guest OS is ready to process instructions on the processor, but the hypervisor forces the guest to wait until all of those physical cores are free. This state becomes much worse as the ratio of vCPUs to physical CPUs increases.
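Ready time is something you can measure rather than guess at. In esxtop the %RDY counter exposes it per VM, and from PowerCLI you can pull the cpu.ready.summation statistic and convert it to a percentage. The snippet below is a rough sketch, not definitive guidance: the VM name is a placeholder, and it assumes real-time sampling, where the value is reported in milliseconds per sampling interval.

# Placeholder VM name used for illustration
$vm = Get-VM -Name "EXCH01"

# cpu.ready.summation is reported in milliseconds per sampling interval
$stats = Get-Stat -Entity $vm -Stat "cpu.ready.summation" -Realtime -MaxSamples 30

# Convert to a percentage: ready ms / (interval seconds * 1000) * 100
$stats | ForEach-Object {
    [PSCustomObject]@{
        Time     = $_.Timestamp
        Instance = $_.Instance
        ReadyPct = [math]::Round(($_.Value / ($_.IntervalSecs * 1000)) * 100, 2)
    }
}

Values that sit consistently above the 5% threshold mentioned above are a strong hint that the guest has more vCPUs than the host can comfortably co-schedule.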

In several chapters we make reference to the Windows Server Virtualization Validation Program (SVVP) and guide you to make sure that your chosen virtualization platform is listed and supported.

VMware is listed as supported for multiple versions of Exchange and Windows operating systems, yet your server performance is still poor. Does that mean it’s a bad hypervisor?

Well, no.

The point is that VMware is not a bad hypervisor, but not understanding how VMware allocates CPU resources, and not following VMware's guidance, will result in poor performance for your Exchange servers.

Had you followed the guidance, you would have started with fewer vCPUs (you needed four) instead of more, and, reducing the number of cores as VMware recommends, you would have ended up allocating one vCPU per physical core, leading to gratifyingly low ready time.

This is another stark reminder of the fact that we need to read relevant documentation as part of our planning process. All relevant documentation, not just that of the new software we are planning to use.

Nicolas Blank has more than 15 years of experience with various versions of Exchange, and is the founder of and Messaging Architect at NBConsult. A recipient of the MVP award for Exchange since 2007, Nicolas is a Microsoft Certified Master in Exchange and presents regularly at conferences in the U.S., Europe, and Africa.

Nicolas will be running a two-day Mimecast Exchange training event on the 31st of October and the 1st of November at Microsoft’s Cardinal Place in London. For your opportunity to win a place at the event, please read this blog post about the event.



You may be confused by such a brash statement heading up this blog post, but like all things brash, a little context goes a long way.

Recently I had a customer engagement that involved a broken OWA installation on an Exchange 2013 RTM server. Nothing too unusual there: OWA depends on a number of components and can, on occasion, break. There can be many factors influencing why this particular OWA installation broke; however, before we delve too deeply into the innards of Exchange 2013 and how and why OWA may break, let us consider one more thing which I haven’t mentioned yet: Exchange 2013 in this instance was installed on a domain controller.

Those of you in the know may sigh deeply, remembering the lost hours reading logs, chasing events, and recycling application pools and IIS while simultaneously hopping on one leg. Troubleshooting IIS-based applications on domain controllers is difficult. Fact. Troubleshooting Exchange installed on a domain controller compounds that difficulty.

In Chapter 13 of “Microsoft Exchange 2013: Design, Deploy and Deliver an Enterprise Messaging Solution” we cover a range of issues pertaining to Exchange, including preparing Active Directory. Active Directory is a very necessary and valuable repository of configuration information, while simultaneously providing services such as authentication and access. One of the things I’d like to point out in this post is that while not all domain controllers are evil, the ones with Exchange installed on them decidedly are. Domain controllers should serve one function only: to be domain controllers.

But don’t just take our word for it; let’s review a Microsoft statement on the topic, titled “Installing Exchange on a domain controller is not recommended”. The article under this heading makes the following points:

If you install Exchange 2013 on a domain controller, be aware of the following issues:

  • Configuring Exchange 2013 for Active Directory split permissions isn’t supported.
  • The Exchange Trusted Subsystem universal security group (USG) is added to the Domain Admins group when Exchange is installed on a domain controller. When this occurs, all Exchange servers in the domain are granted domain administrator rights in that domain.
  • Exchange Server and Active Directory are both resource-intensive applications. There are performance implications to be considered when both are running on the same computer.
  • You must make sure that the domain controller Exchange 2013 is installed on is a global catalog server.
  • Exchange services may not start correctly when the domain controller is also a global catalog server.
  • System shutdown will take considerably longer if Exchange services aren’t stopped before shutting down or restarting the server.
  • Demoting a domain controller to a member server isn’t supported.
  • Running Exchange 2013 on a clustered node that is also an Active Directory domain controller isn’t supported.

We recommend that you install Exchange 2013 on a member server.

Let’s recap: Active Directory domain controllers by themselves are decidedly not evil; however, the domain controllers that have Exchange installed on them, with bits of Exchange that no longer work, decidedly are. Exchange and Active Directory may compete for resources on the same machine, with unpredictable results. Troubleshooting Exchange installed on a domain controller is significantly more difficult than troubleshooting Exchange installed on a member server. Lastly, if you install Exchange on a domain controller, you cannot demote the domain controller without first uninstalling Exchange.
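A simple pre-flight check before running Exchange setup can keep you out of this situation altogether. The snippet below is an illustrative sketch, not an official setup check: it reads the DomainRole value from WMI, where 4 and 5 indicate a backup or primary domain controller.

# DomainRole: 0/1 = workstation, 2/3 = standalone/member server,
# 4 = backup domain controller, 5 = primary domain controller
$role = (Get-CimInstance -ClassName Win32_ComputerSystem).DomainRole

if ($role -ge 4) {
    Write-Warning "This server is a domain controller. Install Exchange on a member server instead."
}
else {
    Write-Output "Member server detected; safe to proceed with Exchange setup."
}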

For those of you considering installing Exchange 2013 on a domain controller, beware. Once Exchange and Active Directory are combined on the same machine, domain controllers may become evil.



The Exchange 2010 SP2 Hybrid Configuration Wizard simplified the Office 365 configuration steps massively; however, it may not work behind a proxy server. The proxy server settings in Internet Explorer are often used by programs attempting to find a route to the internet, but this does not guarantee internet access for all installed software. Exchange 2010 very often assumes a "direct connection", which does not imply that Exchange is connected directly to the internet, but that it is able to connect without hindrance. Modern firewall software and proxy servers can very often accommodate this scenario; this article deals with the scenario where that is not possible at all and the assumed connection fails outright.

The Hybrid Configuration Wizard runs in the same context as the system and thereby assumes that it can connect directly to the internet, blatantly ignoring all Internet Explorer proxy settings. The wizard does a number of things when it first starts up. First, it generates a new self-signed certificate for the federation trust; no internet connectivity is required there. All subsequent steps, including the creation of the federation trust to the Microsoft Federation Gateway, fail immediately.

Starting with Windows Server 2008, a number of network configuration settings are best resolved using NETSH, and the same applies here. NETSH is a command-line utility able to modify a server or workstation network configuration without requiring a GUI, and to set or query configuration items which cannot be set in the Windows GUI. NETSH may be run interactively or as a one-off command. From a command prompt, type NETSH and hit Enter; the following commands, run in sequence, show NETSH running interactively and display the system's proxy settings:

netsh
winhttp show proxy

The same may be achieved using a one-liner from the command prompt:

netsh winhttp show proxy

Setting the proxy server is just as simple as querying it, substituting your own proxy server name and port:

netsh winhttp set proxy proxyservername:port "<local>"

Adding the <local> parameter at the end bypasses the proxy for local addresses. If <local> is omitted, all calls are routed via the proxy, including local PowerShell. The Internet Explorer proxy settings may be imported as follows:

netsh winhttp import proxy source=ie

Lastly, proxy settings may be cleared using:

netsh winhttp reset proxy

Assuming the proxy settings are correct, the Hybrid Configuration Wizard may be run again and connects to the internet successfully. I hope that this little piece of knowledge helps save you some time in troubleshooting your environment.
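As a final sanity check once the proxy is set and the wizard has been re-run, you can confirm from the Exchange Management Shell that the federation trust to the Microsoft Federation Gateway is healthy. The commands below are a sketch, with a placeholder mailbox address:

# Confirm the WinHTTP proxy that system-context components will use
netsh winhttp show proxy

# Exercise the federation trust end to end (placeholder mailbox address)
Test-FederationTrust -UserIdentity user@contoso.com -Verbose

Test-FederationTrust retrieves and validates tokens from the Microsoft Federation Gateway, so a lingering connectivity problem shows up quickly in its output.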


Bring Your Own Device (BYOD) is the current trend of literally bringing your own devices to work. This may include a smartphone, tablet or laptop.

Often the mere thought of BYOD can make an enterprise security officer nervous. How nervous? Data breach kind of nervous. Before we join the chorus of security officers and auditors crying out for the ubiquitous deployment of forced mobile management, conditional network access, and more, let’s have a closer look at BYOD and Exchange.

Mobile device access to Exchange is not new. Exchange mobile protocols are designed to be secure out of the box, yet many of us have lived through the frustration of educating a customer about the self-signed certificates used to bootstrap an Exchange deployment. In fact, Exchange 2007 is known for being the version of Exchange that caused vast slews of IT pros to learn about various types of certificates, and the order of the names appearing on them. Case in point: Exchange mobile protocols are secured by design.

Moving on from the protocol stack, let's consider the physical access method. We’re not going to spend a lot of time on this point, except to note that BYOD devices tend to use wireless access methods of varying degrees of security. If this layer of physical security is breached, the attacker is still required to break the encrypted protocol tunnel between the device and Exchange. This is no different from monitoring traffic on a physical Ethernet switch: the result is still encrypted garbage.

Our next point of examination is storage. If the BYOD device is a laptop, the data store tends to be the offline cache file created by Outlook, i.e. the OST file. This file is encrypted and useless without the user authenticating onto the device using the correct mail profile. Other devices, including tablets and mobile phones implementing the ActiveSync protocol, implement similar storage mechanisms, secured by the user authenticating to the device (for example via a PIN lock) and then to the email account in question.

Exchange 2010 features a number of remote management tools, including the ability to wipe devices remotely; however, remote wipe is just the tip of the management iceberg.
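As a concrete illustration (the mailbox and device type below are placeholders), a remote wipe in Exchange 2010 can be issued from the Exchange Management Shell in two steps: list a user's ActiveSync partnerships, then clear the device you want wiped.

# List the ActiveSync devices partnered with a mailbox
Get-ActiveSyncDeviceStatistics -Mailbox "jane.doe" |
    Select-Object DeviceFriendlyName, DeviceType, LastSuccessSync, Identity

# Issue a remote wipe; the device is wiped the next time it synchronises
Get-ActiveSyncDeviceStatistics -Mailbox "jane.doe" |
    Where-Object { $_.DeviceType -eq "iPhone" } |
    ForEach-Object { Clear-ActiveSyncDevice -Identity $_.Identity -Confirm }

The explicit -Confirm switch forces a prompt before the wipe is queued, a sensible safety net given what the cmdlet does.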

ActiveSync mailbox policies and the built-in management features allow an organization to structure mobile security granularly, so that different users receive different security policies.
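As a rough illustration of that granularity (the policy name and thresholds below are invented for the example), a stricter policy for personal devices can be created and assigned per mailbox from the Exchange 2010 Management Shell:

# Create a policy that enforces a PIN, a minimum length and an inactivity lock
New-ActiveSyncMailboxPolicy -Name "BYOD-Strict" `
    -DevicePasswordEnabled $true `
    -MinDevicePasswordLength 6 `
    -MaxInactivityTimeDeviceLock "00:05:00" `
    -AllowNonProvisionableDevices $false

# Assign the policy to an individual user (placeholder identity)
Set-CASMailbox -Identity "jane.doe" -ActiveSyncMailboxPolicy "BYOD-Strict"

Setting AllowNonProvisionableDevices to $false means that devices unable to enforce the policy are refused, which is usually the behaviour you want for unmanaged personal devices.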

Mobile device management tools augment the security we’ve discussed so far by adding a layer of auditability, remote management, tracking, and wiping, amongst other features, which can help mitigate the risk of data loss if the device is lost or stolen and the user's passwords (device and Exchange) are known.

I’d like to argue that BYOD is often no less secure than the average corporate laptop, due to the security features built into Exchange and the devices themselves. Exchange is designed to be implemented securely and includes mobile management features in the platform. While those features may not be enough to satisfy every compliance or security requirement under the sun, they are a massive part of why BYOD security fears may be overrated.