Cheap disks require operational maturity

Let's look at the last of these: JBOD, or the use of individual disks to store Exchange databases.

Guest Blogger Nicolas Blank is an Exchange MVP and Microsoft Infrastructure Architect specializing in Exchange, Active Directory, architecture, systems management, migration and scripting. Nicolas spends what spare time he has writing, blogging and talking about Exchange and associated technologies. His blog can be found at: http://itproafrica.com 

In the past we've blogged about the use of SATA disks; one post was even cheekily entitled "Give your SAN to the SQL team". That's a great idea, but how do we turn storage features into operational reality?

There's no denying that Exchange 2010 offers more storage flexibility than any of its predecessors. Exchange can now be deployed on big, fast, and expensive storage (a.k.a. SANs), or on disk shelves comprising RAID SAS, near-line SAS, or even JBOD enterprise SATA disks.

In this configuration, an Exchange sizing calculation determines the number of IOPS required per server based on user activity, divides that figure by the known IOPS capability of the specified disk, and returns the number of disks required. Each disk then holds a unique database, which is mirrored over the network to similar servers with similar disks.
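Here's a minimal sketch of that arithmetic in Python. The per-mailbox and per-disk IOPS figures are placeholder assumptions for illustration only; a real design should come from the Exchange Mailbox Server Role Requirements Calculator.

```python
import math

def disks_required(mailboxes, iops_per_mailbox, disk_iops, utilization=0.8):
    """Rough JBOD sizing: total required IOPS divided by what one disk delivers.

    mailboxes        -- number of users on this server
    iops_per_mailbox -- IOPS per user (assumed; depends on activity profile)
    disk_iops        -- random IOPS one disk can sustain (assumed)
    utilization      -- headroom factor so disks aren't driven at 100%
    """
    required_iops = mailboxes * iops_per_mailbox
    return math.ceil(required_iops / (disk_iops * utilization))

# Hypothetical figures: 4000 users at 0.1 IOPS each on 7.2k SATA disks (~75 IOPS).
print(disks_required(4000, 0.10, 75))  # -> 7 disks, each holding one database
```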

The first hurdle is simply the number of disks. When producing a Highly Available (HA) configuration, each database is mirrored to two or more servers. Medium to large configurations will require many more than 20 disks per server, which means using volume mount points to compensate for the limited supply of drive letters.
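To make the mount-point idea concrete, here's a small sketch that lays out one mounted folder per disk under a single root instead of consuming drive letters. The root path and DBnn naming convention are hypothetical, not a prescribed layout.

```python
from pathlib import Path

# Hypothetical layout: each physical disk is mounted as a folder under one
# root, e.g. C:\ExchangeVolumes\DB01, rather than taking a drive letter.
MOUNT_ROOT = Path(r"C:\ExchangeVolumes")

def mount_point_for(db_number: int) -> Path:
    return MOUNT_ROOT / f"DB{db_number:02d}"

# 24 data disks plus system and DVD drives would exhaust the 26 drive
# letters; mount points remove that ceiling entirely.
for n in range(1, 25):
    print(mount_point_for(n))
```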

The next hurdle is standardizing the server builds so that storage and configuration are identical enough across all servers to create the HA configuration in the first place.

The next consideration is critical. Assume your build processes are standardised and your servers, storage, and HA configurations are identical and clean as a whistle: how do you know when you've had a failure?

Exchange 2010, assuming it's been designed and built to do so, is really good at absorbing failures: disk, database, transport, client access, you name it. The downside: how do you know that something has failed, or that you're down to your last good copy after the other two or three database copies have failed?
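As a sketch of what the "last good copy" check looks like in code, the snippet below counts healthy copies per database and flags anything at one or fewer. The input rows are invented for illustration; in a real deployment this data would come from Get-MailboxDatabaseCopyStatus or your monitoring platform.

```python
from collections import Counter

# Hypothetical copy-status snapshot (database, server, state).
copy_status = [
    ("DB01", "MBX1", "Healthy"),
    ("DB01", "MBX2", "FailedAndSuspended"),
    ("DB01", "MBX3", "Failed"),
    ("DB02", "MBX1", "Healthy"),
    ("DB02", "MBX2", "Healthy"),
]

GOOD = {"Healthy", "Mounted"}  # states we count as a usable copy

healthy = Counter(db for db, _, state in copy_status if state in GOOD)

for db in sorted({db for db, _, _ in copy_status}):
    if healthy[db] <= 1:
        print(f"ALERT: {db} is down to {healthy[db]} good copy/copies")
```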

The answer involves many co-dependent factors: monitoring software, monitoring personnel, and operational procedure. There's no point in deploying SCOM to predict a disk failure, alert on it, and report on the reduced SLA if no one is consuming that data and acting on it appropriately.

For the purposes of this article, then, we can define operational maturity as the presence (or absence) of the technology and processes required to absorb and mitigate a failure within an acceptable time frame (normally the SLA).

Don't think that this pain belongs to JBOD alone. JBOD lessens the storage cost, but it also strips away some of the storage management that the SAN team would otherwise absorb and that the Exchange administrator would be insulated from.

Irrespective of the storage model used, the combination of monitoring, processes, plans, documentation and activities defines the operational maturity of an IT organisation. Due to the criticality of mail, Exchange has massive visibility in the average business, especially during an outage.

While cheap storage is a valid option for Exchange, it may not be a great option for IT shops that are relatively new to the concepts discussed in this post. Purely from a storage point of view, a more traditional RAID storage shelf, even one delivering lower IOPS, may be the better choice.
