What to get ready in your vSphere Cluster design – PART 1

The hardware abstraction tsunami we have witnessed over the last 5 years is surely far from subsiding. The growth of infrastructures has far surpassed expectations and the best is yet to come; yes, you might be aware of it… you know… the cloud thing…

It is in the plans, believe me on that one, but it heavily relies on what was done in the past, and ensuring the infrastructure is rock solid should not be an optional exercise. In my last post (https://florenttastet.wordpress.com/2014/06/29/choosing-hardware-for-use-as-an-esxi-host/) we talked about host design and a few things we should be considering.

Today we’ll review cluster design and a few tricks around it; hopefully it will serve as a good review of what has been done. It all comes from personal experience, along with input from vExperts and a few folks living the dream day after day.

This is a series of 3 blog posts of 1,000 words or less each. Your feedback is always welcome, as it serves our common knowledge and enhances our common expertise.

PART 1 of this post is focused on Resource Pools and their hierarchy, DRS and EVC.

PART 2 will cover SRM over vMSC, AutoDeploy, Restart Priority and Management Network.

PART 3 will review Permanent Device Loss Prevention, HA HeartBeat, DPM & WoL and vMotion bandwidth. 

Resource Pools and Resource Reservations

In plain words, always use resource pools to define reservations.

What are resource pools?

Resource pools allow you to delegate control over resources of a host (or a cluster), but the benefits are evident when you use resource pools to compartmentalize all resources in a cluster. 

All VM workloads should not be treated equally; a file server does not require the same level of attention from the hypervisor as a Tier 1 app such as SQL or Exchange, for example. When you create a resource pool you define the level of service expected, and you better manage resource availability both across the cluster and for a specific set of VMs.

In a small or large environment, it is a best practice that should be part of any design. As business growth and bursts are difficult to forecast, putting this tactical design in place will pay off in the long run.
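To see why High/Medium/Low pools map well to service tiers, here is a minimal sketch of how proportional shares behave under contention. The pool names, share values and capacity figures are hypothetical; the 4:2:1 High/Normal/Low ratio mirrors vSphere's default share settings.

```python
# Illustrative only: contended capacity is divided among sibling pools
# in proportion to their shares (vSphere defaults follow a 4:2:1 ratio).

def allocate_under_contention(total_mhz, pools):
    """Split contended CPU capacity across pools in proportion to shares."""
    total_shares = sum(pools.values())
    return {name: total_mhz * shares / total_shares
            for name, shares in pools.items()}

# Hypothetical cluster with 14 GHz of contended CPU and three tiers:
pools = {"High": 8000, "Normal": 4000, "Low": 2000}
alloc = allocate_under_contention(14000, pools)
# "High" ends up with twice the capacity of "Normal" and four times "Low"
```

The takeaway: shares only matter during contention, and they are relative to siblings, which is exactly why the pool layout (covered below) matters as much as the share values themselves.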

DRS rules to avoid single points of failure

What is DRS?

I personally love DRS. Distributed Resource Scheduler (DRS) is a feature included in the vSphere Enterprise and Enterprise Plus editions. DRS improves service levels by guaranteeing appropriate resources to virtual machines, lets you deploy new capacity to a cluster without service disruption, and automatically migrates virtual machines during maintenance without service disruption.

Should you ever have a front end and a back end on the same host? I say no! When deploying business-critical, highly available applications, create DRS anti-affinity rules so that the two highly available virtual machines do not run on the same host.

DRS itself is an extraordinary benefit to day-to-day infrastructure management, but leveraging all the features within it drives a priceless peace of mind. Knowing that, in the event of a failure within a cluster, a back end is not on the same host as a front end minimizes downtime and protects your SLAs. To be strongly considered, in my opinion.
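What an anti-affinity rule enforces is simple to state: the VMs named in the rule must never share a host. A minimal sketch of that check, with hypothetical VM and host names:

```python
# Illustrative sketch of a VM-VM anti-affinity constraint: the rule is
# violated whenever two or more of its VMs land on the same host.

def violates_anti_affinity(placement, rule_vms):
    """placement maps VM name -> host name; rule_vms lists the rule's VMs."""
    hosts = [placement[vm] for vm in rule_vms]
    return len(hosts) != len(set(hosts))  # duplicate host => violation

# Front end and back end squeezed onto one host: the rule fires.
bad = {"web-frontend": "esxi-01", "sql-backend": "esxi-01"}
good = {"web-frontend": "esxi-01", "sql-backend": "esxi-02"}
violates_anti_affinity(bad, ["web-frontend", "sql-backend"])   # True
violates_anti_affinity(good, ["web-frontend", "sql-backend"])  # False
```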

DRS affinity rules.

DRS is definitely a sweet feature. Take the time to look at http://www.vmware.com/files/pdf/drs_performance_best_practices_wp.pdf to get a better idea of the strength of the tool.

In blade environments, when a vSphere cluster spans multiple blade chassis and you are using application clustering, consider using DRS affinity rules. Why? They prevent all nodes of an application cluster from sitting on the same chassis, protecting you from a hardware failure at the chassis level.

Sounds silly, but you need to think about that in blade environments. You surely wouldn’t want all your VMs to be within the same chassis (again, if you are in an environment that spans more than one chassis).

I agree it would require a large farm with hundreds of VMs. But keep it in the back of your head. You may also consider pinning your vCenter to a small number of hosts, especially in large clusters.
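One practical way to apply this is to group hosts by chassis and then back a "run on hosts in group" (or "must not run on") DRS rule with those groups. The sketch below is hypothetical; in practice the host-to-chassis mapping comes from your blade enclosure inventory.

```python
# Illustrative: build per-chassis host groups so DRS VM-Host rules can
# keep clustered application nodes on different blade chassis.
from collections import defaultdict

def host_groups_by_chassis(host_to_chassis):
    """Return {chassis: [hosts]} from a host -> chassis mapping."""
    groups = defaultdict(list)
    for host, chassis in host_to_chassis.items():
        groups[chassis].append(host)
    return dict(groups)

# Hypothetical four-host cluster spanning two chassis:
inventory = {
    "esxi-01": "chassis-A", "esxi-02": "chassis-A",
    "esxi-03": "chassis-B", "esxi-04": "chassis-B",
}
groups = host_groups_by_chassis(inventory)
# Pin node 1 of the app cluster to groups["chassis-A"],
# node 2 to groups["chassis-B"], and a chassis failure takes out only one.
```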

VMs and Resource Pools: hierarchical levels

Once you’ve taken the Resource Pool route (the right route, I should add), you need to remain aligned with this strategy.


I have often seen architectures where a VM sits at the same hierarchical level as a Resource Pool. Please don’t!

It’s not recommended to deploy virtual machines at the same hierarchical level as resource pools. In that scenario, a single virtual machine could receive as many resources as an entire pool of virtual machines in times of contention.

We don’t want that. Resource pools do not consume resources per se; they manage resource assignments. Group VMs in resource pools, and as a default you may want to create three resource pools: High, Medium and Low. As a start it is a good practice: group VMs in pools and make sure none of them sit directly under the hosts in the cluster.
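The sibling-VM problem is easiest to see with numbers. Shares are compared among siblings at the same level, so a lone VM placed next to a resource pool competes against the whole pool, not against the VMs inside it. The figures below are illustrative only:

```python
# Illustrative: a stray VM that is a sibling of a resource pool splits
# contended capacity with the ENTIRE pool, however many VMs it holds.

def sibling_split(total_mhz, siblings):
    """Divide contended capacity among same-level siblings by shares."""
    total = sum(siblings.values())
    return {name: total_mhz * s / total for name, s in siblings.items()}

# A pool holding ten production VMs, and one stray VM left at the same
# level, both carrying identical "Normal" share values:
split = sibling_split(10000, {"pool-prod": 4000, "stray-vm": 4000})
# The stray VM is entitled to as much as the whole ten-VM pool combined.
```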


What is EVC? 

Enhanced vMotion Compatibility (EVC) simplifies vMotion compatibility issues across CPU generations. EVC automatically configures server CPUs with Intel FlexMigration or AMD-V Extended Migration technologies to be compatible with older servers.

From the book, this KB is handy 


Chances are you will not purchase all hosts needed at the same time. You may have acquired a few when you first started, but now comes the time to expand and add a server.

Besides the fact that we need to remain within the same CPU family, functions and features of modern sockets may expose instructions (SSE2 or SSE3 calls, for example) that are unknown to older socket generations.

Enabling EVC by default on a vSphere cluster allows new processor architectures (of the same family, of course) to be added to the cluster seamlessly.

Let’s say it avoids future, unnecessary troubleshooting steps.
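Conceptually, the cluster advertises a baseline feature set and every host's CPU is masked down to it, so vMotion never exposes an instruction an older host lacks. A toy sketch of that idea, where the generation names and feature sets are invented for illustration and are not real EVC baselines:

```python
# Illustrative only: hypothetical CPU generations and feature sets,
# showing why a newer host can join a cluster with an older EVC baseline
# while VMs only ever see the baseline's features.

BASELINES = {
    "gen1": {"sse2"},
    "gen2": {"sse2", "sse3"},
    "gen3": {"sse2", "sse3", "avx"},
}

def can_join(cluster_baseline, host_features):
    """A host may join if its CPU supports every baseline feature."""
    return BASELINES[cluster_baseline] <= host_features

def features_exposed_to_vms(cluster_baseline):
    """VMs see only the baseline, even when running on newer hosts."""
    return BASELINES[cluster_baseline]

can_join("gen1", BASELINES["gen3"])  # newer host fits an older baseline
can_join("gen3", BASELINES["gen1"])  # older host lacks baseline features
```

The trade-off is that newer instructions stay hidden until you raise the baseline, which is exactly the seamless-expansion behavior described above.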


In this PART 1, a few tricks from the field. They are not major items, but they will help from time to time, and they surely set the foundation for a growing business and leave you with a feeling of achievement.

Don’t forget virtualization is here to help; adding a few things here and there will drastically enhance your confidence in your infrastructure and provide a very dynamic environment that can adapt and is ready for bursts or unwanted (but still a reality) failures.

PART 2 will cover SRM over vMSC, AutoDeploy, Restart Priority and Management Network.

PART 3 will review Permanent Device Loss Prevention, HA HeartBeat, DPM & WoL and vMotion bandwidth.


About florenttastet

As an IT professional and leader, my objective is to help an organization grow its IT department with new and innovative technologies, keeping production at its most efficient level; ensuring the right alignment in the deployment of such technologies through precise Professional Services results in an extraordinary experience for the customer. As a team member on multiple projects, I have developed a strict work ethic allowing the development of superior communication skills, as well as the ability to multi-task and meet precise deadlines. As an IT veteran with a broad background in consulting, management, strategy, sales and business development, I have developed a deep expertise in virtualization using VMware and Citrix products, with a strong skillset on storage arrays (HP, EMC, NetApp, Nimble & IBM). I have also developed a security practice through CheckPoint NGX R65-R66 (CCSA obtained) and the Cisco PIX-ASA product line. Specialties: Microsoft infrastructure products; monitoring with HPOV, SCOM and CiscoWorks; firewalls: CheckPoint, PIX and ASA; virtualization with VMware (ESX through vSphere & View/Horizon), Microsoft (Hyper-V Server, VDI and App-V) and Citrix (XenServer, XenDesktop, XenApp); storage (EMC, HP, NetApp, Nimble & IBM); reference architectures and converged datacenters (VSPEX, FlexPod, vBlock, PureFlex & HP Matrix).

3 Responses to What to get ready in your vSphere Cluster design – PART 1

  1. Peter Hunter says:

    “Should you ever have a front end and a back end on the same host? I say no!”

    In most cases I would say yes! Since the loss of either the front or back end will usually result in an application outage, placing them on two different hosts doubles the chance of having a hardware related outage. Additionally, keeping them on the same hosts keeps network traffic between the two machines off of the physical network, reducing latency. I like to use affinity rules to keep these types of systems on the same host.

    Good article though. I look forward to the rest of the series.

    • Thank you peterhunter1975@gmail.com for taking the time to share your feedback.

      You are correct that under certain circumstances it may be better to keep the front end and back end on the same host, since the loss of either one results in loss of application service anyway. We may also want to consider that some back ends are subject to collateral requirements that are not necessarily driven by a single front end. The best example of such a situation would be an alternate business process that does not reach the back end through the front end but connects to it directly, creating a parallel business process. In that situation, you would mitigate the loss of service by separating the back end from the front end.

  2. Pingback: What to get ready in your vSphere Cluster design – PART 2 | A datacenter journey
