Audio Recording #325 – vSphere 6.0 upgrade 

Friends,

What a year! I just can’t believe we’re in June already and still haven’t gotten around to the 6.0 upgrade. I had the opportunity to chat with the folks at VMware about the upgrade path to 6.0.

It was an open conversation, and interesting to see the momentum around that version. As we approach VMworld 2015, my inbox is still full of unanswered questions related to the 6.0 release, and people are still looking at the upgrade path options. What’s the best one? What should we do with the PSC? Small questions, but together they are creating a drastic change in the way we need to look at the infrastructure’s architecture.

Following the talk show in May 2015 (https://florenttastet.wordpress.com/2015/05/17/audio-recording-310-vsphere-6-0-with-martin-yip-bloggers-from-vmware-communities-roundtable/), here is yesterday’s talk show on the vSphere 6.0 upgrade, with Corey from the vExpert community and bloggers, from ‘VMware Communities Roundtable’.
In case the link doesn’t work:

http://www.talkshoe.com/resources/talkshoe/images/swf/lastEpisodePlayer.swf?fileUrl=http://recordings.talkshoe.com/TC-19367/TS-980944.mp3


Audio Recording: ‘#310 – vSphere 6.0 with Martin Yip & Bloggers’ From ‘VMware Communities Roundtable’

Folks,

It has been a busy spring 2015. Between the new vSphere release, training, certification and study, the blog hasn’t been very active.

Today was the opportunity to bring back to the surface the February 2015 podcast with Eric Nielsen (@ericni25): Audio Recording: ‘#310 – vSphere 6.0 with Martin Yip & Bloggers’ from ‘VMware Communities Roundtable’.

I’m working on a few posts to come out, so stay tuned.

Florent Tastet


Hyper-convergence – Part 1

Despite all the effort deployed to strengthen storage arrays, iSCSI connectivity and platforms, the reality is that nothing runs faster than local access, period!

It’s hard to compare local-bus latency with iSCSI latency.

Had we seen the big “X” coming? I believe so. vSAN was not something new, and I like to believe the industry had planned for it; the huge shift, however, was in the market opportunity, where, I think, leaders were tired of battling IOPS.

We’ll be talking about the hyper-converged datacenter today, and I’m hoping to cover array-SAN-platform vs. converged in 500 words… hum…

The Traditional architecture

A few years ago, centralizing data was the perfect way to ensure data was served, and saved, efficiently. The model was solid and efficient. Virtualization came in the early 2000s and created a brand-new need. Speed was the name of the game, and demand kept growing, pushing manufacturers to reinvent themselves to remain top of mind.

We saw a shift from FC to iSCSI, and soon data tiering across SATA, SAS and SSD took place in array architectures. The reason? Very simple: speed. VM servers required dedicated speed, and VDI brought a new challenge where each desktop created bottlenecks at every layer, from the disks to the SAN to the servers to the end-user experience.

It would be hard to describe how many back flips we’ve done to accommodate the requirements, and despite the most efficient architectures, the challenge remained as infrastructures grew. The costs associated with the growth outweighed the benefits.

The Hyper-Converged architecture

While it may sound very simplistic, nothing goes faster than local speed. When the operating system is installed locally, the experience is at the rendez-vous. VMware brought vSAN to life, a way to address low-cost datacenter requirements in the absence of a traditional array. Having servers share their local disks for VM hosting while still being able to vMotion VMs from one host to another remained a strategic advantage for organizations.

However, vSAN at the time had some limitations, which offered new players the opportunity to change the game in the “converged and hybrid IT” battle.

I do believe hyper-converged is a game changer. Yes, at this point it is only meant to address a specific requirement and may have some limitations with big VMs; however, it addresses a need, and that need is well served.

Besides the fact that the secret sauce remains at the software level, the aggregation of local disk capacity and speed comes at a very sweet moment, when organizations are looking to understand the benefits of virtual desktops. The long term shows that organizations will favour remote workers, and providing an efficient, corporate-like environment will require a transparent experience for users.

In the datacenter, the growth of virtual desktops is precisely what a hyper-converged infrastructure addresses. In a tiny datacenter footprint, you get something you would otherwise not be able to find: the equivalent of a full traditional architecture, where all VMs have access to pooled storage and compute technologies.

Conclusion

This is just Part 1 of the blog, where we started to see the differences between traditional and hyper-converged architectures. We all know the traditional architectures very well, so in Part 2 I will cover the hyper-converged architecture and perform a “deep dive”… in 500 words… or so.

Have a good weekend friends.


Are you functional or optimal?

I still remember to this day sitting in a room full of executives and listening to a presentation where the status of a datacenter refresh didn’t make much sense to me. It was in 2009, 5 years ago already, and the project was surely not optimal.

When you are part of a 2-million-dollar project, a question you don’t want to answer is: “is our investment to this date optimal?” But more importantly, you don’t want to hear an answer like: “we are functional”. It felt like a disconnect between the business and the technology.

I’m a huge supporter of proactive intervention and of knowing what’s going on under the hood. I believe that understanding where your performance is shows where you stand and what you need to do to reach the level you expected to reach, or to make sure you’re set up the way you deserve to be set up.

Assessing….

In a virtual environment, measuring performance can be tricky, and you’ll need to ensure you really understand what is going on. It’s not enough to have the right amount of IOPS, or controller buffer, or disks, or connectivity. Each block requires its own attention, and this complex architecture needs to be optimal. Functional is not an option.

If you reach the point where you need to assess a project from a technical standpoint and understand where you really are, you’ll want to rely on tools like I/O Analyzer.

VMware I/O Analyzer, from the vmware labs, is an integrated framework designed to measure storage performance in a virtual environment and to help diagnose storage performance concerns. I/O Analyzer, an easy-to-deploy virtual appliance, automates storage performance analysis. I/O Analyzer can use Iometer to generate synthetic I/O loads or a trace replay tool to deploy real application workloads. 

It is by far the best tool I have ever used, due to its simplicity and efficiency at providing you with the right information that can be used to improve an overall design. But more importantly, it is a tool provided by the manufacturer and leader of the Software-Defined Datacenter, and not something we should overlook.

This will get you running and get you into an intelligent conversation where numbers speak and results show precisely where the project stands.

As far as measuring the overall performance of a virtual infrastructure from the compute side of things, many are looking at vSOM. While I trust the tool, if it is integrated into an existing environment it will only report on what is currently running, and it might be difficult to understand what is underperforming. You’ll need a strong level of knowledge in virtual environment operations to adjust the thresholds to where they will speak to you and your lines of business.

An interesting metric from vSOM is the capacity report, or “health check”, that I have always particularly liked. It provides insights on best practices that should be implemented. You will have to read the report and take your own environment into consideration, of course, but essentially it will provide you with best practices, and we all love best practices, as they define the foundation of the environment and bring the groundwork upon which you can consider your environment optimal.

Always refer to the following for compute performance best practices:

http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf

http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-55-monitoring-performance-guide.pdf

http://communities.vmware.com/servlet/JiveServlet/downloadBody/23094-102-2-30667/vsphere5x-perfts-vcops.pdf

Designing

It may be a question of planning, design, architecture or simply calculation. Regardless of the reasons you are where you are, the fundamental basics of storage will have a huge impact on your designs and, if overlooked, will translate into a disconnect between the business and the technology decisions when assessing the project.

Design is a sweet balance between performance and capacity, and it has a large impact in virtual environments. Design choice is usually a question of the following (a small worked example follows the formulas below):

How many IOPS can I achieve with a given number of disks?
• Total Raw IOPS = Disk IOPS * Number of disks
• Functional IOPS = (Raw IOPS * Write%)/(Raid Penalty) + (Raw IOPS * Read %)

How many disks are required to achieve a required IOPS value?
• Disks Required = ((Read IOPS) + (Write IOPS*Raid Penalty))/ Disk IOPS
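
To make the arithmetic concrete, here is a minimal Python sketch of the two formulas above. The numbers used (15K SAS disks at roughly 175 IOPS each, a RAID 5 write penalty of 4, a 70/30 read/write mix) are illustrative assumptions only, not a sizing recommendation.

```python
def functional_iops(disk_iops, disk_count, read_pct, write_pct, raid_penalty):
    """Usable (functional) IOPS of a RAID set, from the raw per-disk IOPS."""
    raw = disk_iops * disk_count
    return (raw * write_pct) / raid_penalty + (raw * read_pct)

def disks_required(read_iops, write_iops, raid_penalty, disk_iops):
    """Number of disks needed to deliver a target front-end IOPS workload."""
    return (read_iops + write_iops * raid_penalty) / disk_iops

# Illustrative assumptions: 8 x 15K SAS disks (~175 IOPS each), RAID 5 (write penalty 4),
# 70% read / 30% write workload.
print(functional_iops(175, 8, read_pct=0.70, write_pct=0.30, raid_penalty=4))        # ~1085 functional IOPS
print(disks_required(read_iops=1400, write_iops=600, raid_penalty=4, disk_iops=175)) # ~21.7 -> 22 disks
```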

You may also want to refer to the following URLs for storage-based best practices:

http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-55-monitoring-performance-guide.pdf

http://communities.vmware.com/servlet/JiveServlet/downloadBody/23094-102-2-30667/vsphere5x-perfts-vcops.pdf

Getting ready for Cloud Computing

I understand the various challenges that exist in adopting a new technology. I mean, we can innovate… and we can be innovative. I believe that cloud computing is essentially a question of culture, and definitely a strategy that will be part of our everyday lives for the next couple of years.

But you’ll not be able to have that conversation if your internal datacenter is not optimal. Functional will typically translate into failure when moving to the cloud. Being optimal will ensure every aspect of your infrastructure is well measured and responds properly to workloads once out of your premises.

Whether you’re looking at Amazon, vCHS, SoftLayer, Azure or any other local cloud provider, they all provide the right environment for your workload to be optimally responsive. The only failure that can happen is your workload itself causing disruption in the cloud ecosystem, translating savings into costs.

In a recent project I witnessed a hybrid cloud approach where the non-mission-critical workloads were hosted outside the main premises, while the mission-critical workloads stayed internal.

To that I say: why not?


Yesterday was siloed. Today is integrated.

I recently received a plain simple question from an acquaintance: “what do I need to create a cloud?”

Behind this simple question lies a constellation of answers and, in fact, far more business-related questions than simply lining up a list of items. Remember that “cloud” equals “service”, and therefore the underlying components should not be a focus area.

When you look at building a “cloud-like” datacenter, two avenues exist: converged and hyper-converged.

You always need to define your unique needs when it comes to the services you want to deliver to an organization; typically they are defined by the objectives expressed at the corporate and lines-of-business levels.

An evolution

It has been an amazing journey in the datacenter. Driven by cost reduction, we have witnessed the evolution from the typical silo approach, where dedicated resources focus on making the storage, network and compute layers shine individually, to a hypervisor-based convergence, where service levels drive the consumer (a.k.a. users, clients, etc.) experience, abstracting the backend infrastructure in the process.

While hypervisor-based convergence led the front-end strategies for many years, the industry quickly realized that the backend required a review to keep up with increased SLAs and highly available applications, and so entered the well-known “reference architectures” and the “converged datacenter”.

Having the backend hardware precisely aligned, making sure that every aspect works perfectly together and is supported by the manufacturers, enabled the front end to benefit from highly reliable foundations and enhanced the services delivered to the consumer community.

Still, the need to improve time to market required an innovative approach from manufacturers. If you haven’t heard about it, it’s called “hyper-converged”.

Converged infrastructure

There are two approaches to building a converged infrastructure: the building-block approach or the reference architecture. The first one, the building-block approach (VCE, HP), involves fully configured systems — including servers, storage, networking and the virtualization layer — that are installed in a large chassis as a single building block.

The infrastructure is expanded by adding additional building blocks.

While one of the main arguments in favour of a converged infrastructure is that it comes pre-configured, this also plays against it.

From an architectural point of view, all aspects are pre-configured and allow a sweet and easy roll-out. If your needs differ from the proposed predefined solution, you’re essentially out of luck. The same applies to the components themselves: each is selected and configured by the manufacturer, and the option to select a different component would either not be supported or simply not be functional.

The last aspect is the patching. The building-block approach forces the deployment of updates to the vendor’s timetable, rather than the user’s.

It is possible to build a converged infrastructure without using the building-block approach. The second approach is using a reference architecture, such as VSPEX or FlexPod, which allows the company to use existing (or new) hardware to build the equivalent of a pre-configured converged system (you still need to remain aligned with the reference architectures provided by the manufacturers concerned).

Hyper-Converged infrastructure

In a non-converged architecture, physical servers run a hypervisor, which manages virtual machines (VMs) created on that server. The data storage for those physical and virtual machines is provided by direct attached storage (DAS), network attached storage (NAS) or a storage area network (SAN).

In a converged architecture, the storage is attached “directly” to the physical servers. A precise capacity alignment between flash (SSD), SAS and NL-SAS or SATA is leveraged to service the virtual machines to the best of their needs.

On the flip side, the hyper-converged infrastructure has the storage controller function running as a service on each node in the cluster to improve scalability and resilience. Even VMware is getting into it; it was interesting to see this giant trying to catch up in a market it had not created. The company’s new reference architecture, called EVO, is a hyper-converged offering designed to compete with companies such as Nutanix, SimpliVity or NIMBOXX.

The two systems by VMware, EVO:RAIL and EVO:RACK, were announced at VMworld 2014 in August. An interesting alignment, as previously VMware was active only in the converged infrastructure market through the VCE partnership.

If we look at Nutanix, the storage logic controller, which normally is part of SAN hardware, becomes a software service attached to each VM at the hypervisor level. The software defined storage takes all of the local storage across the cluster and configures it as a single storage pool.

Data that needs to be kept local for the fastest response could be stored locally, while data that is used less frequently can be stored on one of the servers that might have spare capacity.

Conclusion

As we move to a highly automated datacenter, or “software-defined datacenter”, datacenter solutions are less and less complex to manage, which pleases buyers and decision makers.

If you’re in a tech refresh, it might be time to evaluate your options and ensure the next phase of your strategy relies on a best-of-breed approach that will sustain you for the next 3-5 years.

Now, converged or hyper-converged? The decision has never been tougher to make. Should you stay with the traditional approach? Maybe, unless your operational expenditures are shrinking.

The bottom line is to evaluate alternatives that allow ease of management, because at the end of the day we surely don’t want valuable resources and talented people hands-on with hardware while the business is urging a focus on revenue growth and short time to market.


What to get ready in your vSphere Cluster Design – Part 3

VMworld was surely filled with lots of announcements. The initial feedback is good, and I believe the best is yet to come.

I am still looking at some of the announcements, especially on the storage side of things and EVO:RAIL, which seems pretty compelling for what has lately been called “hyper-converged infrastructures”.

I’ll write a blog on the topic once I complete all the reviews. It will surely allow us to exchange a little on the new technology enhancements in the datacenter operating system, a.k.a. virtualization.

For the moment, I’ll complete the series of three blogs started a few weeks ago, entitled “What to get ready in your vSphere cluster design – Part 1” and “What to get ready in your vSphere cluster design – Part 2”.

On the menu: Permanent Device Loss Prevention, HA HeartBeat, DPM & WoL and vMotion bandwidth.

As we build stronger and more reliable vSphere clusters, many aspects need to be deeply understood. And when I say deeply understood, I don’t mean only knowing what a feature does and how it does it, but knowing how it will sustain the objectives that were set, because this is really why acquisitions are made in the first place, right? Right!

Permanent Device Loss Prevention

vSphere 5 introduced Permanent Device Loss (PDL), which improved on how the loss of individual storage devices was handled under All Paths Down (APD) by providing a much more granular understanding of the condition and a reaction that best fits the actual condition experienced. Wow, that was a mouthful… LOL

When a storage device becomes permanently unavailable to your hosts, a PDL condition is triggered through a SCSI “target unavailable” response; it can occur when a device is unintentionally removed, when its unique ID changes, or when the device experiences an unrecoverable hardware error.

When deploying a cluster environment, make sure to enable permanent device loss detection on all hosts in the cluster to ensure a VM is killed during a PDL condition. You may also want to enable the HA advanced setting “das.maskCleanShutdownEnabled” to make sure a killed VM will be restarted on another host after a PDL condition has occurred. Think about transactional VMs, for example…
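
If you script your cluster configuration, the sketch below shows how that HA advanced option could be pushed to a cluster with pyVmomi. This is only a minimal sketch under stated assumptions: the vCenter address, credentials and the cluster name “Prod-Cluster” are placeholders, and you should validate the behaviour against your own environment and vSphere version before relying on it.

```python
# Minimal sketch (assumptions: pyVmomi installed; vCenter details and cluster name are placeholders).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the target cluster by name (hypothetical name).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Prod-Cluster")
view.DestroyView()

# Push the HA advanced option discussed above so VMs killed on a PDL condition
# can be restarted on another host.
spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        option=[vim.option.OptionValue(key="das.maskCleanShutdownEnabled", value="True")]
    )
)
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

Disconnect(si)
```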

HA HeartBeat … in Metro-Clusters

One of the fundamentals of a cluster is the heartbeat, as it keeps the state of the nodes updated. The heartbeat is enabled, for a new host joining an existing cluster, by uploading an agent onto the host; all agents on all hosts then communicate with each other every second. After 15 seconds of missed heartbeats, the host is considered down.

Locally it’s easy to manage; for dispersed architectures it’s a little more challenging (from a VMware standpoint of course, i.e. vPLEX…). Latency plays against some of the best practices, and therefore datastore heartbeating is often leveraged to alleviate the challenging networking conditions.

For metro-clusters (geographically dispersed clusters), I have always made sure the number of vSphere HA heartbeat datastores is set to a minimum of four, as some of the most influential bloggers have evangelized over and over. Manually select site-local datastores, two for each site, to maintain heartbeating even when the sites are isolated.

DPM & WoL (Wake-on-Lan)

We all know DPM (Distributed Power Management, http://www.vmware.com/files/pdf/Distributed-Power-Management-vSphere.pdf), yet few are using it, primarily because in Canada the majority of today’s datacenters (and I am not talking about cloud providers here) are not yet concerned by the latest legislation around power consumption (but that will change).

A few conditions are required though. First, each host’s vMotion networking link must be working correctly (obvious, no?). The vMotion network should also be a single IP subnet, not multiple subnets separated by routers (Layer 2 guys, Layer 2)…

Maybe obvious but too often overlooked, the vMotion NIC on each host must support WOL. Very important as well, the switch port that each WOL-supporting vMotion NIC is plugged into should be set to auto negotiate the link speed, and not set to a fixed speed (for example, 1000 Mb/s). Many NICs support WOL only if they can switch to 100 Mb/s or less when the host is powered off… Keep it in mind.

Overall, if using DPM and WoL, remember that hosts are contacted on their vMotion interfaces, so the NICs associated with vMotion must support WoL and must be part of the same Layer 2 domain.

vMotion bandwidth

I get that question often, from many, many people: what should I consider for my vMotion bandwidth? We have a tendency to assume bigger is better. When you have that option it surely won’t hurt, but the requirements behind 10 Gbps go far beyond speed; the wiring, the NICs involved and the L2 switches have to be considered, which usually drives costs up when in fact the requirements are not there.

Yes, bandwidth is highly important and we should pay close attention to it. By providing enough bandwidth, the cluster can reach a balanced state more quickly, resulting in better resource allocation (performance) for the VMs, therefore providing better VM density and a higher ROI on hardware investments.

However, before jumping on the big boy, consider link aggregation and the overall cluster design. My take on the topic is that if you forecast the cluster adequately at the hardware level, ensure all hosts are well architected, and give every VM the appropriate amount of resources it requires to function, vMotion will smoothly fit within 1 Gbps network speed.
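
As a rough back-of-the-envelope check, here is a small Python sketch estimating how long moving a given amount of active memory would take on a 1 Gbps versus a 10 Gbps link. The figures are illustrative assumptions only (ideal throughput, no dirty-page re-copy overhead), so treat it as an order-of-magnitude comparison rather than a sizing formula.

```python
def vmotion_time_seconds(active_memory_gb, link_gbps, efficiency=0.9):
    """Rough lower bound on vMotion transfer time: memory to move divided by usable link rate.
    Ignores dirty-page re-copy passes, so real migrations take longer."""
    usable_gbps = link_gbps * efficiency           # assume ~90% of line rate is usable
    return (active_memory_gb * 8) / usable_gbps    # GB -> Gb, then divide by Gb/s

# Illustrative assumption: a VM with 16 GB of active memory.
for link_gbps in (1, 10):
    print(f"{link_gbps} Gbps: ~{vmotion_time_seconds(16, link_gbps):.0f} s")
# ~142 s on 1 Gbps vs ~14 s on 10 Gbps; the gap mostly matters if you move VMs constantly.
```

In a well-planned cluster, migrations remain occasional, which is exactly why the 1 Gbps figure is often acceptable.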

So it is important, but it should not be used to alleviate an architectural dysfunction; vMotion is cool, but moving transactional VMs around too often is not suggested. Planning the cluster appropriately should be the target and, IF REQUIRED, vMotion leveraged for maintenance, manual cluster balancing and DRS requirements, but it should not be considered an operational tool that VMs depend on. It is a “feature”, not a “function”.

Don’t forget, a VM will only perform at its best once it is well configured, on a well-equipped host, and when it is “moved around” the least.

Conclusion

This blog concludes the series of 3 blogs about “What to get ready in your vSphere Cluster Design”. Little details here and there will make the difference overall in the budget you plan for your next fiscal year. I believe in providing the right information to the decision makers to help them maximize the return on investments made originally at the acquisition point of the virtualization technology.

I keep stressing the fact that vCOps should ALWAYS be considered and should be implemented quickly. If you’re looking for insights into your environment it’s a must; rather than planning new capex, focus on a higher-level conversation and think like an investor.

If you paid $X dollars in the past for something you wanted, chances are you would want to squeeze every bit of it before considering upgrading or changing. I often see too many unbalanced clusters because some of the fundamentals are poorly implemented or wrongly leveraged.

Rely on someone you trust to help you design and brainstorm your environment. The more the better, since the winner in the end will be your organization. I strongly believe it is a fundamental exercise we should ALL do when the time comes to evaluate where we should invest next.

I hope this helped a little, and I am looking forward to all your feedback.

Happy weekend friends!


What to get ready in your vSphere Cluster design – PART 2


In Part 1 (https://florenttastet.wordpress.com/2014/08/10/what-to-get-ready-in-your-vsphere-cluster-design-part-1/) of this series of blogs we reviewed some vSphere best practices, and I want to thank everyone who shared feedback.

All feedback is greatly appreciated and welcome. It is a source of inspiration and allows a wider view of the agility of VMware.

Last week’s blog revealed how crucial a function DRS is. It’s definitely a “must have” for many organizations in need of an organic, self-adjusting, agile and somewhat “self-healing” option that deserves to be deeply considered. Through some of its underlying features, this function enables automated resource consumption and VM placement that best meets a cluster’s needs and goes beyond static resource assignments.

I like to call it “having an eye inside”: being able to move virtual workloads wherever best serves their needs, no matter when, how or why, without disrupting daily operations.

Today, in this Part 2, we’ll look at SRM over vMSC, AutoDeploy, restart priority and the management network in under 1,000 words (or so, LOL).

SRM over vMSC

First, vMSC and SRM are not the same and were not designed in the same way. They are both concerned with disasters, but in different ways: disaster avoidance (vMSC) versus disaster recovery (SRM).

vSphere Metro Storage Cluster (vMSC) allows you to vMotion workloads between two locations to proactively prevent downtime. When you’re aware of an upcoming service interruption in the primary datacenter, vMSC allows you to move your virtual workload from the primary site to the secondary site. You need to be aware of the distance limitations of such an activity. vMSC will only permit this vMotion in the following context:

  • Some form of supported synchronous active/active storage architecture
  • Stretched Layer 2 connectivity
  • 622Mbps bandwidth (minimum) between sites
  • Less than 5 ms latency between sites (10 ms with vSphere 5 Enterprise Plus/Metro vMotion)
  • A single vCenter Server instance

Some of the pros of vMSC: non-disruptive workload migration (disaster avoidance), no need to deal with changing IP addresses, the potential for running active/active datacenters and balancing workloads between them more easily, typically a near-zero RPO with an RTO of minutes, and it only requires a single vCenter Server instance (for cost-conscious decisions).

SRM focuses on automating the recovery of workloads that unexpectedly fail. Once you inject SRM into your infrastructure, replication occurs between a storage source and destination, and a piece of software holds the restart order of the VMs. Note that I said “restart”: it means that at a certain moment there will be a loss of service, the time for SRM to restart the affected VMs (those protected by SRM). The requirements for an SRM architecture are:

  • Some form of supported storage replication (synchronous or asynchronous)
  • Layer 3 connectivity
  • No minimum inter-site bandwidth requirements (driven by SLA/RPO/RTO)
  • No maximum latency between sites (driven by SLA/RPO/RTO)
  • At least two vCenter Server instances

SRM has advantages that should not be ignored. It can define startup orders (with prerequisites), there is no need for stretched Layer 2 connectivity (though it is supported), and it has the ability to simulate workload mobility without affecting production; finally, SRM supports multiple vCenter Server instances (including in Linked Mode).

Choosing carefully will lead to success, as always; considering what each one can’t do instead of what it can do alleviates surprises, but first and foremost, ensure you understand the business objectives behind the requirement, as the two fundamentally do not address the same needs. Look at your RTO and RPO (http://en.wikipedia.org/wiki/Recovery_point_objective), or ask the questions, and the answers will provide strong guidance.

In short, if very high availability and/or non-disruptive VM migration between datacenters is required, use vMSC; otherwise, leverage SRM. Involve your storage infrastructure team in the discussions. vMSC relies heavily on the storage array manufacturers, so you’ll need to consider their capabilities and limitations.

Such a requirement (i.e. specific storage capabilities) does not exist with SRM.

AutoDeploy

Know Norton Ghost? Then welcome to the 21st century of automated image deployment. Typically used in large environments to cut the overhead, vSphere Auto Deploy can provision multiple physical hosts with ESXi.

Centrally store a standardized image for your hosts and apply it to newly added servers. Leveraging host profiles will ensure the deployed image complies with all the functions and features configured on the rest of the hosts in the cluster, providing a standardized cluster.

When a physical host set up for Auto Deploy is turned on, Auto Deploy uses a PXE boot infrastructure along with vSphere host profiles to provision and customize that host.

Now equipped with a GUI (https://labs.vmware.com/flings/autodeploygui), it stores the information for the ESXi hosts to be provisioned in different locations. Information about the location of image profiles and host profiles is initially specified in the rules that map machines to image profiles and host profiles. When a host boots for the first time, vCenter creates a corresponding host object and stores the information in the database.

Unless you are passionate about manual installation, Auto Deploy cuts the time it takes to manually set up your hosts and configure them, and it can then apply a policy ensuring each host complies with the rest of the cluster. When you’re not doing a provisioning task daily, errors can occur; Auto Deploy stores the configurations and applies them on demand.

Remember to always update the reference image if you change something on the hosts in the cluster; it will minimize troubleshooting. Also consider using the hardware asset tag to group the ESXi hosts, to limit the number of rule-set patterns and ease administration.

Restart Priority

Ensuring the backend comes up first, then the front end, is an art. When the backends are databases, they need to be back online first, before the front end can communicate adequately with them.

Part of the HA architecture, the restart priority of a VM or service ensures that the targeted services or VMs come online in an orchestrated manner.

Configuring restart priority of a VM is not a guarantee that VMs will actually be restarted in this order. You’ll need to ensure proper operational procedures are in place for restarting services or VMs in the appropriate order in the event of a failure.

In case of host failure, virtual machines are restarted sequentially on new hosts, with the highest priority virtual machines first and continuing to those with lower priority until all virtual machines are restarted or no more cluster resources are available. 

The values for this setting are: Disabled, Low, Medium (the default), and High. 

If Disabled is selected, VMware HA is disabled for the virtual machine. The Disabled setting does not affect virtual machine monitoring, which means that if a virtual machine fails on a host that is functioning properly, that virtual machine is reset on that same host.

From the book: 

The restart priority settings for virtual machines vary depending on user needs. VMware recommends assigning higher restart priority to the virtual machines that provide the most important services.

■ High. Database servers that will provide data for applications.

■ Medium. Application servers that consume data in the database and provide results on web pages.

■ Low. Web servers that receive user requests, pass queries to application servers, and return results to users.

Beware that if the number of host failures exceeds what admission control permits, the virtual machines with lower priority might not be restarted until more resources become available.
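
For those who prefer scripting these overrides rather than clicking through the UI, here is a minimal pyVmomi sketch of a per-VM restart-priority override. It assumes you already hold the cluster and VM managed objects (for example via a container-view lookup as in the earlier sketch); `db_vm` and the priority value are illustrative placeholders, so verify against your own vSphere version before using anything like this.

```python
# Minimal sketch (assumptions: pyVmomi; `cluster` and `db_vm` already looked up elsewhere).
from pyVmomi import vim

override = vim.cluster.DasVmConfigSpec(
    operation="add",                              # use "edit" if an override already exists
    info=vim.cluster.DasVmConfigInfo(
        key=db_vm,                                # the database VM to bring up first
        dasSettings=vim.cluster.DasVmSettings(restartPriority="high"),
    ),
)
spec = vim.cluster.ConfigSpecEx(dasVmConfigSpec=[override])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```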

Management Network

To start, for the “Management Network” portgroup it is a best practice to combine different physical NICs connected to different physical switches, simply to increase resiliency. It seems simple and obvious, yet too often I see this not done properly.

If the management network is lost, any kind of traffic outside VM traffic and vMotion will be affected. Traffic between an ESXi host and any external management software is transmitted through an Ethernet network adapter on the host; that Ethernet adapter becomes your main concern and should be addressed with much caution.

Examples of external management software include the vSphere Client, vCenter Server, and SNMP clients. Surely we don’t want to lose the functionality of the vSphere Client, especially in stressful moments such as a disaster situation.

Yes, of course you can still jump into the command line and interact with each host, but remember that in environments with more than 5 hosts this becomes a struggle, and most policies are managed by vCenter anyway, so what good does the command line do in emergencies where RTOs are tiny?

During the autoconfiguration phase, the ESXi host chooses vmnic0 for management traffic. You can override the default choice by manually choosing the network adapter that carries management traffic for the host. In some cases, you might want to use a Gigabit Ethernet network adapter for your management traffic. Another way to help ensure availability is to select multiple network adapters. Using multiple network adapters enables load balancing and failover capabilities.

My advice on this point is not to underestimate the management network. Carefully plan for the worst and ensure continuity of service for yourself and the cluster in general.

Conclusion

From the field, SRM, vMSC, AutoDeploy, restart priority and the management network are critical points to consider at the same level as DRS, host affinity and resource pools, but they sit at the lower layer of the infrastructure and support the entire architecture.

In general, simplifying your operations will require a form of image automation for the hosts. Add it and let AutoDeploy push the image; make sure your network supports PXE boot, of course. Once deployed, the image needs to be managed, and for such management to be safe and reliable I strongly suggest ensuring the management network does not diminish the work you’ve put into the cluster architecture. Double up the connectivity… just in case.

Next blog will cover Permanent Device Loss Prevention, HA HeartBeat, DPM & WoL and vMotion bandwidth.

Happy weekend folks!
