Are you functional or optimal?

I still remember to this day sitting in a room full of executives, listening to a presentation where the status of a datacenter refresh didn't make much sense to me. It was in 2009, five years ago already, and the project was surely not optimal.

When you are part of a $2 million project, a question you don't want to answer is: "Is our investment to this date optimal?" But more importantly, you don't want to hear an answer like: "We are functional." It felt like a disconnect between the business and the technology.

I'm a huge supporter of proactive intervention and of knowing what's going on under the hood. I believe that understanding where your performance stands shows where you are and what you need to do to reach the level you expect to reach, or to make sure you're set up the way you deserve to be set up.


In a virtual environment, measuring performance can be tricky, and you'll need to ensure you really understand what is going on. It's not enough to have the right amount of IOPS, controller buffer, disks, or connectivity. Each block requires its own attention, and this complex architecture needs to be optimal. Functional is not an option.

If you reach the point where you need to assess a project from a technical standpoint and understand where you really are, you'll need to rely on tools like I/O Analyzer.

VMware I/O Analyzer, from VMware Labs, is an integrated framework designed to measure storage performance in a virtual environment and to help diagnose storage performance concerns. I/O Analyzer, an easy-to-deploy virtual appliance, automates storage performance analysis. I/O Analyzer can use Iometer to generate synthetic I/O loads, or a trace replay tool to deploy real application workloads.

It is by far the best tool I have ever used, thanks to its simplicity and efficiency at providing the right information, which can be used to improve an overall design. More importantly, it is a tool provided by the manufacturer and leader of the Software-Defined Datacenter, and not something we should overlook.

This will get you running and into an intelligent conversation where the numbers speak and the results show precisely where the project stands.

As far as measuring the overall performance of a virtual infrastructure from the compute side of things, many look at vSOM (vSphere with Operations Management). While I trust the tool, if it is integrated into an existing environment it will only report on what is currently running, and it might be difficult to understand what is underperforming. You'll need a strong level of knowledge in virtual environment operations to adjust the thresholds to where they will speak to you and your lines of business.

An interesting output from vSOM is the capacity report or "health check," which I have always particularly liked. It provides insights on best practices that should be implemented. You will have to read the report and take your own environment into consideration, of course, but essentially it will provide you with best practices. And we all love best practices, as they define the foundation of the environment and provide the groundwork on which you can consider your environment optimal.

Always refer to the following for compute performance best practices:


It may be a question of planning, design, architecture, or simply calculation. Regardless of why you are where you are, the fundamental basics of storage will have a huge impact on your designs, and will translate into a disconnect between the business and the technology decisions when assessing the project.

Design is a sweet balance between performance and capacity, and it has a large impact on virtual environments. A design choice is usually a question of:

How many IOPS can I achieve with a given number of disks?
• Total Raw IOPS = Disk IOPS * Number of Disks
• Functional IOPS = (Raw IOPS * Write%) / (RAID Penalty) + (Raw IOPS * Read%)

How many disks are required to achieve a required IOPS value?
• Disks Required = ((Read IOPS) + (Write IOPS * RAID Penalty)) / Disk IOPS
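The two formulas above can be sketched in Python. The RAID write penalties and the per-disk IOPS figure in the example are assumptions based on commonly cited values (RAID 5 penalty of 4, roughly 150 IOPS for a 15k SAS disk); substitute your array's actual numbers.

```python
import math

# Assumed typical RAID write penalties: RAID 0 = 1, RAID 1/10 = 2,
# RAID 5 = 4, RAID 6 = 6. Check your vendor's documentation.

def total_raw_iops(disk_iops: float, num_disks: int) -> float:
    """Total Raw IOPS = Disk IOPS * Number of Disks."""
    return disk_iops * num_disks

def functional_iops(raw_iops: float, write_pct: float, raid_penalty: int) -> float:
    """Functional IOPS = (Raw IOPS * Write%) / (RAID Penalty) + (Raw IOPS * Read%)."""
    read_pct = 1.0 - write_pct
    return (raw_iops * write_pct) / raid_penalty + raw_iops * read_pct

def disks_required(read_iops: float, write_iops: float,
                   raid_penalty: int, disk_iops: float) -> int:
    """Disks Required = (Read IOPS + Write IOPS * RAID Penalty) / Disk IOPS,
    rounded up to a whole disk."""
    return math.ceil((read_iops + write_iops * raid_penalty) / disk_iops)

# Example: ten 15k SAS disks (~150 IOPS each) in RAID 5 (penalty 4),
# serving a 70% read / 30% write workload.
raw = total_raw_iops(150, 10)                 # 1500 raw IOPS
print(functional_iops(raw, 0.30, 4))          # 1162.5 functional IOPS
print(disks_required(700, 300, 4, 150))       # 13 disks for 1000 IOPS at 70/30
```

Note how the write penalty cuts into the raw number: the same ten disks deliver 1500 raw IOPS but only about 1162 functional IOPS under this read/write mix, which is exactly the gap between "functional" on paper and what the workload actually gets.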

You may also want to refer to the following URLs for storage best practices:

Getting ready for Cloud Computing

I understand the various challenges that exist in adopting a new technology. I mean, we can innovate… and we can be innovative. I believe that Cloud Computing is essentially a question of culture, and definitely a strategy that will be part of our everyday lives for the next couple of years.

But you will not be able to have that conversation if your internal datacenter is not optimal. Functional will typically translate into failure when moving to the cloud. Being optimal will ensure that every aspect of your infrastructure is thoroughly measured and responds properly to workloads once they are out of your premises.

Whether you're looking at Amazon, vCHS, SoftLayer, Azure, or any other local cloud provider, they all provide the right environment for your workload to be optimally responsive. The only failure that can happen is your workload itself causing disruption in the cloud ecosystem, translating savings into costs.

In a recent project I witnessed a hybrid cloud approach where non-mission-critical workloads were hosted outside the main premises, while the mission-critical ones stayed internal.

To that I say: why not?


About florenttastet

As an IT professional and leader, my objective is to help an organization grow its IT department with new and innovative technologies, keeping production at its most efficient level; ensuring the right alignment in the deployment of such technologies through precise Professional Services results in an extraordinary experience for the customer. As a team member on multiple projects, I have developed a strict work ethic, superior communication skills, and the ability to multitask and meet precise deadlines. As an IT veteran with a broad background in consulting, management, strategy, sales, and business development, I have developed deep expertise in virtualization using VMware and Citrix products, with a strong skill set in storage arrays (HP, EMC, NetApp, Nimble & IBM). I have also developed a security practice around Check Point NGX R65-R66 (CCSA obtained) and the Cisco PIX/ASA product line. Specialties: Microsoft infrastructure products; monitoring with HP OpenView, SCOM, and CiscoWorks; firewalls (Check Point, PIX, and ASA); virtualization with VMware (ESX through vSphere & View/Horizon), Microsoft (Hyper-V Server, VDI, and App-V), and Citrix (XenServer, XenDesktop, XenApp); storage (EMC, HP, NetApp, Nimble & IBM); reference architectures and converged datacenters (VSPEX, FlexPod, Vblock, PureFlex & HP Matrix).
