vSAN and the VMkernel

Everyone has blogged about it! I figured I should too… if you know what I mean LOL.

This is not intended to be technical. Think of this post as sharing thoughts. I will give myself some time to evaluate vSAN on my own terms, but right now there are some features that I haven’t yet found in vSAN.

Frankly, there’s tons of information on the topic all over the net. Interestingly enough, they all note that vSAN, like “converged infrastructures”, is at a very early stage of market demand, and they all note that it addresses a very specific set of scenarios.

http://www.computerweekly.com/news/2240166057/VMware-Virtual-SAN-vision-to-disrupt-storage-paradigm

http://www.networkcomputing.com/storage-networking-management/vmware-vsan-intriguing-but-a-ways-off/240161008

http://www.yellow-bricks.com/2013/09/09/vmware-vsphere-virtual-san-design-considerations/

http://blogs.vmware.com/vsphere/2013/10/new-virtual-san-vsan-white-paper-available.html

http://cormachogan.com/vsan/

Although vSAN is not, in essence, a complete redefinition of local storage, there is a fundamental difference in how local storage is utilized and how the underlying storage capacity should be viewed.

VSA

VMware detected early on that, for some very specific needs, the investment in local storage should not be left aside. Personally, when designing certain VMware farms, I was always put in a position where I preferred SD cards to traditional internal HDDs, simply because of the wasted HDD investment once VMFS formatting was done.

Cisco and HP captured this situation in the market, and VMware strongly supported the use of SD cards instead of provisioning platforms with large quantities of local storage; losing a 1TB drive for 400MB of “OS” is not something we like to talk about… don’t you think?

So VSA was, for me, an easy sell when very specific needs had to be addressed. Despite the many limitations in the way it managed data compared to a traditional SAN, VSA was a good option, although its price was kind of challenging.

Agreed, the aggregation of local storage through VSA was basic and required a separate virtual appliance to manage the IO requests; you needed to be very precise when configuring the VSA, and the configuration was purpose-built for a very specific scenario. But it worked. And it worked well in very small production environments.

The product’s biggest shortcoming was its understanding of workloads. Aggregating capacity in a “network RAID-like” architecture was, in essence, revolutionary, and I know a few customers it truly helped in building a business case for a larger data centre investment. However, you needed to manage that storage carefully and shouldn’t have expected much out of it in the traditional way we look at storage.

vSAN

I will reiterate the message: vSAN is NOT a VSA.

First, because it is baked into the ESXi hypervisor and managed from vCenter, with no virtual appliance needed to manage the underlying capacity; and second, because it has SSD support.

Now, among other things, the dedication of a vSAN VMkernel port is probably what differentiates vSAN from VSA and others, in the sense that when you dedicate resources to local storage traffic, you’re really committing your investment to storage traffic.

There are obvious advantages to dedicating local storage traffic to its own VMkernel port. The first one is focus. The sole purpose of that configuration is to focus on storage replication and inter-host connections at the storage level. Much like an iSCSI VMkernel port does for iSCSI connections, a vSAN VMkernel port has a single job: managing the inter-host storage traffic.
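For illustration, here is a rough Python sketch of tagging a VMkernel port for vSAN traffic with pyVmomi. The vCenter name, credentials, host name and device are all placeholders, and the vSAN host config types (vim.vsan.host.ConfigInfo and HostVsanSystem.UpdateVsan_Task) reflect my reading of the vSphere 5.5 SDK, so verify them against the official documentation before using anything like this.

```python
# Hedged sketch: tag vmk2 for vSAN traffic on one host.
# Assumes pyVmomi and the vSphere 5.5 vSAN host config API;
# type and property names should be verified against the SDK.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com',          # placeholder vCenter
                  user='administrator@vsphere.local',  # placeholder account
                  pwd='secret')
content = si.RetrieveContent()

# Locate the target ESXi host (name is a placeholder).
host = content.searchIndex.FindByDnsName(dnsName='esxi01.example.com',
                                         vmSearch=False)

# Build a vSAN config that dedicates vmk2 to vSAN traffic.
port = vim.vsan.host.ConfigInfo.NetworkInfo.PortConfig(device='vmk2')
config = vim.vsan.host.ConfigInfo(
    enabled=True,
    networkInfo=vim.vsan.host.ConfigInfo.NetworkInfo(port=[port]))

# Apply it through the host's vSAN system.
host.configManager.vsanSystem.UpdateVsan_Task(config)

Disconnect(si)
```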

While not all features of a traditional SAN are yet in place in vSAN, remember that this is version 1.0, and I hope that VMware will not rest until it has an offering fully competitive with Nutanix.

I hope my last sentence didn’t sound too negative about Nutanix. Believe me, I enjoyed the Nutanix courses and training while I suffered from not seeing a true response from VMware; vSAN came in to appease my feelings. While there are tremendous advantages in a Nutanix system (the one I love is where NOS relocates a virtual disk to the node where the virtual machine is being computed, for better service to the VM), it remains a very new way to look at the “convergence of storage and compute”; aside from the geek in me that loves it, there’s a business value that still needs to be defined, and this is where Nutanix is focusing its energies: not the technology, but the “marketability” of the solution. In a stretched cluster scenario, we need to be careful how we plan our architectures if the intent is to provide redundancy across regional sites.

So, to recap: vSAN aggregates locally attached storage and creates distributed datastores out of it, providing cost-effective, highly available and high-performance shared storage for VMs. It is embedded in the ESXi 5.5 hypervisor. Each host contributing storage must have at least one SSD, used for read cache and write buffer, and at least one spinning HDD, used to store virtual disk files (VMDK), snapshots and swap. It does not offer the rich out-of-the-box features of Nutanix such as dedupe, compression, DR, multiple storage tiers, auto-tiering, etc.

Writes land on two SSDs located in two different ESXi hosts. Data is buffered in SSD and later destaged to HDD.
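To make that write path concrete, here is a toy Python model, purely illustrative and not vSAN code, of the default “number of failures to tolerate = 1” behaviour: a write is acknowledged once it sits in the SSD buffer of two different hosts, and it is destaged to the HDD tier later.

```python
# Toy model of the vSAN write path with FailuresToTolerate = 1.
# Purely illustrative; this is not how vSAN is implemented internally.
import random

class Host:
    def __init__(self, name):
        self.name = name
        self.ssd_buffer = []   # SSD write buffer
        self.hdd = []          # spinning-disk capacity tier

    def destage(self):
        # Buffered writes eventually move from SSD to HDD.
        self.hdd.extend(self.ssd_buffer)
        self.ssd_buffer.clear()

def write_block(hosts, block, failures_to_tolerate=1):
    """Mirror a write to (failures_to_tolerate + 1) different hosts' SSDs."""
    replicas = random.sample(hosts, failures_to_tolerate + 1)
    for host in replicas:
        host.ssd_buffer.append(block)
    # The write is acknowledged only once every SSD copy exists.
    return [h.name for h in replicas]

cluster = [Host(f"esxi0{i}") for i in range(1, 4)]
print(write_block(cluster, b"block-42"))   # e.g. ['esxi01', 'esxi03']
for host in cluster:
    host.destage()
```

The capacity implication is worth noting: with one failure to tolerate, every object consumes roughly twice its size in raw capacity across the cluster.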

Virtual SAN provides:

▪ Automated storage management via intelligent, VM-based policies (see the policy sketch after this list)

▪ Dynamic scalability to grow storage and compute on an as-needed basis

▪ Integrated with vSphere and managed in vCenter

▪ Built-in resiliency with protection from multiple hardware failures

▪ Per-VM SLA management via intelligent data placement

▪ Instant storage provisioning without complex workflows
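Those VM-based policies boil down to a handful of per-VM capabilities. As a rough illustration, here is how the five vSAN 1.0 policy attributes could be modelled in Python; the attribute names come from the vSAN documentation, but the helper itself is hypothetical.

```python
# Hypothetical helper modelling the five vSAN 1.0 policy capabilities
# and the raw capacity a VMDK consumes under a given policy.
from dataclasses import dataclass

@dataclass
class VsanPolicy:
    hostFailuresToTolerate: int = 1   # failures the object must survive
    stripeWidth: int = 1              # disk stripes per object
    cacheReservation: int = 0         # % of flash read cache reserved
    proportionalCapacity: int = 0     # % of object space reserved up front
    forceProvisioning: bool = False   # provision even if policy can't be met

def raw_capacity_gb(vmdk_gb: float, policy: VsanPolicy) -> float:
    """Each tolerated failure adds a full mirror copy (witness overhead ignored)."""
    return vmdk_gb * (policy.hostFailuresToTolerate + 1)

gold = VsanPolicy(hostFailuresToTolerate=2, stripeWidth=2)
print(raw_capacity_gb(100, gold))   # 300.0 GB raw for a 100 GB VMDK
```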

VMkernel

What I like most about vSAN is, again, the VMkernel. In a VMware deployment, I really appreciate this feature; see http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2058368 and http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.storage.doc%2FGUID-8408319D-CA53-4241-A3E4-70057F70030F.html .

What I also appreciate in such “storage-compute convergence” is the approach to the “Storage Area Network” and the costs associated with it. Typically, you never want your iSCSI traffic passing over the same Layer 2 as your network traffic. Call me paranoid, but it is what it is. I know, I know, today’s top-notch Layer 2s and 3s would allow such an architecture, and I have done it. But it requires so many elements to be present at the same time, and such powerful Layer 2 technology (Cisco would be my preferred one, but often there are others…), that I often revert to dedicating two 2960s to iSCSI traffic. And remember, we’re not talking today about infrastructure convergence or UCS… LOL

“Storage-compute convergence” is making this requirement fade away. Is this a good thing? Well, it’s not a bad thing, let’s put it that way. It’s different, and such an approach requires some thinking, which we ALL love to do.

Now, it does not completely “cut” the need for Layer 2. Switches are still needed, and you still need to understand precisely how traffic will be carried from one host to another. Will you be leveraging your existing Layer 2 infrastructure? Will you be dedicating a Layer 2 infrastructure to the requirement? But one thing is sure: the high-IO transactions typically seen in a traditional array architecture will be, or could be, local to the host only, and will surely benefit from not having to deal with TCP handshakes over a congested wire (in some scenarios). (Yes, IO cards allow better placement of blocks, but that is not the topic of this post.)

Having a dedicated VMkernel frees up host cycles compared to a virtual appliance, and I believe this is where the biggest difference lies. The second one would be how the high-IO transactions are computed and addressed: local is far better than remote where blocks of data are concerned.

In a nutshell, I love options. I love having the choice. I love being able to connect business needs with the best technologies, and there’s a new kid in town. I love discussing avenues of success, and I truly love uncovering the underlying reasons behind decisions and directions, both business and technological.

So why should you be looking at vSAN?

I would regroup my answer under the question: “Why should you be looking at compute-and-storage convergence?”

While this is a very new way to look at the datacenter, and giants like Google, Amazon and others have this type of solution in place, we are not all Google.

In a “Software-Defined Everything” era, vSAN fulfils a need where budget and capacity/performance typically do not follow one another.

We all love linear performance, and vSAN, much like Nutanix, offers it: each host you add brings storage performance along with compute.

This clearly sends an SDDC message and makes it far more compelling in a data centre virtualization strategy, where workloads can be moved from a source to a destination transparently and efficiently to leverage the best storage destination for the need. Moreover, it allows specific workloads to be addressed purposefully.

Growing the SAN is not always the best move; having options is surely the best approach for clients.
