Much like I posted last week https://florenttastet.wordpress.com/2013/07/27/a-total-silence-im-back/ the industry is looking at every possible angle to increase and support the VDI adoption momentum.
Challenges arose fast when IOPS came into the picture. While server virtualization was fairly "easy" to evaluate, desktop virtualization created some roadblocks of its own. The aggregation of virtual desktop components quickly imposed an additional "tax" on storage, slowing down the adoption of the future…
Storage manufacturers have been creative and, thanks to human nature, they have overcome the challenges and brought top-notch technologies to market, such as automated disk tiering and the 5 Tier Storage Model http://wikibon.org/wiki/v/Five_Tier_Storage_Model
What a beautiful technology, but it comes at a cost… creating a new roadblock: TCO & ROI.
Citrix has made some tactical acquisitions and created solutions to overcome the costs associated with this new IT need, and I covered Citrix's offering last week (https://florenttastet.wordpress.com/2013/07/27/a-total-silence-im-back/)
This week, it's all about VMware.
An indisputable leader
There is no doubt that VMware is leading the datacenter virtualization journey. Yes, Microsoft is gaining momentum, and 2012 is pure proof of that; however, VMware's strength has always been to renew itself and push the limits further, and it has built the vision of a true Software-Defined DataCenter, where automation plays with user workloads and peacefully keeps IT teams ahead of the game.
All virtualization vendors are working closely with every storage vendor possible. EMC, IBM, HP, Nimble and NetApp, to name just a few, have often reinvented themselves to stay on top of the wave, and some have significantly improved their offerings.
But gear always equals dollars; software does too, to a certain extent… just differently. Remember that we are in the "SDDC", or Software-Defined DataCenter, era: Cisco is in, EMC leads the charge, and VMware created the concept and supports it with new, enhanced feature sets.
SSDs and storage tiering give us options when IOPS are in play and storage arrays are the playing field. But without having to reinvent or rethink our entire storage designs, how can we address tomorrow's needs with today's software features?
At block level
I'm not teaching you anything when I say that every piece of digital content created and stored on any form of disk (from a simple USB key to a disk aggregation in a storage array), from a single Word document to a virtual machine, is composed of blocks that, once aggregated together, recreate on demand the file you want to use.
When you have a virtual machine with a 20GB disk, how much of that 20GB is truly accessed? How much is "sleeping" while the entire OS is running? The strength of automatically moving only pieces of a file from one type (and therefore speed) of disk to another is definitely the future of a peaceful experience for users and on-call system administrators, while addressing the highest levels of SLA. Automated disk tiering, software-led, brings a seamless way to move only the most frequently accessed blocks to a fast tier of disks.
In the computerized world, a single 10MB Word document is broken into tiny pieces, called blocks, and stored in "individual containers" on disk. When you access that file from your application, the pieces are called back from their containers, aggregated together, and presented to you in the form of the Word document you're looking for, through Microsoft Word in our example.
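To make the idea concrete, here is a minimal, purely illustrative Python sketch of a file being split into blocks and reassembled on demand. The `containers` store and the `store`/`read` functions are my own inventions for the example, not any real filesystem API:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative; real block sizes vary by filesystem and array
containers: dict[str, bytes] = {}  # the "individual containers" on disk

def store(data: bytes) -> list[str]:
    """Split a file's bytes into fixed-size blocks and put each one
    in an 'individual container' keyed by a digest of its content."""
    ids = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        block_id = hashlib.sha1(block).hexdigest()
        containers[block_id] = block
        ids.append(block_id)
    return ids

def read(ids: list[str]) -> bytes:
    """Call the pieces back from their containers and reassemble the file."""
    return b"".join(containers[i] for i in ids)

document = b"Hello, Word document!" * 1000  # ~21 KB of content
block_ids = store(document)
assert read(block_ids) == document  # the file comes back exactly as written
```

Nothing more than that: the file on disk is just an ordered list of block references, which is precisely what makes per-block placement possible.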
A virtual machine in the VMware world is made (primarily) of a vmx and a vmdk file. The vmx holds what defines the VM (how much memory, CPU, etc.; typically very small in size), while the vmdk holds the content of the OS (files, DLLs, INIs, etc.; typically very large). In general, only a subset of the files composing an operating system (OS) is accessed while the OS is running: only some of the DLLs a Windows OS hosts are generally touched. The functions of the OS that are not required are simply never invoked, so they never call on the files they would need to perform their tasks.
These files are "dormant" within the OS: not loaded in memory, not processed by the CPU, just there, "waiting" to be called to execute the function they were created for, all while taking up space in the vmdk hosting the OS. "Waiting" means that part of the OS is not being used, so why host the vmdk file that holds the OS entirely on the same type of disk?
Some functions of an operating system are used only "from time to time" and require the OS to keep them "dormant" in memory. Other functions are constantly accessed and processed. Storage tiering reads these access patterns and defines on which type of disk the most and least used blocks should be hosted, to ensure an effective user experience.
So tiering, for a single file or set of blocks, leaves the "least" accessed blocks of that file on, let's say, SATA or NL-SAS, moves the ones that are "often" accessed to, let's say, SAS drives, and migrates the ones that are "frequently" accessed to an SSD tier of disks. It alleviates the nightmare of boot storms and ensures a true best user experience at any time.
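As a rough illustration of that placement logic, the per-block decision might look like the sketch below. The thresholds and tier names are made-up assumptions for the example, not values from any real array:

```python
# Hypothetical tiering policy: place each block on a tier by access frequency.
# Thresholds are illustrative; real arrays use their own heuristics and windows.
TIERS = [
    (1000, "SSD"),     # "frequently" accessed -> flash tier
    (100,  "SAS"),     # "often" accessed -> fast spinning disk
    (0,    "NL-SAS"),  # "least" accessed -> cheap capacity tier
]

def place(access_count: int) -> str:
    """Return the tier whose threshold this block's access count reaches."""
    for threshold, tier in TIERS:
        if access_count >= threshold:
            return tier
    return TIERS[-1][1]

# One file, many blocks, one placement decision per block:
access_counts = {"block-0": 12000, "block-1": 350, "block-2": 4}
placement = {b: place(n) for b, n in access_counts.items()}
# block-0 -> SSD, block-1 -> SAS, block-2 -> NL-SAS
```

The point is that the unit of migration is the block, never the whole vmdk: one desktop's disk can live on all three tiers at once.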
What if we don't have any SSDs or the capability to use a storage tiering option?
VMware did it again! Hearing some of the challenges their customers were facing, VMware looked at the situation and evaluated the need for, and feasibility of, working at the block level in the hypervisor. When you're the virtualization hypervisor of choice, it's "easy": you already handle the entire datastore and are capable of understanding the blocks you are creating and working with.
In a nutshell: as a host, why wouldn't I be able to "cache" some of the blocks of a certain file in my own memory…
Well, that's what CBRC offers, and it disrupted the straightforward strategies of storage arrays offering disk tiering. Of course, this was a positive disruption and a true requirement in the industry (http://myvirtualcloud.net/?p=3094):
“The CBRC feature is already baked into vSphere 5 and VMware View administrators are expecting it to be integrated with VMware View in the future. CBRC will help address some of the performance bottlenecks and the increase storage cost for VDI. CBRC is a 100% host-based RAM-Based caching solution that help to reduce read IOs issued to the storage subsystem and thus improves scalability of the storage subsystem while being completely transparent to the guest OS.
The feature has been primarily designed to help with read-intensive I/O storms, such as OS boot and reboot, A/V scans, and administrators should expect to see significant reduction in peak read I/O being issued to the array for these workloads.”
Content-Based Read Cache (CBRC) was introduced in View 5.1 and is used to enhance the read performance of virtual desktops. By assigning an area of host memory for cache, and then creating 'digest' files for each virtual machine disk, CBRC overcomes the "most accessed blocks" challenge experienced in every desktop virtualization deployment. This feature is most useful for shared disks that are read frequently, such as View Composer OS disks. A new world now opens up without worries about performance and storage capacity, which are usually the two main drivers in defining a truly balanced storage array solution (capacity vs. IOPS).
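To picture what a content-based, host-RAM read cache buys you, here is a toy Python model. The class and method names are hypothetical and this is in no way the real CBRC implementation; it only shows the principle: blocks are keyed by a digest of their content, so identical OS blocks shared by many desktop clones occupy the cache once, and a boot storm hits the array only on the very first read.

```python
import hashlib
from collections import OrderedDict

class ContentBasedReadCache:
    """Toy model of a content-addressed read cache held in host RAM."""

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.cache: OrderedDict[str, bytes] = OrderedDict()  # LRU order
        self.hits = 0
        self.misses = 0

    def read(self, digest: str, fetch_from_array) -> bytes:
        if digest in self.cache:
            self.cache.move_to_end(digest)   # mark most recently used
            self.hits += 1
            return self.cache[digest]
        self.misses += 1
        block = fetch_from_array(digest)     # a real read I/O hits the array
        self.cache[digest] = block
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return block

# 100 clones booting read the same OS block: one array read, then all hits.
array = {hashlib.sha1(b"boot-block").hexdigest(): b"boot-block"}
cbrc = ContentBasedReadCache(capacity_blocks=1024)
digest = next(iter(array))
for _ in range(100):
    cbrc.read(digest, array.__getitem__)
# 1 miss (the first clone), 99 hits: the boot storm never reaches the array again.
```

That deduplication by content is exactly why the feature shines on shared linked-clone OS disks, where hundreds of desktops read the same blocks.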
I hear you say it's expensive too. Caching in host memory requires additional memory capacity that might not be present. To all of you I say "yes!" However, it may be faster to add memory to a host than to destroy a current array design (and far cheaper).
I may also quickly bring up SE Sparse Virtual Disks for those of you looking at a virtual desktop project and currently brainstorming about it. This anticipated new VMDK format is available in View 5.2, where it is the de facto default format for linked clones. The format itself has existed since vSphere 5.1, but View 5.2 was the version that introduced it to this area of virtualization. Basically, SE Sparse Disks give View users two features:
- A linked clone with a 4KB grain size akin to standard VMDKs, as opposed to older linked clones, which had a grain size of 512 bytes. That smaller grain size did not perform optimally on some arrays due to alignment issues and partial-write issues.
- An automated way to shrink the desktop size. This has been a bugbear for some time, as desktops based on linked clones would grow with use and consume disk space, and considerable administrative overhead was incurred to reduce their size.
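The alignment point in the first bullet can be shown with a little arithmetic. Assuming a hypothetical array with 4KB sectors (the function below is mine, purely for illustration), any write that does not cover whole sectors forces the array into a read-modify-write cycle:

```python
# Illustrative arithmetic only: when does a 4KB-sector array have to
# read-modify-write because a guest write covers only part of a sector?
ARRAY_SECTOR = 4096

def rmw_needed(offset: int, length: int) -> bool:
    """A write not aligned to, or not filling, whole array sectors forces the
    array to read the sector, merge the partial data, and write it back."""
    return offset % ARRAY_SECTOR != 0 or length % ARRAY_SECTOR != 0

# Old linked clones: 512-byte grains can land anywhere inside a sector.
assert rmw_needed(offset=512, length=512) is True
# SE Sparse: 4KB grains on 4KB boundaries map cleanly onto array sectors.
assert rmw_needed(offset=8192, length=4096) is False
```

Every extra read-modify-write is an extra I/O the array must absorb, which is why the 512-byte grain hurt on some back-ends.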
For additional information on SE Sparse Disks, there is a very good article covering them on the vSphere blog (http://blogs.vmware.com/euc/2013/03/space-efficient-virtual-disks-in-practice.html).
VMware CBRC & Citrix PVS/MCS
I have often found CBRC far easier to implement while delivering true ease of block-level management. While PVS and MCS are indeed significant players in storage management (even though they do not truly address the same challenge), CBRC, at a fraction of the cost, adds what is needed: simple, yet highly performant, block-level management for all virtual desktop deployments.
You may say that I found PVS and MCS efficient in my previous post (https://florenttastet.wordpress.com/2013/07/27/a-total-silence-im-back/), and you would be right. But given that PVS and MCS have not yet demonstrated their ease of management, nor shown that they truly address both network performance and storage capacity reduction at the same time, I still see CBRC as a cost-effective solution, very easy to enable and far easier to manage.
For a single setting to enable and a few GB of memory to add (or to account for in the design), the value added by the feature is not to be ignored, whether or not you have SSDs and storage tiering in the back to support it.
If you are debating your virtual desktop solution, choices are available, but all have pros and cons that need to be deeply evaluated… a POC is your way to go at that level. Once you've moved forward, the change required to switch technology (and vendor) will be an interesting journey; remember that it is not about the technology but the user experience. Keeping that in mind will definitely ensure your success and allow your business to stay ahead of the game while having your people happy to come to work and use what you've designed.