A two-part reflection supporting our next presentation at the vForum in Quebec City on April 9th. A question often asked, never fully answered, or at least never completely covered at its core.
Today: what’s the challenge?
So what’s the big deal?
Let’s put the business aside for once, along with the investment in software; what’s left? The user experience. Yes, again… the 21st century is user-centric, so let’s always keep that in mind.
How do you prepare for something like that? How can you predict such a shift? Unless you are in on the secrets of the gods, it’s fairly difficult. No one could forecast the adoption momentum and the need for more… with less! Especially NOT on the end-user side.
The problem we’re facing today is that our storage, for most of us, isn’t ready for “that” type of workload.
The workload of a virtual desktop is far different from the workload of a server, in many respects. The first is that each and every one of us is fundamentally different, and that shapes the way we work. The business doesn’t care about HOW the job is done; it focuses on WHEN it will be delivered. THAT is what makes us unique in today’s world: it directly drives how, and when, we deliver our work. What’s important is the result.
Virtual desktops will allow an organization to provide a safe, controlled, and managed desktop to its users regardless of where they are and when they wish to work. But where is all “this” being hosted?
You got it! In the datacenter, on an array we designed 3-5 years ago, which now has to address today’s expectations as if they were its normal workload.
What?… Yep, that’s right: use yesterday’s technology for today’s workload. Sweet, isn’t it?
The way I calculate average workload is as follows (from a storage perspective only): a typical virtual desktop (all vendors included) will need between 12 and 18 IOPS in normal operation. That is what it takes to sustain a normal Windows 7 desktop operating system. Now, how many users do you have in your organization? 50? 100? To sustain the bare minimum, you’ll need somewhere between 600 and 900 IOPS for 50 users, and between 1,200 and 1,800 IOPS for 100 users. A Windows 7 image is what, 20-40GB? Now put that on RAID 5 over 15K SAS drives, and for 50 users you’re looking at:
- Capacity required: 50 × 20GB = 1,000GB up to 50 × 40GB = 2,000GB
- IOPS required: 600 to 900
Capacity vs. performance… 3-4 SAS drives at the bare minimum…
So 6 disks overall once you add parity and a hot spare. Do you know the cost per TB of SAS disks?… And if you want to try it on SATA or NL-SAS, good luck.
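The arithmetic above can be sketched in a few lines of Python. The ~180 random IOPS per 15K SAS disk figure is my own rule-of-thumb assumption, not a number from this article; plug in your own hardware’s numbers.

```python
import math

# Rough VDI storage sizing per the rules of thumb above.
# ASSUMPTION: one 15K SAS disk delivers roughly 180 random IOPS.
def vdi_sizing(users, iops_per_desktop=(12, 18), gb_per_desktop=(20, 40),
               disk_iops=180):
    """Return capacity range (GB), IOPS range, and disks needed for IOPS."""
    capacity = (users * gb_per_desktop[0], users * gb_per_desktop[1])
    iops = (users * iops_per_desktop[0], users * iops_per_desktop[1])
    # Disks needed to serve the worst-case IOPS
    # (ignoring RAID overhead, parity, and hot spares)
    disks = math.ceil(iops[1] / disk_iops)
    return capacity, iops, disks

capacity, iops, disks = vdi_sizing(50)
print(capacity)  # (1000, 2000)
print(iops)      # (600, 900)
print(disks)     # 5
```

Run it for 100 users and the disk count doubles, which is exactly the scaling problem this article is about.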
Are we not killing the ROI on this? I think so.
These numbers are theoretical, of course, but not far from reality. And that is ONLY for normal operation, meaning the “I’m not doing much with my virtual desktop” kind of scenario. Without investment it will not be possible… unless we get smarter.
Yesterday’s needs for today’s requirements.
To sustain today’s workload with yesterday’s technology, we need to rethink our datacenter and ultimately invest in our storage arrays to deliver the IOPS required by virtual desktops. The pain point is right here, hiding behind these four letters: I.O.P.S.
In a nutshell, IOPS (Input/Output Operations Per Second) is a standard performance measurement used to benchmark storage devices (hard drives and solid-state drives) and storage area networks (SANs).
The specific number of IOPS possible in any system configuration varies in many ways: the balance of read and write operations, the mix of sequential and random access patterns, the number of worker threads and the queue depth, and the data block sizes. Other factors can affect IOPS too, and the user workload is definitely the one I’m focused on in this article.
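To make the read/write balance concrete, here is a hedged sketch of how the write mix inflates the backend IOPS an array must actually deliver. The RAID 5 write penalty of 4 is the classic rule of thumb, and the 70% write ratio is a commonly cited steady-state VDI assumption; neither figure comes from this article.

```python
# ASSUMPTION: RAID 5 turns each frontend write into 4 backend I/Os
# (read data, read parity, write data, write parity).
def backend_iops(frontend_iops, write_pct, write_penalty=4):
    """Backend IOPS an array must deliver for a given frontend load."""
    writes = frontend_iops * write_pct // 100
    reads = frontend_iops - writes
    return reads + writes * write_penalty

# 900 frontend IOPS (50 desktops at 18 IOPS each) with a
# write-heavy VDI mix of 70% writes:
print(backend_iops(900, write_pct=70))  # 2790
```

In other words, the 900 IOPS from our earlier estimate can quietly become nearly 2,800 backend IOPS on RAID 5, which is why the disk count keeps climbing.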
Say I’m working on an Excel document and you’re working through a web-based application: these work patterns will have completely different impacts on the IOPS required to keep our experiences similar, and unique, at the same time.
With most environments, the only way to address performance is to spread the virtual desktop load over more and more drives and storage controllers. Obviously, this either pushes the cost of a virtual desktop project beyond the original budget or dooms the project to failure.
Ever heard of the “Consumerization of IT”? Users are getting better, and faster, at creating content and consuming it. Read on: http://blogs.softchoice.com/itgrok/ssn/feeling-the-pressure-of-big-data/
We won’t be able to sustain the load, friends; I can feel it, in a year or two we’ll be swamped. And frankly, I’m not sure any C-level or director-level executive will enjoy having monsters in the datacenter to sustain a BYOD or virtual desktop strategy.
“They” want Cheap and Efficient. “We” want Speed and Happy Users.
So where do we meet?
Without reinventing the wheel, moving LUNs around, redesigning workloads, rebalancing RAID groups, or trying to see if the virtualization layer can help us, let’s take a step back and take a moment to think about it.
What is the “real” problem? Performance vs. cost. Sounds familiar? I hope so…
If you’re going to the vForum in Quebec City on April 9th, I’ll be talking about it. If you’re not coming, watch for my post after the event…
Have a good weekend friends.