It’s hard to compare local-bus latency with iSCSI latency.
Did we see the big shift coming? I believe so. vSAN was not something new, and I like to believe the industry had planned for it. Where we saw the huge shift, however, was in the market opportunity, and in leaders who, I think, were tired of battling over IOPS.
We’ll be talking about the hyper-converged datacenter today, and I’m hoping to cover array-SAN platform vs. converged in 500 words… hum…
The Traditional architecture
A few years ago, centralizing data was the perfect way to ensure data was served, and saved, efficiently. The model was solid and proven. Then virtualization arrived in the early 2000s and created a brand-new need. Speed was the name of the game, and demand kept growing, forcing manufacturers to reinvent themselves to remain top of mind.
We saw a shift from FC to iSCSI, and data tiering across SATA, SAS, and SSD rapidly took hold in array architectures. The reason? Very simple: speed. VM servers required dedicated throughput, and VDI brought a new challenge, where each desktop created bottlenecks at every layer: from the disks, to the SAN, to the servers, to the end-user experience.
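The tiering idea above can be sketched in a few lines. This is only an illustration of the concept, not any vendor’s actual algorithm; the IOPS thresholds and the `choose_tier` helper are invented for the example.

```python
# Illustrative sketch of storage tiering: hot data lands on fast media,
# cold data on cheap media. Thresholds below are made-up assumptions.
TIERS = ["SSD", "SAS", "SATA"]  # fastest and priciest to slowest and cheapest

def choose_tier(observed_iops: int) -> str:
    """Pick a tier for a volume from its observed IOPS (hypothetical policy)."""
    if observed_iops >= 5000:
        return "SSD"   # busy VM or VDI datastore
    if observed_iops >= 500:
        return "SAS"   # steady but moderate workload
    return "SATA"      # cold or archival data

print(choose_tier(12000))  # a hot VDI datastore
print(choose_tier(100))    # a cold archive volume
```

Real arrays do this continuously and at block granularity, but the principle is the same: match the cost of the media to the heat of the data.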
It would be hard to describe how many backflips we did to accommodate the requirements, and despite the most efficient architectures, the challenge remained as infrastructures grew. The costs associated with that growth outweighed the benefits.
The Hyper-Converged architecture
While it may sound simplistic, nothing is faster than local storage. When the operating system is installed locally, the experience delivers. VMware brought vSAN to life as a way to address low-cost datacenter requirements in the absence of a traditional array. Having servers share their local disks for VM hosting, while still being able to vMotion a VM from one host to another, remained a strategic advantage for organizations.
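Why local storage wins can be put in back-of-the-envelope terms: a networked read pays for the LAN round trip plus the array’s service time, while a local read does not. The microsecond figures below are illustrative assumptions, not benchmarks.

```python
# Rough latency model (all numbers are invented for illustration):
# an iSCSI read crosses the network and waits on the array;
# a local read touches only the local bus and disk.
LOCAL_SSD_US = 100      # local SSD read, microseconds (assumed)
NETWORK_RTT_US = 500    # iSCSI round trip over the LAN (assumed)
ARRAY_SERVICE_US = 200  # time spent inside the array (assumed)

def iscsi_read_us() -> int:
    """Modeled latency of one read served over iSCSI."""
    return NETWORK_RTT_US + ARRAY_SERVICE_US

def local_read_us() -> int:
    """Modeled latency of one read served from local disk."""
    return LOCAL_SSD_US

print(iscsi_read_us(), local_read_us())  # the gap local storage closes
```

The exact numbers vary wildly with hardware and network design, but the structural point holds: the network round trip is a term local storage simply does not pay.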
However, vSAN at the time had some limitations, which gave new players an opening to change the game in the “converged and hybrid IT” battle.
I do believe hyper-converged is a game changer. Yes, at this point it is meant to address a specific requirement and may have limitations with very large VMs, but it addresses a need, and that need is well served.
Beyond the fact that the secret sauce lives at the software level, the aggregation of local disk capacity and speed arrives at a very sweet moment, when organizations are trying to understand the benefits of virtual desktops. The long term suggests that remote work will be favored by organizations, and providing an efficient, corporate-like environment will require a transparent experience for users.
In the datacenter, the growth of virtual desktops is precisely what a hyper-converged infrastructure addresses. In a tiny datacenter footprint, you would never find the equivalent of a full traditional architecture, where all VMs have access to pooled storage and compute, yet hyper-convergence delivers exactly that.
This is just part 1 of the blog, where we started to look at the differences between traditional and hyper-converged architectures. We all know traditional architectures very well, so in part 2 I will cover the hyper-converged architecture and do a “deep dive”… in 500 words… or so.
Have a good weekend, friends.