Behind this simple question lies a constellation of answers and, in fact, far more business-related questions than simply aligning a list of items. Remember that “cloud” equals “service”, and therefore the underlying components should not be the focus area.
When you look at building a “cloud-like” datacenter, two avenues exist: converged and hyper-converged.
You always need to define your unique needs for the services you want to deliver to the organization, and typically these are defined by the objectives expressed at the corporate and line-of-business levels.
It has been an amazing journey in the datacenter. Driven by cost reduction, we have witnessed the evolution from the typical silo approach, where dedicated teams worked to make the storage, network and compute layers shine individually, to hypervisor-based convergence, where service levels drive the consumer (a.k.a. users, clients, etc.) experience, abstracting the backend infrastructure in the process.
While hypervisor-based convergence led front-end strategies for many years, the industry quickly realized that the backend required a review to keep pace with increased SLAs and highly available applications, and so entered the well-known “reference architectures” and the “converged datacenter”.
Having the backend hardware precisely aligned, with every aspect verified to work together and supported by the manufacturers, gave the front-end highly reliable foundations and enhanced the services delivered to the consumer community.
Still, the need to shorten time to market required an innovative approach from manufacturers. If you haven’t heard about it, it’s called “hyper-converged”.
There are two approaches to building a converged infrastructure: the building-block approach or the reference architecture. The first, the building-block approach (VCE, HP), involves fully configured systems (servers, storage, networking and the virtualization layer) installed in a large chassis as a single building block.
The infrastructure is expanded by adding additional building blocks.
While one of the main arguments in favour of a converged infrastructure is that it comes pre-configured, that same trait also plays against it.
From an architectural point of view, everything is pre-configured, which allows a quick and easy roll-out. But if your needs differ from the proposed predefined solution, you’re essentially out of luck. The same applies to the components themselves: each is selected and configured by the manufacturer, and swapping in a different component would either not be supported or simply not function.
The last aspect is patching. The building-block approach forces updates to be deployed on the vendor’s timetable rather than the user’s.
It is possible to build a converged infrastructure without using the building-block approach. The second option is a reference architecture, such as VSPEX or FlexPod, which allows the company to use existing (or new) hardware to build the equivalent of a pre-configured converged system, provided you remain aligned with the reference architectures published by the manufacturers concerned.
In a non-converged architecture, physical servers run a hypervisor, which manages virtual machines (VMs) created on that server. The data storage for those physical and virtual machines is provided by direct attached storage (DAS), network attached storage (NAS) or a storage area network (SAN).
In a converged architecture, the storage is attached “directly” to the physical servers. A precise capacity alignment between flash (SSD), SAS and NL-SAS or SATA tiers is leveraged to serve the virtual machines according to their needs.
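To make the tiering idea concrete, here is a minimal sketch of capacity-aligned placement. The tier names, sizes and the capacity-first fallback rule are illustrative assumptions, not any vendor’s actual algorithm:

```python
# Hypothetical tier model: each tier has a free-capacity budget (GB).
# A disk is placed on the tier matching its performance need, spilling
# over to slower tiers when the preferred tier is full.
TIERS = [
    {"name": "flash",  "free_gb": 500},    # fastest: "high" need
    {"name": "sas",    "free_gb": 2000},   # "medium" need
    {"name": "nl-sas", "free_gb": 8000},   # slowest: "low" need
]

NEED_RANK = {"high": 0, "medium": 1, "low": 2}

def place_disk(size_gb, need):
    """Return the tier name chosen for a VM disk, or None if no tier
    at or below the required performance level has room."""
    for tier in TIERS[NEED_RANK[need]:]:
        if tier["free_gb"] >= size_gb:
            tier["free_gb"] -= size_gb
            return tier["name"]
    return None
```

For example, a latency-sensitive disk lands on flash while bulk data goes straight to NL-SAS, and an oversized hot disk falls back to SAS rather than failing outright.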
On the flip side, a hyper-converged infrastructure runs the storage controller function as a service on each node in the cluster to improve scalability and resilience. Even VMware is getting into it, and it was interesting to see this giant trying to catch up in a market it had not created. The company’s new reference architecture, called EVO, is a hyper-converged offering designed to compete with companies such as Nutanix, SimpliVity or NIMBOXX.
The two VMware systems, EVO:RAIL and EVO:RACK, were announced at VMworld 2014 in August. It is an interesting shift, as VMware was previously active only in the converged infrastructure market through the VCE partnership.
If we look at Nutanix, the storage controller logic, which is normally part of the SAN hardware, becomes a software service attached to each VM at the hypervisor level. The software-defined storage takes all of the local storage across the cluster and configures it as a single storage pool.
Data that needs to be kept local for the fastest response can be stored locally, while data that is used less frequently can be stored on one of the servers that has spare capacity.
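The pooling-with-locality idea can be sketched as follows. This is an illustrative model under simple assumptions (made-up node names and a greedy most-spare-capacity fallback), not Nutanix’s actual data-placement code:

```python
class StoragePool:
    """Aggregate each node's local disks into one cluster-wide pool."""

    def __init__(self, nodes):
        # nodes: mapping of node name -> free local capacity in GB
        self.free = dict(nodes)

    def total_free(self):
        return sum(self.free.values())

    def place(self, size_gb, requesting_node, hot=True):
        """Pick a node for a chunk of data.

        Hot data stays on the requesting node when it has room (fastest
        response); otherwise, and for colder data, fall back to the
        node with the most spare capacity."""
        if hot and self.free.get(requesting_node, 0) >= size_gb:
            target = requesting_node
        else:
            target = max(self.free, key=self.free.get)
            if self.free[target] < size_gb:
                return None  # no node can hold a chunk this size
        self.free[target] -= size_gb
        return target
```

The point of the sketch is that placement is a pure software decision over one logical pool, which is exactly what distinguishes this model from carving LUNs out of a dedicated array.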
As we move to a highly automated datacenter, or “software-defined datacenter”, datacenter solutions are becoming less and less complex to manage, which pleases buyers and decision makers.
If you’re in a tech refresh, it might be time to evaluate your options and ensure the next phase of your strategy relies on a best-of-breed approach that will sustain you for the next 3-5 years.
Now, converged or hyper-converged? The decision has never been tougher to make. Should you stay with the traditional approach? Maybe, unless your operational expenditures are shrinking.
The bottom line is to evaluate alternatives that offer ease of management, because at the end of the day we surely don’t want valuable resources and talented people hands-on with hardware while the business is urging focus on revenue growth and short time to market.