EMC, SDS and more…

Yes friends, it has been some time now since EMC released their revolutionary VNX family, classified as unified storage. Unified in the way EMC was treating data, aggregating block and file together under the same roof and the same management, offering a single point of control for all protocols and data types.

In a way, the offering was revolutionary: a single platform, a single point of data management, for two types of data.

And EMC has added tremendously to their initial offering. FAST has gotten better, with tiering leading the charge; VPLEX came in, adding a compelling offering for multi-site deployments and data abstraction that addresses synchronous data availability; data archiving and large unstructured data management have revealed their true colors through Atmos and Isilon; and overall, EMC has positioned itself as a data lifecycle management leader.

Before we dive into the core content, I want to say that I will pass over the Xtrem“xx” line (Cache, IO, SW, etc.). Not because I don’t believe it is an important topic to cover, but because it would require a loooong chat.

Just keep in mind, on the Xtrem“xx” topic, that blocks of data are always better served on the server side.

This blog will cover the VNX 2.0 and ViPR offerings, touching on SDS and integration with the SDDC.

So where are we today? VNX 2.0

Saying EMC has completely revamped their software and hardware wouldn’t do justice to the work behind the newly announced storage generation. In essence they haven’t reshaped the box, and the colors haven’t changed (thank God, the box looks just fine); it’s under the hood that everything is happening, and this is exactly where EMC has focused its energies. After deep thinking, I would sum up VNX 2.0 performance in one phrase: any array service, on any core, all the time!

I’m proud of that one, to be frank. Yes, it might make some of you smile, but it’s the plain truth, and this is how EMC addresses tomorrow’s challenges and performance bottlenecks. True leading technologies are driving change. SSD is hitting the performance-versus-capacity space hard and strong, and when you try to align growth and cost, the business requirements are often challenging to meet.

And you’ll need help from the controllers. IOPS have gone wild, and as we move toward a complete Software-Defined Data Center, we wouldn’t want to struggle at that level.

MCx

MCx is truly different in the performance area, and we’ll be able to witness its outstanding results in the coming months and years. Assigning specific tasks such as RAID to specific cores was, in a way, a true guarantee of performance; however, as Intel increased the number of cores per socket, this approach became a bottleneck that limited overall array performance, creating an unfair balance for all the other tasks requiring the processors’ attention.

MCx allows a fair balance of performance across ALL cores for ALL tasks, such as RAID calculation, FAST and reads/writes, to name only a few. I believe this will bring a larger performance portion to the experience, as the backend IOPS will be aligned with the performance of the controllers, providing linear scaling across cores. Knowing that FLARE pre-dated multi-core x86 CPUs, it was time for a makeover.
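To make the idea concrete, here is a tiny, purely illustrative Python sketch of the scheduling principle; the task names and worker pool are my own placeholders, not MCx internals. Any worker can pick up any task, instead of one task type being pinned to a single dedicated worker:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical task mix; the labels are illustrative, not MCx internals.
TASKS = ["raid_parity", "fast_relocation", "read", "write"] * 4

def handle(task: str) -> str:
    # Placeholder for the real work a storage-processor core would do.
    return f"done: {task}"

# "Any array service, on any core, all the time": every worker in the
# pool can service every task type, rather than RAID being pinned to
# one dedicated core while other cores sit partially idle.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle, TASKS))

print(len(results), "tasks completed across a shared pool of workers")
```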

What’s in it for you?? Overall it brings smoother, more robust alignment and new opportunities in many areas of the array. Right out of EMC technical training, let’s take a look at Multi-Core Cache (MCC), Multi-Core RAID (MCR), Multi-Core FAST Cache (MCF) and Symmetric Active/Active data access.

Multi-Core Cache

With MCC, the cache engine has been modularized to take advantage of all the cores available in the system.

There is also no longer any requirement to manually separate space for read and write cache, meaning no management overhead in ensuring the cache works in the most effective way regardless of the IO mix.

Note that data is not discarded from cache after a write destage (this greatly improves cache hits).

What’s in it for you?? The new caching model employs intelligent monitoring of the pace of disk writes to avoid forced flushing, and works on a standardized 8 KB physical page size.
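As a rough mental model only (a toy class of my own, not the MCC implementation), a unified cache keyed on 8 KB pages, where pages survive a write destage so that later reads can still hit, could look like this:

```python
PAGE_SIZE = 8 * 1024  # MCC standardizes on an 8 KB physical page


class UnifiedCache:
    """Toy model: one pool of pages serves both reads and writes."""

    def __init__(self):
        self.pages = {}     # page number -> data
        self.dirty = set()  # pages not yet destaged to disk

    def write(self, offset: int, data: bytes) -> None:
        page = offset // PAGE_SIZE
        self.pages[page] = data
        self.dirty.add(page)

    def destage(self, backend) -> None:
        # Flush dirty pages to the backend, but keep them in cache so
        # subsequent reads still hit (data is not discarded on destage).
        for page in list(self.dirty):
            backend.write(page, self.pages[page])
            self.dirty.discard(page)

    def read(self, offset: int, backend):
        page = offset // PAGE_SIZE
        if page in self.pages:      # cache hit, clean or dirty
            return self.pages[page]
        data = backend.read(page)   # cache miss: fetch and keep the page
        self.pages[page] = data
        return data
```

The `backend` object is a stand-in for whatever persists the pages; the point is simply that there is one pool of pages rather than separately sized read and write caches.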

Multi-Core RAID

Helping you from the get-go, MCR addresses the handling of hot spares. Instead of having to define disks as permanent hot spares, the sparing function is flexible: any unassigned drive in the system can operate as a hot spare.

There are three policies for hot spares:

  1. Recommended
  2. No hot spares
  3. Custom

The “Recommended” policy implements the same sparing model as today (1 spare per 30 drives).
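The arithmetic behind that ratio is simple; here is a small hypothetical helper (my own, not an EMC tool) that applies it:

```python
import math

def recommended_hot_spares(drive_count: int, drives_per_spare: int = 30) -> int:
    """Recommended policy: roughly one hot spare per 30 drives."""
    return math.ceil(drive_count / drives_per_spare)

print(recommended_hot_spares(75))   # -> 3 spares for a 75-drive system
```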

What’s in it for you?? Interesting when architecting a VNX 2.0 is the idea of “portable drives”, where drives can be physically removed from a RAID group and relocated to another disk shelf.

Multi-Core FAST Cache

All DRAM cache hits are processed without the need to check whether a block resides in FAST Cache, saving this CPU cycle overhead on all IOs.
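In pseudocode terms, a hedged sketch of that lookup order (my own illustration, not MCF code) looks like this; the FAST Cache map is only consulted when DRAM misses:

```python
def read_block(lba, dram_cache, fast_cache, backend):
    """Illustrative lookup order only: a DRAM hit returns immediately,
    so the FAST Cache map is never touched on DRAM cache hits."""
    data = dram_cache.get(lba)
    if data is not None:
        return data                # no FAST Cache lookup cost on DRAM hits
    data = fast_cache.get(lba)     # consulted only on a DRAM miss
    if data is not None:
        return data
    return backend.read(lba)       # finally, go to the backend disks
```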

Symmetric Active / Active

With Symmetric Active/Active, EMC is delivering true concurrent access via both SPs (no trespassing required). While it is only supported for classic LUNs, it does have benefits for pool LUNs by removing the performance impact of trespassing the allocation owner.

What’s in it for you?? Trespassing is past.
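As a hedged illustration of what “no trespass” means for path selection (toy code, not how the array or a multipathing driver actually works), both storage-processor paths can service I/O at the same time:

```python
# Both paths are active simultaneously; no ownership change is needed
# before sending I/O down either one.
ACTIVE_PATHS = ["SP-A", "SP-B"]

def issue_io(lba: int) -> str:
    sp = ACTIVE_PATHS[lba % len(ACTIVE_PATHS)]   # trivial round-robin choice
    return f"LBA {lba} serviced via {sp} (no trespass)"

print([issue_io(lba) for lba in range(4)])
```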

SDS much like SDDC

By far my preferred topic of the announcements, as I drink in the SDDC conversation like juice. Aligning with the VMware SDDC, where *everything* is abstracted, the natural evolution is to abstract the storage… in a vendor-agnostic way.

Yes, many have done it before, but SDS led by ViPR differentiates itself from the competition on many angles; for one, the tight integration with VMware, while revolutionizing how data is handled by decoupling the control plane from the data plane, two networking terms applied to iSCSI and the storage world (read more here http://en.wikipedia.org/wiki/Routing_control_plane and here http://en.wikipedia.org/wiki/Telecommunications_network).

If you were at the VMware vForum in Montreal in October 2013, I had the opportunity to deliver the VMware SDDC message and quickly touched on the SDS conversation. If you remember, I said it: great stuff, SDDC, but what about the storage?

ViPR

Right out of EMC informal training: “EMC ViPR is a storage virtualization software platform that abstracts storage from physical arrays (file, block or object based) into a pool of virtual shared storage resources, enabling a flexible storage consumption model across physical arrays and the delivery of applications and innovative data services. ViPR abstracts the storage control path from the underlying hardware arrays so that access and management of multi-vendor storage infrastructures can be centrally executed in software.”

I think they did a good job at giving an overview of the offering. What passes under the radar is the fact that, while ViPR supports most EMC solutions, it also supports third-party vendors (of course), and more precisely, in the first wave, NetApp. Surely someone let out the “Dr. Evil” laugh from the Austin Powers movies when the idea was realized and released; what better target than “Network Appliance”, the leader in the NAS space??

But what’s the challenge? Where does this apply??

Software-defined compute is something that has been going on for years already, and most vendors that sell hypervisors have put a lot of work into the software-defined area.

I like to refer to storage offerings along an “X” and a “Y” axis, or scale-out and scale-up models (also referred to as horizontal and vertical models). On the “X” axis of storage sits a generic hardware layer with software overseeing it: a true scale-out model. By adding generic disks and SSDs to this hardware platform, you can create a distributed software layer on top and have all the hardware working together to form a fully software-driven storage array.

On this “X” axis, challenges with performance, malfunctions and the resulting availability often create stress.

In the “Y” model, dedicated storage hardware is still present. The positive side of this is a vast choice of platforms, maturity, higher availability and performance guarantees. The “Y” axis, however, has caused challenges for SDS because specific hardware is used and requires specific controls for the orchestration and monitoring of data movement.

This is where the challenge arises: what if I have multiple models of storage, or even worse, storage platforms from different vendors? What if a SAN is replaced by a more recent version?

ViPR delivers a solution to the challenges discussed above. It takes control of the physical storage infrastructure and delivers storage from the storage devices to the host(s), all of this through a standardized, open API using policy-based terminology. Through this one standard API, ViPR will deliver storage to hosts, whether that storage is an “X” or a “Y” type of array.

No matter where the physical storage comes from, that same API is used to deliver and control the data.
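To make that concrete, here is a hedged sketch of what a policy-based provisioning call could look like. Every endpoint path, port, field and header below is an assumption for illustration, not the documented ViPR REST API (consult EMC’s API reference for the real calls); the point is that the caller asks for a class of service, not for a specific array, RAID group or tier:

```python
import json
import urllib.request

# Assumed management address and port, for illustration only.
VIPR = "https://vipr.example.local:4443"

provision_request = {
    "name": "app01-datastore",
    "size": "500GB",
    "vpool": "gold",      # policy: the class of service being requested
    "varray": "site-a",   # where the storage should be delivered
}

req = urllib.request.Request(
    f"{VIPR}/block/volumes",                     # assumed path
    data=json.dumps(provision_request).encode(),
    headers={"Content-Type": "application/json",
             "X-SDS-AUTH-TOKEN": "<token>"},     # assumed auth header
    method="POST",
)
# urllib.request.urlopen(req)  # would submit the provisioning request
```

Whether the volume lands on an “X” or a “Y” type of array is ViPR’s decision, driven by the policy, not the caller’s.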

I spoke with customers and peers about the offering, and the feedback was uniform: don’t we already have storage virtualization with things like VPLEX, NetApp vFiler, HDS VSP, IBM SVC and the like? The answer is “yes”, but does it truly go as far as we need it to go?

Will it do it? I believe so, as dedicated teams at EMC have been working closely on the control plane and the data plane, abstracting one from the other and both from the hardware, allowing SDS to rise among the vendors as a unique differentiator. Will we see a pure software + bring-your-own-hardware = SDS formula of storage in the near future? Yes, but data persistence will lead these decisions.

Now let’s not get it wrong: we will surely need far more exposure to the solution. Software often gets used the wrong way, and if we don’t get it right we’ll surely end up having looonnnng weekends articulating it.

Reading around the web, the competition is already gearing up against the solution. As said, so far what comes out is either a poor understanding of policy-driven storage or simply “hey, we’ve been doing this for years, and here’s our product that does it better”.

At first sight, ViPR delivers something unique to the market, and EMC, much like VMware did in the compute area, is redefining the storage datacenter, how we look at data and live with it, and surely how it extends to the cloud.

From a virtualization standpoint, ViPR enables vCO and vCAC to talk agnostically to a pool of storage underneath and request a certain amount of categorized storage to be delivered to hosts, without the hassle of finding the right tier on the right device with enough capacity and performance…

With VMware leading a successful “software-defining” compute conversation and quickly ramping up “software-defining” networking, isn’t it nice to see a storage vendor stepping up and defining “SDS”? I think so.

My guess is that it won’t be long before we can truly order virtual datacenters out of a very generic physical one through a simple web interface displaying a menu with side orders. Feels like a restaurant, don’t you think?

Or is it already there, and Nutanix is trying to break through before the gorillas arrive?? Oops… did I say that?? Yes I did…

Conclusion

Anyway, those are my very personal and initial feelings and thoughts. Although the hardware is a pretty straightforward conversation, and MCx a differentiator in that conversation for the vendor, I haven’t had access to more detailed information on ViPR, what it does and doesn’t support, or a true deep dive on it; therefore, parts of this blog could be readjusted or reviewed more deeply over time. But I believe that, in essence, we are at the beginning of a new journey, a journey where storage, much like hypervisors, is reaching out to the cloud.

On that note, friends, THANK YOU for reading.

References

I have done extensive research on the topic of ViPR, which I believe is the future of storage abstraction. Take some time to watch:

Deep dive http://youtu.be/eCL6cKSjGDk

Blended Object and File use cases http://youtu.be/lijqoa6dPiY

SRM integration http://youtu.be/zAfF_UQZ4fc

vCAC integration http://www.youtube.com/watch?v=ldMRwN-NtMI

VASA integration http://www.youtube.com/watch?v=TQKhOFR8koM

and read

http://www.emc.com/collateral/handout/h12176-vipr-story-epub.pdf

http://www.emc.com/about/news/press/2013/20130904-03.htm
