Be #Hybrid… #IoT will eat up the cloud

I've been surfing HPE's wave since 2015. And "wave" is a small word; call it a tsunami! HPE has deployed all its strengths to sharpen its focus. It's no exaggeration to say that HPE has completely transformed itself and is looking forward. Think about it: #Aruba, #SimpliVity, #NimbleStorage, #OneView, #Synergy, #Niara, #CloudCruiser and so much more.

If that's not being hybrid-ready, cloud lovers, I don't know what is 🙂

That's not to say the cloud is not a destination; it has its place and fits well in many lines of business, but it can't be the sole strategy of an agile, fast-growing IT journey.

Think differently before it’s too late

The biggest challenges we all face are growth and data efficiency. We would have to be blind not to admit that everything we do is about increasing our organization's profitability by bringing innovative solutions to business challenges. I came across a Gartner article (http://blogs.gartner.com/thomas_bittman/2017/03/06/the-edge-will-eat-the-cloud/) recently that stands out at a time when all we hear about is the cloud. But is the cloud, as attractive as it is, the end of the road? Difficult to say, when applications keep growing, data keeps flowing in, and IoT keeps adding pressure. Tomorrow belongs to the fast; will the cloud alone be the answer?

Shrinking

If the cloud is not the answer to everything, then the right mix must take place: a mix where data lives on-premises and can flow to the cloud and back. With the size of today's data sets, this is not an easy task, and that's where data efficiency comes in.

Building the right base, the right foundation, is key. Can a 60TB dataset be sent to the cloud and back? No, you would say. Not necessarily, I would argue. We have all heard about compression and deduplication; they come up in every conversation. Why would I keep a large block of 0s and 1s when that exact same data is already present? Better yet, as the cloud ingests data, why ingest the same 1s and 0s over and over again without any intelligence? That's not working intelligently at the edge; that's throwing hardware at the problem.
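To make the point concrete, here is a minimal sketch of fixed-block deduplication (my own illustration, not any vendor's implementation): each block is fingerprinted, and only blocks whose fingerprints have never been seen need to be shipped or stored.

import hashlib

def dedupe_blocks(data, block_size=4096, seen=None):
    """Split a byte stream into fixed-size blocks and keep only the blocks
    whose content hash has not been seen before."""
    seen = set() if seen is None else seen
    unique_blocks = []
    for offset in range(0, len(data), block_size):
        block = data[offset:offset + block_size]
        digest = hashlib.sha256(block).hexdigest()   # fingerprint of the block's content
        if digest not in seen:                       # only ship blocks we have never sent
            seen.add(digest)
            unique_blocks.append((digest, block))
    return unique_blocks, seen

# A repetitive 1 MiB payload collapses to a handful of unique blocks.
payload = b"\x00" * 512 * 1024 + b"\x01" * 512 * 1024
unique, index = dedupe_blocks(payload)
print(f"{len(payload) // 4096} blocks in, {len(unique)} unique blocks out")

The same index of fingerprints is what lets an ingest engine skip the 1s and 0s it already holds instead of moving them again.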

Yes, what I'm talking about is called metadata: a set of data that describes and gives information about other data, a.k.a. data about data. Could that be the solution, then? If metadata comes in, then what is needed to execute on that data and that algorithm?

Before the house comes the foundation

If we expect to build such a big house of data, the foundation is key. Building a solid base that will sustain growth and the "weight" of data is fundamental; whether we think about IoT, ERP or any other need our organizations might have, data size is crucial for "tomorrow". The idea of a very tiny subset of data representing the largest workload is attractive, and while it has lived in storage solutions for a long time, it has now reached us, the software-defined architects of hyper-converged solutions. Software-defined, yes, but hardware-accelerated, please (https://www.simplivity.com/wp-content/uploads/DeepStorage-SimpliVity-Data-Protection.pdf). Why hardware-accelerated? That's a whole new conversation I'll be sharing in the next post, but keep in mind that nothing, NOTHING, is faster than hardware acceleration for processing the most demanding data. Teaming up with the creator of edge virtualization is definitely the most secure path through the convergence of the datacenter we are witnessing every day.

In the end, isn't it about minimizing risk and working with top solutions? I believe so, and I wouldn't put any of my customers at risk by not considering solid and reliable solutions.

Conclusion

When building tomorrow's datacenter, data efficiency should be at the heart of the final decision. IoT and the other applications coming into our datacenters are eating up our options, and throwing more hardware at the problem is not the solution. Think smart, look out for innovative directions, and keep up with the manufacturers innovating day in and day out for a better tomorrow.


Audio Recording #325 – vSphere 6.0 upgrade 

Friends,

What a year! I just can't believe we're in June already and still haven't gotten around to the 6.0 upgrade. I had the opportunity to chat with the folks at VMware about upgrading to 6.0.

It was an open conversation, and it was interesting to see the momentum around that version. As we approach VMworld 2015, my inbox is still full of unanswered questions related to the 6.0 release, and people are still weighing the upgrade path options. Which one is best? What should we do with the PSC? Small questions, but together they are creating a drastic change in the way we need to look at the infrastructure's architecture.

Following the talk show from May 2015 (https://florenttastet.wordpress.com/2015/05/17/audio-recording-310-vsphere-6-0-with-martin-yip-bloggers-from-vmware-communities-roundtable/), here is yesterday's talk show on the vSphere 6.0 upgrade, with Corey from the vExpert community and bloggers, from the 'VMware Communities Roundtable'.
In case the link doesn't work:

http://www.talkshoe.com/resources/talkshoe/images/swf/lastEpisodePlayer.swf?fileUrl=http://recordings.talkshoe.com/TC-19367/TS-980944.mp3


Audio Recording: ‘#310 – vSphere 6.0 with Martin Yip & Bloggers’ From ‘VMware Communities Roundtable’

Folks,

It has been a busy spring this year. Between the new vSphere release, training, certification and study, the blog hasn't been very active.

Today was the opportunity to bring back to the surface the February 2015 podcast with Eric Nielsen (@ericni25): Audio Recording '#310 – vSphere 6.0 with Martin Yip & Bloggers' from the 'VMware Communities Roundtable'.

I'm working on a few posts to come; stay tuned.

Florent Tastet


Hyper-convergence – Part1

Despite all the efforts deployed to strengthen storage arrays, iSCSI connectivity and platforms, the reality is that nothing runs faster than local access, period!

It's hard to compare local bus latency with iSCSI latency.

Had we seen the big shift coming? I believe so. Since vSAN was not something new, I like to believe the industry had planned for it; however, where we saw the huge shift is in the market opportunity, and where, I think, leaders were tired of battling for IOPS.

We'll be talking about the hyper-converged datacenter today, and I'm hoping to cover array-SAN-platform vs. converged in 500 words… hmm…

The Traditional architecture

A few years ago, centralizing data was the perfect way to ensure data was served, and saved, efficiently. The model was solid and efficient. Virtualization came in the early 2000s and created a brand-new need. Speed was the name of the game, and demand kept growing, forcing manufacturers to reinvent themselves to remain top of mind.

We saw a shift from FC to iSCSI, and data tiering across SATA, SAS and SSD rapidly took place in array architectures. The reason? Very simple: speed. VM servers required dedicated throughput, and VDI brought a new challenge where each desktop created bottlenecks at every layer, from the disks to the SAN to the servers to the end-user experience.

It would be hard to describe how many back flips we’ve done to accommodate the requirements, and despite the most efficient architectures, the challenge remained as infrastructures grew. The costs associated with the growth outweighed the benefits.

The Hyper-Converged architecture

While it may sound simplistic, nothing goes faster than local access. When the operating system is installed locally, the experience is there. VMware brought vSAN to life as a way to address low-cost datacenter requirements in the absence of a traditional array. Having servers share their local disks for VM hosting while still being able to vMotion VMs from one host to another remained a strategic advantage for organizations.

However, vSAN at the time had some limitations, which offered new players the opportunity to change the game in the "converged and hybrid IT" battle.

I do believe hyper-convergence is a game changer. Yes, at this point it is meant to address a specific requirement and may have some limitations with big VMs; however, it addresses a need, and that need is well served.

Beyond the fact that the secret sauce remains at the software level, the aggregation of local disk capacity and speed comes at a very sweet moment, when organizations are looking to understand the benefits of virtual desktops. In the long term, organizations will favour remote workers, and providing an efficient, corporate-like environment will require a transparent experience for users.

In the datacenter, the growth of virtual desktops is precisely what a hyper-converged infrastructure addresses. In a tiny datacenter footprint you won't find a full traditional architecture, yet all VMs still have access to pooled storage and compute technologies.

Conclusion

This is just Part 1 of the blog, where we started to see the differences between traditional and hyper-converged architectures. We all know traditional architectures very well, so in Part 2 I will cover the hyper-converged architecture and perform a "deep dive"… in 500 words… or so.

Have a good weekend friends.


Are you functional or Optimal? 

I still remember to this day sitting in a room full of executives, listening to a presentation where the status of a datacenter refresh didn't make much sense to me. It was in 2009, 5 years ago already, and the project was surely not optimal.

When you are part of a $2 million project, a question you don't want to answer is: "Is our investment to date optimal?" But more importantly, you don't want to hear an answer like: "We are functional." It felt like a disconnect between the business and the technology.

I'm a huge supporter of proactive intervention and of knowing what's going on under the hood. Understanding where your performance stands shows where you are and what you need to do to reach the level you expect, or to make sure you're set up the way you deserve to be.

Assessing….

In a virtual environment, measuring performance can be tricky, and you'll need to make sure you really understand what is going on. It's not enough to have the right amount of IOPS, controller buffer, disks or connectivity. Each building block requires its own attention, and this complex architecture needs to be optimal. Functional is not an option.

If you reach the point where you need to assess a project from a technical standpoint and understand where you really are, you'll need to rely on tools like I/O Analyzer.

VMware I/O Analyzer, from the vmware labs, is an integrated framework designed to measure storage performance in a virtual environment and to help diagnose storage performance concerns. I/O Analyzer, an easy-to-deploy virtual appliance, automates storage performance analysis. I/O Analyzer can use Iometer to generate synthetic I/O loads or a trace replay tool to deploy real application workloads. 
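For readers who just want to see what "generating a synthetic I/O load" means in principle, here is a toy sketch of my own. It is nothing like Iometer's or I/O Analyzer's engine, and the OS page cache will flatter the numbers; it only shows the shape of the idea: issue random reads for a while, count operations, and average the latency.

import os, random, time, tempfile

def synthetic_random_read(path, runtime_s=5, block_size=4096):
    """Issue random 4 KB reads against a file for a few seconds and report
    rough IOPS and average latency."""
    size = os.path.getsize(path)
    ops, total_latency = 0, 0.0
    deadline = time.monotonic() + runtime_s
    with open(path, "rb") as f:
        while time.monotonic() < deadline:
            offset = random.randrange(0, max(size - block_size, 1))
            start = time.perf_counter()
            f.seek(offset)
            f.read(block_size)
            total_latency += time.perf_counter() - start
            ops += 1
    print(f"{ops / runtime_s:.0f} IOPS, {total_latency / ops * 1000:.3f} ms avg latency")

# Build a 128 MB scratch file and run a short read test against it.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(128 * 1024 * 1024))
synthetic_random_read(tmp.name, runtime_s=5)
os.remove(tmp.name)

The real tools do far more (multiple workers, write mixes, trace replay, cache bypass), which is exactly why they are worth deploying instead of rolling your own.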

It is by far the best tool I have ever used, due to its simplicity and efficiency at providing the right information that can be used to improve an overall design. More importantly, it is a tool provided by the manufacturer and leader of the software-defined datacenter, and not something we should overlook.

This will get you running and into an intelligent conversation where the numbers speak and the results show precisely where the project stands.

As far as measuring the overall performance of a virtual infrastructure from the compute side of things, many are looking at vSOM. While I trust the tool, if it is integrated into an existing environment it will only report on what is currently running, and it might be difficult to understand what is underperforming. You'll need a strong level of knowledge of virtual environment operations to adjust the thresholds to where they will speak to you and your lines of business.

An interesting output from vSOM is the capacity report or "health check" that I have always particularly liked. It provides insights into best practices that should be implemented. You will have to read the report and take your own environment into consideration, of course, but essentially it will provide you with best practices, and we all love best practices, as they define the foundation of the environment and bring the groundwork that lets you consider your environment optimal.

Always refer to the following for compute performance best practices:

http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf

http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-55-monitoring-performance-guide.pdf

http://communities.vmware.com/servlet/JiveServlet/downloadBody/23094-102-2-30667/vsphere5x-perfts-vcops.pdf

Designing

It may be a question of planning, design, architecture or simply calculation. Regardless of the reasons you are where you are, the fundamentals of storage will have a huge impact on your designs, and ignoring them will translate into a disconnect between the business and the technology decisions when assessing the project.

Design is a delicate balance between performance and capacity, and it has a large impact in virtual environments. A design choice is usually a question of:

How many IOPS can I achieve with a given number of disks?
• Total Raw IOPS = Disk IOPS * Number of disks
• Functional IOPS = (Raw IOPS * Write%) / (RAID Penalty) + (Raw IOPS * Read%)

How many disks are required to achieve a given IOPS target?
• Disks Required = ((Read IOPS) + (Write IOPS * RAID Penalty)) / Disk IOPS
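Here is a minimal sketch of those two formulas in Python; the disk count, IOPS-per-disk value and RAID penalty in the example are assumptions for illustration only.

import math

def functional_iops(disk_iops, num_disks, write_pct, raid_penalty):
    """Usable IOPS once the RAID write penalty is applied."""
    raw_iops = disk_iops * num_disks
    read_pct = 1.0 - write_pct
    return (raw_iops * write_pct) / raid_penalty + raw_iops * read_pct

def disks_required(read_iops, write_iops, raid_penalty, disk_iops):
    """Number of disks needed to deliver a target read/write IOPS mix."""
    return math.ceil((read_iops + write_iops * raid_penalty) / disk_iops)

# Example: 24 x 10k SAS disks (~140 IOPS each), 30% writes, RAID-5 penalty of 4
print(functional_iops(140, 24, 0.30, 4))   # ~2604 usable IOPS
print(disks_required(2000, 800, 4, 140))   # ~38 disks for 2000 read / 800 write IOPS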

You may also want to refer to the following URLs for storage best practices:

http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-55-monitoring-performance-guide.pdf

http://communities.vmware.com/servlet/JiveServlet/downloadBody/23094-102-2-30667/vsphere5x-perfts-vcops.pdf

Getting ready for Cloud Computing

I understand the various challenges that exist in adopting a new technology. I mean, we can innovate… and we can be innovative. I believe that cloud computing is essentially a question of culture, and definitely a strategy that will be part of our everyday lives for the next couple of years.

But you won't be able to have that conversation if your internal datacenter is not optimal. Functional will typically translate into failure when moving to the cloud. Being optimal will ensure every aspect of your infrastructure is well measured and responds properly to workloads once they are out of your premises.

Whether you're looking at Amazon, vCHS, SoftLayer, Azure or any other local cloud provider, they all provide the right environment for your workload to be optimally responsive. The only failure that can happen is your workload itself causing disruption in the cloud ecosystem, turning savings into costs.

In a recent project I witnessed a hybrid cloud approach where non-mission-critical workloads were hosted outside the main premises, while the mission-critical ones stayed internal.

To that I say: why not?


Yesterday was siloed. Today is integrated.

I recently received a plain and simple question from an acquaintance: "What do I need to create a cloud?"

Behind this simple question lies a constellation of answers and, in fact, far more business-related questions than simply lining up a list of items. Remember that "cloud" equals "service", and therefore the underlying components should not be the focus area.

When you look at building a “cloud like” datacenter, two avenues exist: converged and hyper-converged.

You always need to define your unique needs when it comes to the services you want to deliver to an organization, and typically they get defined by the objectives expressed at the corporate and lines-of-business levels.

An evolution

It has been an amazing journey in the datacenter. Driven by cost reduction, we have witnessed the evolution from the typical silo approach, where dedicated resources focused on making the storage, network and compute layers shine individually, to a hypervisor-based convergence, where service levels drive the consumer (a.k.a. user, client, etc.) experience, abstracting the backend infrastructure in the process.

While hypervisor-based convergence led front-end strategies for many years, the industry quickly realized that the backend required a review to keep up with increased SLAs and highly available applications, and so entered the well-known "reference architectures" and the "converged datacenter".

Having the backend hardware precisely aligned, making sure every aspect works together perfectly and is supported by the manufacturers, enabled the front-end to benefit from highly reliable foundations and enhanced the services delivered to the consumer community.

Still, the need to shorten time to market required an innovative approach from manufacturers. If you haven't heard about it, it's called "hyper-converged".

Converged infrastructure

There are two approaches to building a converged infrastructure: the building-block approach or the reference architecture. The first one, the building-block approach (VCE, HP), involves fully configured systems, including servers, storage, networking and the virtualization layer, installed in a large chassis as a single building block.

The infrastructure is expanded by adding additional building blocks.

While one of the main arguments in favour of a converged infrastructure is that it comes pre-configured, that same trait also plays against it.

From an architectural point of view, all aspects are pre-configured and allow a smooth and easy roll-out. If your needs differ from the proposed predefined solution, you're essentially out of luck. The same applies to the components themselves: each is selected and configured by the manufacturer, and choosing a different component would either not be supported or simply not be functional.

The last aspect is patching. The building-block approach forces updates to be deployed on the vendor's timetable rather than the user's.

It is possible to build a converged infrastructure without using the building-block approach. The second approach is to use a reference architecture, such as VSPEX or FlexPod, which allows the company to use existing (or new) hardware to build the equivalent of a pre-configured converged system (you still need to remain aligned with the reference architectures provided by the manufacturers concerned).

Hyper-Converged infrastructure

In a non-converged architecture, physical servers run a hypervisor, which manages virtual machines (VMs) created on that server. The data storage for those physical and virtual machines is provided by direct attached storage (DAS), network attached storage (NAS) or a storage area network (SAN).

In a converged architecture, the storage is packaged and attached "directly" to the physical servers. A precise capacity alignment between flash (SSD), SAS, and NL-SAS or SATA is leveraged to service the virtual machines according to their needs.

On the flip side, a hyper-converged infrastructure has the storage controller function running as a service on each node in the cluster to improve scalability and resilience. Even VMware is getting into it, and it was interesting to see this giant trying to catch up in a market it had not created. The company's new reference architecture, called EVO, is a hyper-converged offering designed to compete with companies such as Nutanix, SimpliVity or NIMBOXX.

The two systems from VMware, EVO:RAIL and EVO:RACK, were announced at VMworld 2014 in August. It's an interesting alignment, as previously VMware was active only in the converged infrastructure market through the VCE partnership.

If we look at Nutanix, the storage controller logic, which is normally part of the SAN hardware, becomes a software service attached to each host at the hypervisor level. The software-defined storage takes all the local storage across the cluster and configures it as a single storage pool.

Data that needs to be kept local for the fastest response could be stored locally, while data that is used less frequently can be stored on one of the servers that might have spare capacity.
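As a rough illustration of that pooling-plus-locality idea, here is my own simplified sketch, not any vendor's actual placement algorithm: every node contributes its local capacity to one pool, hot data is placed on the node running the VM when there is room, and colder data lands wherever spare capacity is largest.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity_gb: int
    used_gb: int = 0

    @property
    def free_gb(self):
        return self.capacity_gb - self.used_gb

class ClusterPool:
    """All node-local disks presented as one pool, with locality-first placement."""
    def __init__(self, nodes):
        self.nodes = {n.name: n for n in nodes}

    def place(self, size_gb, preferred_node, hot=True):
        # Hot data goes to the node running the VM when it has room...
        local = self.nodes[preferred_node]
        if hot and local.free_gb >= size_gb:
            target = local
        else:
            # ...otherwise (or for cold data) pick the node with the most spare capacity.
            target = max(self.nodes.values(), key=lambda n: n.free_gb)
        target.used_gb += size_gb
        return target.name

pool = ClusterPool([Node("esx01", 2000), Node("esx02", 2000), Node("esx03", 2000)])
print(pool.place(500, preferred_node="esx01", hot=True))   # lands on esx01
print(pool.place(800, preferred_node="esx01", hot=False))  # lands wherever space is largest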

Conclusion

As we move to a highly automated datacenter, or "software-defined datacenter", datacenter solutions are becoming less and less complex to manage, which pleases buyers and decision makers.

If you're in a tech refresh, it might be time to evaluate your options and ensure the next phase of your strategy relies on a best-of-breed approach that will sustain you for the next 3-5 years.

Now, converged or hyper-converged? The decision has never been tougher to make. Should you stay with the traditional approach? Maybe, unless your operational expenditures are shrinking.

The bottom line is to evaluate alternatives that allow ease of management, because at the end of the day we surely don't want valuable resources and talented people hands-on with hardware while the business is urging them to focus on revenue growth and short time to market.


What to get ready in your vSphere Cluster Design – Part 3

VMworld was surely filled with lots of announcements. The initial feedback is good, and I believe the best is yet to come.

I am still looking at some of the announcements, especially on the storage side of things and EVO:RAIL, which seems pretty compelling for what has lately been called "hyper-converged infrastructure".

I'll write a blog on the topic once I complete all my reviews. It will surely allow us to exchange a little on the new technology enhancements in the datacenter operating system, a.k.a. virtualization.

For the moment, I'll complete the series of three blogs started a few weeks ago, entitled "What to get ready in your vSphere cluster design – Part 1" and "What to get ready in your vSphere cluster design – Part 2".

On the menu: Permanent Device Loss prevention, HA heartbeat, DPM & WoL, and vMotion bandwidth.

As we build stronger and more reliable vSphere clusters, many aspects need to be deeply understood. And when I say deeply understood, I don't mean only knowing what a feature does and how it does it, but knowing how it will sustain the objectives set, because that is really why acquisitions are made in the first place, right? Right!

Permanent Device Loss Prevention

vSphere 5 introduced Permanent Device Loss (PDL), which improved on how the loss of individual storage devices was handled by All Paths Down (APD) by providing a much more granular understanding of the condition and a reaction that best fits what is actually being experienced. Wow, that was a mouthful… LOL

When a storage device becomes permanently unavailable to your hosts, a PDL condition is triggered through the SCSI sense code the array returns (for example, "target unavailable"). This can happen when a device is unintentionally removed, when its unique ID changes, or when the device experiences an unrecoverable hardware error.

When deploying a cluster environment, make sure to enable permanent device loss detection on all hosts in the cluster to ensure a VM is killed during a PDL condition. You may also want to enable the HA advanced setting "das.maskCleanShutdownEnabled" to make sure a killed VM will be restarted on another host after a PDL condition has occurred. Think about transactional VMs, for example…

HA HeartBeat … in Metro-Clusters

One of the fundamentals of a cluster is the heartbeat, as it keeps the state of the nodes updated. For a new host joining an existing cluster, heartbeating is enabled by uploading an agent onto the host; the agents on all hosts then communicate with each other every second. After 15 seconds of missed heartbeats, the host is considered down.
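Purely to illustrate the timing, here is a tiny sketch of that interval-and-timeout logic; it is a toy model of my own, not how the HA agent is actually implemented.

import time

HEARTBEAT_INTERVAL_S = 1    # agents exchange heartbeats every second
FAILURE_TIMEOUT_S = 15      # no heartbeat for 15 seconds => host declared down

class HeartbeatMonitor:
    """Track the last heartbeat seen from each host and flag stale ones."""
    def __init__(self):
        self.last_seen = {}

    def record(self, host):
        self.last_seen[host] = time.monotonic()

    def down_hosts(self):
        now = time.monotonic()
        return [h for h, t in self.last_seen.items()
                if now - t > FAILURE_TIMEOUT_S]

monitor = HeartbeatMonitor()
monitor.record("esx01")      # heartbeat received from esx01
print(monitor.down_hosts())  # [] until 15 seconds pass without another heartbeat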

Locally, it's easy to manage. For dispersed architectures it's a little more challenging (from a VMware standpoint of course, e.g. vPLEX…). Latency works against some of the best practices, and therefore datastore heartbeating is often leveraged to alleviate challenging network conditions.

For metro-clusters (geographically dispersed clusters), I have always made sure the number of vSphere HA heartbeat datastores is set to a minimum of four, as some of the most influential bloggers have evangelized over and over. Manually select site-local datastores, two for each site, to maintain heartbeating even when the sites are isolated.

DPM & WoL (Wake-on-Lan)

We all know DPM (Distributed Power Management, http://www.vmware.com/files/pdf/Distributed-Power-Management-vSphere.pdf), yet few are using it, primarily because the majority of today's datacenters in Canada (and I am not talking about cloud providers here) are not yet concerned by the latest legislation around power consumption (but that will change).

A few conditions are required, though. First, each host's vMotion networking link must be working correctly (obvious, no?). The vMotion network should also be a single IP subnet, not multiple subnets separated by routers (Layer 2, guys, Layer 2)…

Maybe obvious but too often overlooked: the vMotion NIC on each host must support WoL. Just as important, the switch port that each WoL-supporting vMotion NIC is plugged into should be set to auto-negotiate the link speed, not a fixed speed (for example, 1000 Mb/s). Many NICs support WoL only if they can switch to 100 Mb/s or less when the host is powered off… Keep it in mind.

Overall, if using DPM with WoL, remember that hosts are contacted on their vMotion interfaces, so the NICs associated with vMotion must support WoL and must be part of the same Layer 2 domain.

vMotion bandwidth

I often get that question from many, many people: what should I consider for my vMotion bandwidth? We have a tendency to assume bigger is better. When you have that option it surely won't hurt, but the requirements behind 10 Gbps go far beyond the speed; the cabling, the NICs involved and the L2 switches all have to be considered, which usually drives costs up when in fact the requirements are not there.

Yes, bandwidth is highly important and we should pay close attention to it. By providing enough bandwidth, the cluster can reach a balanced state more quickly, resulting in better resource allocation (performance) for the VMs, therefore providing better VM density and a higher ROI on hardware investments.

However, before jumping on the big boy, consider link aggregation and the overall cluster design. My take is that if you forecast the cluster adequately at the hardware level, ensure all hosts are well architected, and give every VM the appropriate amount of resources it requires to function, vMotion traffic will fit smoothly within a 1 Gbps network.
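As a back-of-the-envelope check, you can estimate how long moving a VM's memory would take at a given link speed. This is my own rough model: it ignores page dirtying, multi-NIC vMotion and protocol details, and the efficiency factor is a guess, not a measured value.

def vmotion_transfer_estimate_s(vm_memory_gb, link_gbps, efficiency=0.8):
    """Rough time to copy a VM's memory over the vMotion network."""
    memory_gbit = vm_memory_gb * 8
    return memory_gbit / (link_gbps * efficiency)

for link in (1, 10):
    print(f"16 GB VM over {link} Gbps: ~{vmotion_transfer_estimate_s(16, link):.0f} s")
# 16 GB VM over 1 Gbps: ~160 s
# 16 GB VM over 10 Gbps: ~16 s

If a couple of minutes per migration is acceptable for the handful of moves a well-planned cluster actually needs, the 1 Gbps design holds; the math only argues for 10 Gbps when migrations become frequent or VMs become very large.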

So bandwidth is important, but it should not be used to mask an architectural dysfunction. vMotion is cool, but moving transactional VMs around too often is not recommended. Aim to plan the cluster appropriately and, IF REQUIRED, leverage vMotion for maintenance, manual cluster balancing and DRS requirements; it should not be considered an operational tool that VMs depend on. It is a "feature", not a "function".

Don't forget: a VM performs best when it is well configured, on a well-equipped host, and moved around as little as possible.

Conclusion

This blog concludes the series of three posts on "What to get ready in your vSphere cluster design". Little details here and there will make the overall difference in the budget you plan for your next fiscal year. I believe in providing the right information to decision makers to help them maximize the return on the investments originally made when the virtualization technology was acquired.

I keep stressing that vCOps should ALWAYS be considered, and should be implemented quickly. If you're looking for insights into your environments it's a must; rather than planning new capex, focus on a higher-level conversation and think like an investor.

If you paid $X in the past for something you wanted, chances are you want to squeeze every bit out of it before considering an upgrade or a change. I see too many unbalanced clusters because some of the fundamentals are poorly implemented or wrongly leveraged.

Rely on someone you trust to help you design and brainstorm your environment. The more the better, since the winner in the end will be your organization. I strongly believe it is a fundamental exercise we should ALL do when the time comes to evaluate where we should invest next.

I hope this helped a little, and I am looking forward to all your feedback.

Happy weekend friends!
