Are we failing the true business needs? Part 4

The Platform Pillar

This is the fourth blog in the “Are we failing the true business needs” series. What we have seen so far is that many conversations are not precisely focused on the end user computing side of IT, while in fact end users are, through the consumerization of IT, directing the changes we are seeing in the datacenter more than ever.

While in the past IT directed how the end user community needed to work, those same users have forced IT to rethink and reshape the way it delivers its services, helping IT in the process to shift from a cost centre to a strategic enabler of the lines of business. We should be thankful to them.

And the work is still in progress. As the datacenter gets redefined through technology, today’s organizations, from small businesses to enterprises, have access to the same benefits from the information technology world. The ultimate goal is to provide access to applications faster, enabling users to become innovative and creative sooner, ultimately making them contributors to the organization in a shorter period of time.

We have seen that the datacenter operating system, a.k.a. virtualization, led by VMware and closely followed by Microsoft, is unlocking the agility of application provisioning and consumption, ensuring in the process that business continuity and disaster recovery are no longer a concern.

While the datacenter operating system makes applications more reliable, the storage foundation underneath is also getting aligned with the virtualization layer through a set of data management capabilities, resulting ultimately in a faster and more secure way to access the data consumed and created by the end users.

What remains in the equation is the compute power, or better said, the platforms, a.k.a. the servers.

The compute power

Let’s set the table. None of the manufacturers have access to a technology from another planet.

What I mean by that is that all processors and memory chipsets come from the same sources, and while each platform vendor may leverage the technologies differently, fundamentally a processor is a processor, and a memory stick is a memory stick… oops, I said it.

In the server world we live in, and as far as the x86 business is concerned, two major players have always been around: Intel and AMD. Luckily for us, the low-level engineering behind them hasn’t changed. The wheel is still round, and it still turns in circles.

What has truly changed is how the wheel looks and how it enables data to be computed. It might sound simplistic, but this is what we are dealing with, which has probably supported our feeling that compute power is a commodity more than anything else.

And this is exactly where we are all mistaken.

Agreed, a server is a box with a processor, RAM, buses and a set of peripherals connecting it to the rest of the world. Granted! But platforms are far more than that in my eyes. They are the key element that makes the data we consume consumable. Through a complex interpretation of 1s and 0s, they translate the data we are creating or consuming into human language. For that reason, I don’t consider the platform to be a commodity.

Often I think about us, humans, and how often we forget how important our brain is. Concretely, we are able to quantify the outcomes of our hands, legs, feet, eyes, ears, etc., but we so often forget that without the central processing unit, our brain, none of these would be real, none of them would make sense, and none of them would be orchestrated and driven to a successful outcome: interaction.

Processors are often the unsung heroes of the modern era. Each year we benefit from faster and more efficient performance, which improves not just computing but numerous industries, and keeps unlocking the ultimate feeling we have about technology and its responsiveness.

The differentiators came with cores, and how processors were able to have, within the same chipset, various cycles, or better said, were able to segregate compute cycles into smaller waves of calculations. With today’s 4, 6, 8, 10 and 12 cores, what we need to see is the continuous need for parallel processing to address the most demanding requirements from the most demanding applications used by the most demanding customers: the end users.

In a very simplistic way, and please don’t hold it against me, each core is one “CPU”. So a dual-core processor has the power of two processors in one. A quad core has the power of four processors running in one processor, and so on with 6 and 8 cores. Just think of a quad core as having the power of four computers in one little chip.
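The “smaller waves of calculations” idea above can be sketched in a few lines of Python. This is a toy illustration only, not a benchmark: one large sum is split into chunks and each chunk is handed to a separate worker process, the way a quad core can chew on four waves at once. The function names and the chunking scheme are my own invention for the example.

```python
# Toy sketch: split one big calculation into "waves" and hand each wave
# to a separate process (ideally one per core).
import multiprocessing as mp

def partial_sum(bounds):
    """Sum the integers in [start, stop) -- one 'wave' of the calculation."""
    start, stop = bounds
    return sum(range(start, stop))

def parallel_sum(n, workers=4):
    """Split summing 0..n-1 across `workers` processes."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with mp.Pool(workers) as pool:
        # Each chunk is computed independently, then the results are merged.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 1_000_000
    # Same answer as a serial sum of 0..n-1, just spread over the cores.
    assert parallel_sum(n) == n * (n - 1) // 2
```

The point is simply that the work divides cleanly: more cores, more waves in flight at the same time, which is exactly what the most demanding applications keep asking for.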

Businesses are relying on faster calculations, faster access to stored data and ultimately faster computation of that data for a faster outcome: profitability.

Fast and faster are two terms we are familiar with in the platform industry, and they are what differentiates one vendor from another. So many flavours exist out there that we are fortunate to pick and choose what makes the most sense for our needs, or for our end users’ expectations and our organization’s growth. I like to say that while the datacenter operating system and storage are key elements of a balanced IT conversation, neither would play such a strategic supporting role if the compute power were not able to follow.

Commodity? I disagree. Valuable resource? Sounds much better.

What differentiates one from another

What makes the difference between one vendor and another is the ease of management, and the integration into our datacenter’s strategy. We are seeing huge momentum in the datacenter through the complete Software Defined DataCenter, and we are privileged to witness the platform outperforming the most demanding requirements. That is not something to forget!

Now the true challenge is the integration alignment and how it will make our lives easier.

Why? Well, simply because we want to be focused on results that benefit our organizations: delivering new projects, and unlocking invaluable time and financial capacity that we can reinject into the growth of our organization through new projects, hence new profitability opportunities.

Often we do not think about quantifying the time and dollars attached to our day-to-day, but our beloved upper management, a.k.a. the C-level, does. Being able to invest in growth instead of “fixing” is more appealing to any leader and investor, and we surely want to be a key element, through tactical activities, of those savings.

The rackmount format is the most widely available. At a few “U”s high each, it offers a cost-effective form of compute power. In its form, and at a very simple level, the rackmount is a single player. What aggregates rackmounts together is the hosted operating system, through clustering technologies. A rackmount may play a crucial role in a farm architecture, but it still causes challenges when the time comes to manage it, and you can easily extend that thought to the network requirements, through ports and cables.

In the case of a virtual farm, the “101” rule applies: make sure all servers have the same components inside, the same processor family and, ideally, identical RAM quantities. Rackmounts are the simplest, yet most cost-effective, form of compute power enablement, and are typically seen as a foot in the door for a starting project and a larger strategic alignment in the datacenter.
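That “101” rule is easy to turn into a pre-flight check. Here is a minimal sketch, assuming each host in the farm is described by a simple record; the host names, fields and thresholds are invented for illustration, not taken from any vendor’s inventory API.

```python
# Minimal sketch of the virtual-farm "101" rule: before clustering
# rackmounts, verify every host reports the same processor family
# and, ideally, the same RAM quantity.

def check_farm_homogeneity(hosts):
    """Return a list of warnings for hosts that break cluster homogeneity."""
    warnings = []
    cpu_families = {h["cpu_family"] for h in hosts}
    ram_sizes = {h["ram_gb"] for h in hosts}
    if len(cpu_families) > 1:
        warnings.append(f"Mixed processor families: {sorted(cpu_families)}")
    if len(ram_sizes) > 1:
        warnings.append(f"Uneven RAM across hosts: {sorted(ram_sizes)} GB")
    return warnings

farm = [
    {"name": "esx01", "cpu_family": "Xeon E5", "ram_gb": 256},
    {"name": "esx02", "cpu_family": "Xeon E5", "ram_gb": 256},
    {"name": "esx03", "cpu_family": "Xeon E5", "ram_gb": 128},  # odd one out
]
for warning in check_farm_homogeneity(farm):
    print("WARNING:", warning)
```

A mixed farm does not necessarily refuse to cluster, but uneven hosts make workload mobility and capacity planning harder, which is exactly what the rule is trying to prevent.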

Where this format hurts is when you reach a high number of servers. While there are tons of solutions and alternatives to effectively manage such a deployment, fundamentally they are not typically integrated with the architecture deployed, and are too often a glued-on solution to help alleviate the larger challenges of management and efficiency.

Truth is that rack-mounted servers will always be very compelling in the short run. The costs align well with tight budgets, but they come at a discreet price: OpEx.

The long term might impact the time it takes for you to deliver compute power that is as agile as what you have been deploying in your datacenter around the datacenter operating system or the storage pillar. Business growth and forecasts will help align what best fits the needs, but ultimately keep in the back of your mind that managing a large number of individual servers is far more challenging in the long term than the short-term financial relief is rewarding; injecting a form of aggregated compute power solution (read: blades) may serve you better.

Fundamentally, I like to remain aligned with the rest of the datacenter’s architecture. Balancing investments and remaining true to my needs is far more compelling than saving dollars in the short term. Chances are I will remain locked into this decision in the future, and knowing (or not) what’s coming might alter the way I see server investments. But I am not a CFO, nor a CEO.

The blade format is far more integrated and far more efficiently manageable, if I can put it that way. The challenge was to manage this amount of compute power, delivered through an aggregation of platforms in a very small form factor. The platform leaders have deployed a tremendous amount of energy to make it simple and acceptable in any organization. When you think about it, in a 19 to 40 U format you’re able to fit between 8 and 16 servers. Compelling!

The management of blade infrastructures has improved incredibly over the years. Simply put, we didn’t have time to individually manage each server; the truth is that the complexity of such an architecture required the right tools to enable efficient management of an aggregated compute power architecture, and the true agility to move workloads from one server to another, transparently to the application and the end user, without relying on the hosted operating system.

So enabling ease of management through a centralized management console helped define a much more agile IT service, able to efficiently address growing demands while maximizing the time and effort spent on them.

The commodity comes with the ease of management, and true to a certain point, we don’t want to be disrupted by the management attention that servers require.

If that’s how we position “commodity”, I’m OK with that.

On the flip side, end users are continuously demanding more processing power, and having a very light and agile architecture definitely helps meet those demands, quickly. Having a very agile compute power strategy that aligns with the underlying investments (read: storage and virtualization) should be directing our thoughts when taking a direction, one that will eventually result in a very balanced architecture that everyone can benefit from.

Blade management should allow a central, unique point of management. Having to individually manage each chassis might just transfer the challenges of rackmount management into the blade architecture.
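The difference between per-chassis management and a single pane of glass can be sketched with a couple of toy classes. These are hypothetical stand-ins for a vendor’s management API, not a real one; the class and enclosure names are made up. The point is that one inventory call fans out across every registered chassis, instead of an administrator logging into each enclosure one by one.

```python
# Sketch of centralized blade management: many chassis, one console.

class Chassis:
    """A blade enclosure holding some number of blade servers."""
    def __init__(self, name, blades):
        self.name = name
        self.blades = blades  # list of blade server names

class CentralConsole:
    """Aggregates many chassis behind one management endpoint."""
    def __init__(self):
        self.chassis = []

    def register(self, chassis):
        self.chassis.append(chassis)

    def inventory(self):
        # One query answers "what compute do I have?" across the whole farm.
        return {c.name: list(c.blades) for c in self.chassis}

console = CentralConsole()
console.register(Chassis("enclosure-a", ["bl01", "bl02"]))
console.register(Chassis("enclosure-b", ["bl03"]))
```

Without the console layer, every question about the farm becomes a per-chassis task, which is exactly the rackmount management problem wearing a blade suit.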


We are seeing the end users forcing a shift on the compute power side of the datacenter’s equation.

While inside the box they all offer the same flavours, the true differentiator between one vendor and another is the management attached to it. We have worked so hard to bring agility to the datacenter, and staying focussed on that aspect is a crucial element of any decision.

Whether or not you are making a decision, or considering a change or an improvement, some very simple and key elements need answers:

  • Fact is that end users will not stop creating demand.
  • Fact is that data will need to be computed and accessed faster.
  • Fact is that data compute should not be a bottleneck, and should flow flawlessly with the rest of your architecture.

Addressing these key questions will eventually lead you towards the right solution, the one that best fits your requirements.

Whether you consider rackmounts or blades, you should not be afraid of either the technology or its cost. Focus on the long-term compute requirements, and if you’re overwhelmed by digital content, ensure that your compute companion will be an enabler.


About florenttastet

As an IT professional and leader, my objective is to help an organization grow its IT department with new and innovative technologies, keeping production at its most efficient level and ensuring the right alignment in the deployment of such technologies; delivered through precise professional services, this results in an extraordinary experience for the customer. As a team member on multiple projects, I have developed a strict work ethic, superior communication skills, and the ability to multi-task and meet precise deadlines. As an IT veteran with a broad background in consulting, management, strategy, sales and business development, I have developed deep expertise in virtualization using VMware and Citrix products, with a strong skillset in storage arrays (HP, EMC, NetApp, Nimble & IBM). I have also developed a security practice around Check Point NGX R65-R66 (CCSA obtained) and the Cisco PIX-ASA product line. Specialties: Microsoft infrastructure products; monitoring (HPOV, SCOM, CiscoWorks); firewalls (Check Point, PIX and ASA); virtualization with VMware (ESX through vSphere & View/Horizon), Microsoft (Hyper-V Server, VDI and App-V) and Citrix (XenServer, XenDesktop, XenApp); storage (EMC, HP, NetApp, Nimble & IBM); reference architectures and converged datacenters (VSPEX, FlexPod, Vblock, PureFlex & HP Matrix).
This entry was posted in blades, compute power, memory, platform, processors, rackmounts, SDDC, Server virtualization, vmware. Bookmark the permalink.

One Response to Are we failing the true business needs? Part 4

  1. Pingback: Are we failing the true business needs? Part 5 | A datacenter journey
