I’ve been surfing the HPE wave since 2015. And “wave” is a small word, call it a tsunami! HPE has deployed all its strengths to sharpen its focus. It’s no exaggeration to say that HPE has completely transformed itself and is looking forward. Think about it: #Aruba, #SimpliVity, #NimbleStorage, #OneView, #Synergy, #Niara, #CloudCruiser and so much more.
If that isn’t Hybrid-ready and Cloud-friendly, I don’t know what is 🙂
Not that cloud isn’t a destination; it has its place and fits well in many lines of business, but it can’t be the sole strategy of an agile, fast-growing IT journey.
Think differently before it’s too late
The biggest challenge we all face is growth and data efficiency. One would have to be blind not to admit that everything we do is about increasing our organizations’ profitability by bringing innovative solutions to business challenges. I recently came across a Gartner article (http://blogs.gartner.com/thomas_bittman/2017/03/06/the-edge-will-eat-the-cloud/) that is revealing at a time when all we hear about is the cloud. But is the cloud, attractive as it is, the end of the road? Difficult to say, when applications keep growing, data keeps flowing in, and IoT keeps adding pressure. Tomorrow belongs to the fast; will the cloud alone be the answer?
If the cloud is not the answer to everything, then the right mix must take its place: a mix where data lives on premises and can flow to the cloud and back. Given the size of today’s data sets, this is no easy task, and that is where data efficiency comes in.
Building the right base, the right foundation, is key. Can a 60TB dataset be sent to the cloud and back? No, you would say. Not necessarily, I would argue. We have all heard about compression and deduplication; they come up in every conversation. Why would I keep a large block of 0s and 1s when that exact same dataset is already present? Or better, as the cloud ingests data, why ingest the same 1s and 0s over and over again without intelligence? That is not called working intelligently at the edge; that is called throwing hardware at the problem.
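The idea above can be sketched in a few lines. This is a minimal illustration of block-level deduplication at ingest, not any vendor’s implementation: the block size, the `dedup_ingest` function and the in-memory `store` are all my own assumptions for the example.

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical fixed block size; real products tune this carefully


def dedup_ingest(data: bytes, store: dict) -> int:
    """Split data into fixed-size blocks and keep only blocks not already
    in the store (keyed by SHA-256). Returns the bytes actually ingested."""
    ingested = 0
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block  # new block: store (or send) it once
            ingested += len(block)
    return ingested


store = {}
payload = b"\x00" * BLOCK_SIZE * 100  # 100 identical zero-filled blocks
sent = dedup_ingest(payload, store)
print(sent)  # 4096 -- one block ingested instead of 100
```

One hundred logical blocks of zeros reduce to a single stored block; everything after the first is recognized by its hash and skipped.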
Yes, what I’m talking about is called metadata: a set of data that describes and gives information about other data, a.k.a. data about data. Could that be the solution, then? If metadata comes in, what is needed to execute on that data and those algorithms?
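To make “data about data” concrete, here is a hedged sketch of how a tiny metadata “recipe”, an ordered list of block hashes, can stand in for a much larger dataset and rebuild it on demand. The function names and the dict-based store are illustrative assumptions, not a real product’s API.

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical fixed block size for the example


def write_with_metadata(data: bytes, store: dict) -> list:
    """Store unique blocks and return the metadata 'recipe':
    an ordered list of block hashes describing the original data."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # keep each unique block once
        recipe.append(digest)            # the metadata is just hashes
    return recipe


def read_from_metadata(recipe: list, store: dict) -> bytes:
    """Rebuild the full dataset from the tiny metadata recipe."""
    return b"".join(store[d] for d in recipe)


store = {}
original = (b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE) * 50  # 100 logical blocks
recipe = write_with_metadata(original, store)
assert read_from_metadata(recipe, store) == original
print(len(store))  # 2 -- two unique blocks back 100 logical ones
```

The recipe is orders of magnitude smaller than the data it describes, which is exactly why shipping metadata instead of raw blocks changes what is feasible between the edge and the cloud.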
Before the house come the foundations
If we expect to build such a big house of data, the foundation is key; building a solid base that will sustain growth and the ”weight” of data is fundamental. Whether we think about IoT, ERP or any other need our organizations might have, data size is crucial for ”tomorrow”. The idea of having a very tiny subset of data representing the largest workload is attractive, and while it has existed in storage solutions for a long time, it has now reached us, the software-defined architects of hyperconverged solutions; software-defined, yes, but hardware-accelerated, please (https://www.simplivity.com/wp-content/uploads/DeepStorage-SimpliVity-Data-Protection.pdf). Why hardware-accelerated? That is a whole new conversation I’ll be sharing in the next post, but keep in mind that nothing, NOTHING, is faster than hardware acceleration for computing the most demanding data. Teaming up with the creator of edge virtualization is definitely the most secure path through the convergence of the datacenter we are witnessing every day.
In the end, isn’t it about minimizing risk and working with top solutions? I believe so, and I wouldn’t put any of my customers at risk by not considering solid and reliable solutions.
When building tomorrow’s datacenter, data efficiency should be at the heart of the final decision. IoT and other applications coming into our datacenters are eating up our options, and throwing more hardware at the problem is not the solution. Think smart, look out for innovative directions, and keep up with manufacturers who innovate day in and day out for a better tomorrow.