If the answer is yes, then your backups are probably fine and well executed; you have certainly tested a restore lately on a subset of your data, and it worked like a charm: flawless data restore, applications back up and running. In a nutshell: paradise.
Take a picture of yourself: you’re a genius and part of a select group of people!
For the rest of us, it will happen; we can’t avoid it. One day we will (or already did) experience data loss. It could be a simple PC breakdown, a hard-drive crash, or something much worse. If you neglect to protect mission-critical applications and data, even a tiny disaster can have a devastating impact on an organization.
Figures on the financial impact of data loss circulate daily on the web, and one question is always asked and rarely answered: how much money does your business stand to lose for every hour or day your workers sit idle, unable to deliver products or services to customers?
According to a recent study, the average loss per hour is $12,500 for an SMB and up to $60,000 per hour for a medium-sized enterprise. Rather than finding out through bitter experience what a data loss will cost, now is the time to put in place a backup plan and solution that will keep data and applications safe and ready to restore whenever needed.
- 40-60% of SMBs never re-open after a data disaster
- If a small business can’t resume operations within 10 days following a natural disaster, it probably won’t survive
- Only 23% of businesses back up daily
- In the past 12 months, the typical SMB experienced 6 computer outages
- The median cost of downtime for an SMB is $12,000 per day after a data loss disaster
These are some statistics from a recent business survey. It also pointed out that less than 20% of small and medium-sized enterprises (SMEs) back up all of their data, and that 88% of businesses have lost critical data within the last two years. And that does not count the businesses that take backups but have never tried to restore anything from them, and so have no idea whether those backups are actually valid until an audit, or a restore request outside their internal backup policy, puts them to the test.
Today I’ll be talking about the basic principles of backups, and the options available, since virtualized environments are making our every day a living dream.
Data Backup Principles
Don’t be surprised by what you’re about to read. It seems too simple, but I keep seeing it not applied, over and over, whenever I walk into a datacenter. To reduce the risk of losing critical business information (and money), complete protection of all important files and mission-critical applications is mandatory. Among the causes that could affect an organization’s health (and your family weekends or evenings; in one word, your life), natural disasters, system crashes, theft, and cyber attacks can all lead to data and financial crisis.
So be prepared for the worst: if it happens you’ll be ready; if it doesn’t… well, it doesn’t, but you’ll surely sleep better at night. And to be prepared, there’s no need to have a Master’s degree in disaster recovery; you need to follow two simple basic principles to keep your data safe, your business running, and yourself sleeping…
Principle #1: Back up ALL of your data
It might sound like an old broken record, but effective disaster recovery is based on backing up ALL of the data… and I say “ALL” of the data, meaning don’t settle for a backup that completed at 98 or 99%. Unfortunately, due to the inability to keep up with the speed of business, it does happen from time to time that pieces of information are missed.
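As a sanity check on that “ALL”, here is a minimal sketch (assuming a simple file-copy style backup; the directory names are hypothetical) that walks the source tree and reports anything missing from, or different in, the backup target:

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file's contents, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def missed_files(source_dir: str, backup_dir: str) -> list[str]:
    """Return files present in the source but absent from, or
    different in, the backup -- the 1-2% you should never accept."""
    src, dst = Path(source_dir), Path(backup_dir)
    missed = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        rel = f.relative_to(src)
        copy = dst / rel
        if not copy.is_file() or checksum(f) != checksum(copy):
            missed.append(str(rel))
    return sorted(missed)
```

If `missed_files("/data", "/backup")` returns anything but an empty list, the backup did not complete at 100%.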
Principle #2: Don’t waste time and space
Yes, you need to back up all of your data, but are you seriously still backing up the same information more than once? I have some news for you: you only need to back it up once. I’m sorry to say it, but backing up the same files day after day, or from multiple locations, is a poor use of resources (and your time).
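The “back it up once” idea can be sketched with content hashing: files with identical content hash to the same digest, so each group needs only one copy in the backup. This is a simplified illustration, not any vendor’s dedup engine:

```python
import hashlib
from pathlib import Path

def dedup_plan(paths: list[str]) -> dict[str, list[str]]:
    """Group files by a SHA-256 of their content.
    Each group needs exactly ONE copy in the backup;
    the rest are duplicates wasting space and time."""
    groups: dict[str, list[str]] = {}
    for p in paths:
        digest = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        groups.setdefault(digest, []).append(p)
    return groups
```

Back up one file per group and record the other paths as references, and the same bytes never travel to the backup target twice.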
Hardware: what are my options?
Pretty straightforward: the options revolve around the main four: tape-based, disk-to-disk (D2D), cloud, or remote server.
Tape-based backup
Certainly one of the oldest, and proven, technologies available; I’ve been in IT for more than 18 years, and it is only recently that other technologies have risen and taken a significant piece of the market (see D2D backups below). Tape-based backup uses backup software and a tape drive to create tapes for both onsite and offsite storage. Although very well known and highly efficient when it comes to data retention, this technology falls short of the most demanding expectations of today’s agile datacenter: efficiency and speed on backups and restores.
Disk-to-disk (D2D) backup
Certainly among the newest technologies entering the backup world, it gets more attention as time advances and the “Software Defined Datacenter” takes hold. An additional server or a Storage Area Network (SAN) is installed to store data backups. The backups complete quickly and efficiently, and restores can be done from several different points in time. The goal was, and is, to optimize backups and recovery of virtual content (in other words, virtual machines) by processing the required backup at the guest and image level through the host.
Cloud (Internet)-based backup
Part of the newest options and technologies entering our datacenters: backup data is transferred over the Cloud (Internet) to a server or backup device run by a provider of remote backup services. This is almost always done using a D2D backup as described above, so that the data can be compressed and transmitted efficiently. Restore times are lengthened by the need to transport the files from the remote location. If you’re concerned about data privacy, this might not be the right solution for you; otherwise it can be very cost efficient, bringing the cost of backups down to as little as $0.05/TB.
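Since cloud backup lives or dies by what you push over the Internet link, here is a minimal sketch of compressing a local backup file before upload, using Python’s standard gzip module (the upload step itself, and the file name, are assumptions for illustration):

```python
import gzip
import shutil

def compress_for_upload(backup_file: str) -> str:
    """Gzip a local D2D backup before shipping it to the cloud
    provider, trading a little CPU for a lot of bandwidth."""
    out = backup_file + ".gz"
    with open(backup_file, "rb") as src, gzip.open(out, "wb") as dst:
        shutil.copyfileobj(src, dst)   # streamed, so large files are fine
    return out
```

On typical backup data (text, databases, VM images with free space) the compressed file is dramatically smaller, which shortens both upload and restore transfers.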
Remote server backup (replication)
Leveraged by today’s various virtualization vendors as what is referred to as “BCDR” (Business Continuity and Disaster Recovery), files are replicated over a WAN or Internet link to a redundant server in real time. This provides quick recovery from any problem with the production server: with the proper software, the replica server can stand in for the production server without the users realizing that there was any interruption. Often referred to as automatic failover, or HOT DR, it ensures a transparent and seamless continuity of business activities.
Software solutions for Virtualized environments
I hope that your datacenter is heavily virtualized; if it is not, you’re missing out. And if it is, I hope you have reviewed your backup schedules and strategies! It isn’t big news that we shouldn’t back up our virtualized environments the same way we’ve done backups on our physical servers for the last 15 years. Most of us are aware of that, but too often I keep hearing that a datacenter’s virtual content was lost to poor backup schedules and strategies, resulting in a very frustrating situation where data cannot be retrieved.
Traditionally, we have seen an agent within the server we wished to back up. That agent, communicating with a backend backup server, executed the backup tasks locally on the server to be backed up, and it worked great for 15-plus years. Naturally, that brings a high level of credibility.
When hypervisors entered the datacenter, most of us were confused about how to back up virtual content hosted on far fewer “hosts”, while that virtual content was growing and exploding as organizations used that edge to grow beyond the highest expectations. We hit the road with what we knew best, an agent inside the virtual content, and as virtual machines multiplied we quickly realized how wrong we were.
We had to reinvent how backups were performed in virtualized environments. Vendors quickly jumped on the opportunity to reinvent the wheel, and highly succeeded. In a nutshell: no more agent within a virtual machine to perform a backup. No, no, NO! Instead, an agent outside the virtual machine, at the host level, talks to the hypervisor and requests that snapshots be taken of the virtual machine targeted for backup, ensuring the virtual machine is fully backed up, the applications it hosts keep running, and safe restores are planned accordingly.
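To illustrate the snapshot-copy-cleanup sequence, here is a deliberately toy, in-memory stand-in for a hypervisor; every name in it is invented for illustration and reflects no vendor’s actual API:

```python
class ToyHypervisor:
    """In-memory stand-in for a real hypervisor, used only to show
    the agentless workflow: snapshot at the host level, copy the
    frozen image, then release the snapshot."""
    def __init__(self, vms: dict[str, bytes]):
        self.vms = vms          # VM name -> current disk image
        self.snapshots = {}     # VM name -> frozen point-in-time image

    def snapshot(self, vm: str) -> None:
        self.snapshots[vm] = self.vms[vm]   # freeze a consistent copy

    def delete_snapshot(self, vm: str) -> None:
        del self.snapshots[vm]

def backup_vm(hv: ToyHypervisor, vm: str) -> bytes:
    """Agentless backup: no agent in the guest, only host-level calls."""
    hv.snapshot(vm)             # the VM keeps running during all of this
    try:
        return hv.snapshots[vm]  # in real life: stream to the backup target
    finally:
        hv.delete_snapshot(vm)   # never leave snapshots lingering
```

The guest never runs backup software; the host freezes a consistent image, the agent copies it, and the snapshot is deleted so it cannot grow and eat the datastore.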
These new techniques leverage variable-length deduplication, reducing backup time by storing only the unique daily changes while maintaining daily full backups for immediate, single-step restores. The revolution has started… deduplication.
The beauty of these new technologies was, and still is, that they positively impact our replications to secondary sites for enhanced protection of the data backed up. Backups are indeed deduplicated! A deduped backup sends only the changed blocks, reducing network traffic.
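A toy illustration of the changed-blocks idea, using fixed-size blocks and same-size images for simplicity (real products use variable-length chunking, which this sketch does not attempt):

```python
def changed_blocks(old: bytes, new: bytes, block: int = 4096) -> dict[int, bytes]:
    """Compare fixed-size blocks of two image versions and return only
    those that differ -- the payload a deduped replication job would
    actually send over the WAN, keyed by byte offset."""
    delta = {}
    for off in range(0, max(len(old), len(new)), block):
        o, n = old[off:off + block], new[off:off + block]
        if o != n:
            delta[off] = n
    return delta

def apply_delta(old: bytes, delta: dict[int, bytes]) -> bytes:
    """Rebuild the new image at the secondary site from the old copy
    plus the delta (assumes old and new images are the same size)."""
    out = bytearray(old)
    for off, blk in delta.items():
        out[off:off + len(blk)] = blk
    return bytes(out)
```

If only one block out of thousands changed since yesterday, only that block crosses the wire, yet the remote site can still reconstruct a full, restorable image.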
Smaller, compressed backups kept locally, with a copy sent over the wire to a secondary location for enhanced protection of our crucial data, are leading the datacenter transformation of backups.
Among the pros of this established technology: reduced backup time through NDMP acceleration, the elimination of lengthy level-0 full backups, protection of data at the edge, and reduced IT dependence through end-user, self-service restores.
It centralizes and streamlines remote office backups and recoveries while ensuring application-consistent backup and recovery for the big ones such as IBM, Microsoft, Oracle, and SAP enterprise applications, providing high-performance dedup while giving advanced visibility and control to application owners.
In conclusion: The Importance of Backups
A properly planned and implemented backup process is vital to everyone. Hardware failures can be covered by warranties or fixed by purchasing replacement parts. Insurance can be purchased to cover damage.
Business-critical data lost due to equipment failure, however, is almost impossible to replace: it represents many hours of effort over a long period of time, and recreating that information is next to impossible.
There is no substitute for a properly functioning backup and a fine-tuned strategy allowing agile and performant data life cycle management. Data is created, lives, and “dies”. This creates three zones where data can be seen as hot, cool, or cold. Backing up each layer is a sensitive activity and a crucial operation for the health of an organization.
Back it up! All of it! And make sure it is efficient and functional. THAT is the mission.