Breaking the Top Four Myths of Tape vs. Disk Backup

Are you overlooking tape because you think it's outdated? Times have changed…and so has tape.

Disk as a backup target has become a key enhancement in many data centers. Disk is believed to be faster than tape, almost as cost-effective as tape, and more resilient than tape. In reality, tape has its own unique value in each of these areas. And when the fourth myth of tape—that it must be treated separately—is debunked and tape is integrated tightly with disk, the combination resolves many of the backup storage challenges facing data centers today.

Myth 1: Tape Is Slower Than Disk

One of the most common assumptions is that backing up to disk is faster than backing up to tape. In reality, when the raw speed of tape is compared with the raw speed of disk, tape is actually much faster. And when the additional housekeeping that most disk backup systems perform is factored into overall throughput, disk falls further behind. Most disk backup systems must provide data redundancy but, to keep capacity costs down, use some form of RAID rather than mirroring. While RAID is more capacity-efficient than mirroring, it suffers a noticeable performance loss under write-heavy conditions, and it should come as no great revelation that backup processes are extremely write-heavy.
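The RAID write penalty can be illustrated with a little arithmetic. The per-drive IOPS and drive count below are assumptions chosen for illustration; the penalties themselves are the textbook values, since a small RAID 5 write costs four disk I/Os (read old data, read old parity, write new data, write new parity) and RAID 6 costs six:

```python
def effective_write_iops(disks, iops_per_disk, write_penalty):
    """Total small random-write IOPS a RAID group can sustain,
    given the number of back-end I/Os each logical write costs."""
    return disks * iops_per_disk / write_penalty

# Hypothetical 8-drive group, 150 random-write IOPS per drive:
print(effective_write_iops(8, 150, 1))  # no protection: 1200.0
print(effective_write_iops(8, 150, 4))  # RAID 5: 300.0
print(effective_write_iops(8, 150, 6))  # RAID 6: 200.0
```

Under these assumptions, RAID 5 leaves only a quarter of the raw write capability available to the backup stream, and RAID 6 only a sixth.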

In addition, again to make disk more affordable to the backup process, most disk-based backup systems leverage some form of deduplication to eliminate redundant data from backup storage. While deduplication has been shown to provide as much as a 20:1 capacity efficiency gain, given the high ingest rates typical of backup jobs, deduplication can cause performance issues as it consumes processor cycles. This means that either extra CPU horsepower must be invested in the disk backup device to maintain acceptable performance, or the deduplication process must be done after the backup completes, which requires additional disk capacity and further increases the price premium of disk backup.
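The mechanics behind that CPU cost can be sketched in a few lines: every incoming chunk must be hashed before the system can decide whether it is redundant. This toy example (fixed-size chunks, SHA-256, in-memory index) is a deliberate simplification of real variable-chunking dedup engines:

```python
import hashlib

def dedup_ratio(chunks):
    """Ratio of logical bytes ingested to unique bytes actually stored
    after chunk-level deduplication."""
    seen, logical, stored = set(), 0, 0
    for chunk in chunks:
        logical += len(chunk)
        # Hashing every chunk is where the processor cycles go.
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:
            seen.add(digest)
            stored += len(chunk)
    return logical / stored

# Twenty identical nightly backups of the same 4 KB block
# dedupe to a single stored copy -- the best-case 20:1.
backups = [b"x" * 4096] * 20
print(dedup_ratio(backups))  # 20.0
```

The best-case ratio only materializes when the backup stream really is that repetitive; the hashing work, by contrast, is paid on every chunk regardless.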

In short, to be feasible as a backup solution, disk has to include a lot of complicated processes like RAID for data redundancy, error checking for data integrity, and deduplication to narrow the price gap with tape. But these processes can severely eat into disk performance, making its speed disadvantage even worse.

Tape is relatively simple when it comes to writing data, and here simple means faster. As stated earlier, based on specifications, tape is faster per drive than disk and has less to do as it writes data so there is no degradation of that advantage. No RAID or, in most cases, deduplication takes place. Tape is already affordable, so there is no need to add data protection or capacity optimization techniques that consume I/O performance. If redundancy is needed, an extra copy can be made with little concern over cost.

A more accurate description of disk backup's performance advantage is that it's more "patient" than tape. When the input data stream is inadequate, the tape drive must slow down, wait for data, and then spin back up, while disk does not have to go through this process.
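This stop-and-wait behavior (often called "shoe-shining") can be modeled crudely. The 140 MB/s native rate matches the LTO-5 specification, but the burst size and reposition time below are illustrative assumptions, and modern drives mitigate the effect by speed-matching to the incoming stream:

```python
def effective_tape_rate(feed_mbps, native_mbps=140.0,
                        reposition_s=2.0, burst_mb=256.0):
    """Crude shoe-shining model: when the host can't feed the drive at
    its native rate, the drive waits for its buffer to refill, writes a
    burst at full speed, then stops and repositions before the next one."""
    if feed_mbps >= native_mbps:
        return native_mbps  # drive streams continuously at native speed
    refill_s = burst_mb / feed_mbps     # host fills the buffer
    write_s = burst_mb / native_mbps    # drive empties it at full speed
    return burst_mb / (refill_s + write_s + reposition_s)

print(round(effective_tape_rate(50), 1))   # ~28.6 MB/s
print(effective_tape_rate(200))            # 140.0 MB/s
```

Under these assumptions, feeding a 140 MB/s drive at only 50 MB/s yields roughly 28.6 MB/s effective, below even the feed rate, which is exactly why a disk buffer in front of tape pays off.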

However, when tape is integrated with a small and simple disk area—that is, disk not encumbered with sophisticated data protection or capacity optimization techniques—tape delivers the best of both worlds: cost-effective and high-performing backups.

Myth 2: Disk Is Almost as Affordable as Tape

Several factors have led disk to be the first stage in many data-protection processes. The capacity per drive has continued to increase, bringing disk cost per GB down significantly, and techniques like compression and deduplication allow even more data to be stored in the same physical capacity. This combination plus the "patience" factor described above has led to the emergence of disk backup.

These capacity reduction techniques and the increased density per drive have led some disk-based backup vendors to claim cost parity with tape, or at least costs that are "close enough." But this ignores some significant cost factors.

For one, it ignores the fact that not all data is compressible or redundant enough to achieve the best-case deduplication ratio (~20:1). Examples include rich media files and data from applications with high data-turnover rates, such as document scanning systems.

For another, it also ignores the major cost of upgrading disk backup systems. When a disk backup system fills up, the solution is to decrease the data retention times (not always feasible due to compliance and other requirements) or, more likely, to purchase an additional disk backup system. Since most systems are standalone units, there is a limit to internal upgrades, which means the cost of more capacity must include a whole new controller and power supply as well as disks. Even with scale-out storage systems, an additional node has to be purchased when more disk capacity is needed. While these systems more evenly spread out the capacity investment, they are not as price-competitive as a tape capacity upgrade, which simply requires buying another tape cartridge.

Tape cartridges are easy to add and deliver high capacity for backup. For example, LTO-5 tape media can deliver 1.5TB native and 3TB compressed capacity per cartridge for less than $100. No amount of deduplication or compression will match that $33 per TB any time soon! Of course, disk has its role, and the affordability of the platform is important. Integrating the two platforms—disk and tape—leverages the strengths of each and helps avoid very expensive capacity upgrades.
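The cost figure is simple arithmetic, reproduced here; the $100 cartridge price and the 2:1 compression behind the 3 TB figure come from the LTO-5 numbers above:

```python
def cost_per_tb(price_usd, capacity_tb):
    """Media cost per terabyte of capacity."""
    return price_usd / capacity_tb

tape_native = cost_per_tb(100, 1.5)      # ~66.67 USD/TB native
tape_compressed = cost_per_tb(100, 3.0)  # ~33.33 USD/TB at 2:1 compression
print(round(tape_native, 2), round(tape_compressed, 2))
```

The compressed figure is the "$33 per TB" cited in the text; even the native figure undercuts typical disk backup capacity upgrades, which bundle controllers and power supplies with the drives.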

Myth 3: Tape Isn't As Resilient as Disk

One of the appeals of disk backup systems is their perceived reliability. Most disk backup systems use some form of RAID to protect from drive failure, and redundant power supplies and dual-ported connectivity are becoming increasingly common. The concern with disks, though, is the amount of risk exposure they can cause should one of these components fail.

For example, if the system experiences a drive failure, both backup and recovery performance plummet. If a second drive fails during the RAID rebuild (or a third under RAID 6), then 100 percent of the data is lost. While dual or triple drive failures may seem unlikely, the ramifications are severe enough to warrant real concern.

Moreover, as drive capacities increase (something that disk-based backup systems are quick to adopt because of pressures to narrow the price gap with tape), the time it takes for the rebuild process to complete also increases. The longer the rebuild process, the more chances for the unlikely to become likely.
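The growth of that exposure window is easy to quantify. The 50 MB/s sustained rebuild rate below is an illustrative assumption (real rates vary with controller load), and the result is a best case, since rebuilds run longer when the array is also serving backup or restore I/O:

```python
def rebuild_hours(drive_tb, rebuild_mb_per_s=50.0):
    """Best-case RAID rebuild time: drive capacity divided by the
    sustained rebuild rate. Uses decimal units (1 TB = 1,000,000 MB)."""
    return drive_tb * 1_000_000 / rebuild_mb_per_s / 3600.0

print(round(rebuild_hours(2), 1))  # 11.1 hours for a 2 TB drive
print(round(rebuild_hours(8), 1))  # 44.4 hours for an 8 TB drive
```

Quadrupling the drive capacity quadruples the rebuild window, and with it the window in which a second failure is fatal.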

While most tape systems have redundant power and connectivity, they do not typically have a RAID style of data protection. Redundancy is most often achieved by making a secondary copy of the tape after the backup process completes. While possibly more time-consuming, this is a far more granular protection method. If one tape fails, none of the other tapes are affected. Data can still be read from the other tapes, with no performance impact either. Most importantly, if tape and disk are integrated, the disk system could create two identical tape copies simultaneously at very high speed, which eliminates the extra time involved with tape duplication.

Myth 4: Tape and Disk Must Be Separate

The introduction of disk-based backup solutions has created yet another silo of storage to be managed in the data center environment. It was functionally simpler for disk backup vendors to deliver a standalone platform than to integrate it with multiple tape libraries. Some vendors did try to come out with integrated tape and disk solutions, but those required that the existing tape solution be replaced. Since the service life of the typical tape library is longer than that of a disk system, most data centers were not prepared to replace their tape libraries and purchase new disk backup hardware in a single transaction. The result was that most customers purchased standalone disk backup systems. By convincing users that tape is "dead," vendors do not have to worry about integration. In reality, most users struggle with how to get disk and tape to work together.

This need for easy integration of the two storage types—disk and tape—has finally been met by backup virtualization. Also known as virtual tape library (or VTL), this approach allows disk and tape to work in tandem without having to constantly fine-tune the environment. Data is backed up at high speed into the disk-based cache in the VTL system, from where it is automatically backed up to legacy or new storage devices, including disk and tape from IBM and other vendors.

This approach leverages the best attributes of each platform and optimizes the backup environment. For example, a small but simple high-speed disk cache can be used to store inbound backup data, thereby reducing the backup window and freeing the server for its production tasks. From the VTL disk-based cache, as time allows, data can be simultaneously directed to a deduplication-capable disk system and a tape library.

Backup data is stored on the media most appropriate for its recall needs. The cache area can be used for high-speed recovery of the most recent copies of data. The data deduplication system can be used for medium-term recoveries of data. The tape system can be used for long-term retention of data. All of this can be managed across operating system platforms and backup application types, greatly simplifying the overall backup process.
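That placement logic amounts to a simple age-based policy. The thresholds and tier names below are hypothetical, not defaults of any particular VTL product:

```python
def recovery_tier(age_days, cache_days=7, dedup_days=90):
    """Pick the storage tier expected to hold a backup of a given age.
    Thresholds are illustrative; a real VTL policy would also weigh
    capacity, retention rules, and compliance requirements."""
    if age_days <= cache_days:
        return "vtl-cache"   # high-speed disk cache: most recent copies
    if age_days <= dedup_days:
        return "dedup-disk"  # deduplicated disk: medium-term recoveries
    return "tape"            # tape library: long-term retention

print(recovery_tier(2))    # vtl-cache
print(recovery_tier(30))   # dedup-disk
print(recovery_tier(400))  # tape
```

The point of the sketch is that recall speed tracks recall likelihood: the newest data, which is restored most often, sits on the fastest tier.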

Conclusion

Tape has strengths that are often overlooked because of concern over its shortcomings. Continued advancements in tape technology plus the capabilities brought forth by integrating tape and disk with a backup virtualization solution lead to a fast, reliable, and cost-effective solution that all data centers should consider.

Ed Ahl

Ed Ahl is Director of Business Development for Tributary Systems, Inc. His deep storage expertise spans multiple technologies and providers.
