Globalization, the Internet, competitive pressures, and higher consumer expectations have significantly increased the need for dependable 24x7 information systems in businesses, as well as in not-for-profit and government organizations. Fortunately, a number of products, from a variety of vendors, provide the means to improve the availability of data and applications, but these products alone are not enough.
Information technology is complex. The failure of any one of thousands of hardware and software components or the failure of the link between those components may be sufficient to shut down an application—or possibly all of an organization's systems. Thus, ensuring that none of the pieces in the high availability puzzle are missed is a demanding task that requires thorough analysis, rigorous planning, effective education, and expert execution of the plans.
The preceding paragraph refers to only hardware and software failures, but there are many more potential causes of data and application downtime. The mistake that organizations often make when considering their high availability requirements is to equate reliability and availability. If the two were equivalent, organizations would not need to do much to ensure reasonably high availability. It would occur naturally. Reliability refers to the mean time between failures (MTBF) of a system or a piece of hardware or software. Because information technology, particularly hardware, is now so reliable, only organizations with the severest of availability requirements are likely to spend significant time worrying about its failure.
Unfortunately, reliability and availability are not synonyms. Hardware and software failure are only two of many causes of downtime, and they're far from the most serious. Disasters such as fires, floods, hurricanes, earthquakes, ice storms, and lightning strikes can also disrupt operations for hours or even days at a time, yet these too are infrequent events. Nevertheless, there is one cause of downtime that is frequent and unavoidable—planned maintenance. On a regular basis, organizations shut systems down to perform hardware and software upgrades, back up and reorganize databases, or undertake other vital maintenance. In the past, when systems ran only during "business hours," this wasn't a problem. Maintenance, along with the running of batch jobs, was undertaken during those off-hours. Today, the increased need for 24x7 operations has eliminated those "batch and maintenance windows" and, as a result, has sent organizations scrambling to find ways to perform necessary maintenance and run batch jobs without shutting down online operations.
Recognizing these differences between reliability and availability is the first step on the road to achieving higher availability. But avoid the urge to rush out and buy something labeled as high availability hardware or software immediately upon coming to that realization. Fulfilling your availability objectives requires considerably more work than that, including analyzing the requirements and designing, planning, implementing, validating, sustaining, and managing the solution.
Different companies have different business demands. A banking application that runs Internet-based services may process transactions that can each represent thousands or even millions of dollars—transactions that originate electronically and that, at least initially, have no reliable hardcopy backup. This sort of system must be available around the clock, and there is no margin for error when it comes to protecting the integrity of the data it processes. In contrast, a system that a single-shift manufacturer uses to record orders originally received on paper has far less stringent availability requirements.
Availability needs differ not only in magnitude, but also in their nature. Organizations must consider two generic types of requirements: data integrity and application availability. For some organizations, particularly those that process high-value transactions entirely electronically, data integrity is absolutely critical. They cannot afford to lose a single transaction or allow a single piece of data to become irrecoverably corrupted for any reason. For other companies, such as those that process low-value transactions with accompanying paper trails, some data loss may be acceptable (as long as the loss can be detected) because the errant data can be recreated.
There's also a range of application availability requirements. For some applications, such as those that support high transaction volumes for around-the-clock, global operations or Web-based retail sites, any downtime, whether planned or unplanned, is costly. For other applications, such as low-value, back-office administrative systems, off-hours downtime is not an issue, and even the occasional hour of downtime during business hours will not have a significant impact on the organization.
Recognizing these differences is important because investments in higher availability follow a law of diminishing returns. For an organization that, to date, has done nothing to address availability, the cost of eliminating the first hour of downtime is orders of magnitude less than the cost of eliminating the last hour of downtime. Likewise, protecting against the loss of even a single transaction in the event of a disaster typically costs more than a solution that tolerates a few lost transactions in those rare circumstances. Almost all organizations can justify taking the most basic steps toward enhanced availability. On the other hand, only organizations that place a particularly high value on each hour of operations will achieve a return on the investment required to almost totally eliminate downtime and lost data.
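The diminishing returns described above become concrete when availability targets are translated into hours of downtime. Each additional "nine" of availability buys back a tenth as much downtime as the one before it, yet typically costs far more to achieve. The following sketch simply converts standard availability percentages into annual downtime; the percentages are conventional industry shorthand, not figures from the text.

```python
# Convert an availability percentage into the annual downtime it implies.
# Note how each added "nine" eliminates only a tenth of the remaining
# downtime -- the arithmetic behind the law of diminishing returns.
HOURS_PER_YEAR = 24 * 365  # 8760

def downtime_hours(availability_pct: float) -> float:
    """Annual hours of downtime implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% availability -> {downtime_hours(pct):.2f} hours down per year")
```

Running this shows the gap narrowing from roughly 87.6 hours per year at 99% to under six minutes at 99.999%, while the cost of closing each successive gap generally rises steeply.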
Applying this requirements analysis at the enterprise level will generally lead to suboptimal returns on your availability investments. Consider a major online retailer that processes millions of dollars of orders an hour and that operates in a very competitive environment. Eliminating as little as one hour of Web site downtime per year might generate a net benefit of hundreds of thousands or possibly more than one million dollars. The company would, therefore, be justified in spending a considerable sum to protect the systems that support its online retail operations. On the other hand, the unavailability of its human resources system for an hour—particularly in the middle of the night or on a weekend—would result in little or no financial loss. If the company made the same investment in availability for all of its systems, it would either spend too much to protect its human resources application or too little to protect its Web retailing application. Thus, it's important to perform the analysis at a level of detail such that the availability requirements are consistent within the unit under investigation.
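The retailer example can be sketched as a simple per-system cost-benefit comparison. All figures below are hypothetical, chosen only to mirror the scenario in the text: the same availability investment produces very different net returns for the Web retail system and the human resources system.

```python
# Hypothetical figures for two systems at a fictional online retailer.
# cost_per_hour: business loss per hour of downtime
# hours_eliminated: annual downtime hours the investment would remove
# investment: annual cost of the availability solution
systems = {
    "web_retail": {"cost_per_hour": 1_000_000, "hours_eliminated": 1, "investment": 400_000},
    "human_resources": {"cost_per_hour": 500, "hours_eliminated": 1, "investment": 400_000},
}

def net_return(s: dict) -> int:
    """Benefit of avoided downtime minus the cost of the investment."""
    return s["cost_per_hour"] * s["hours_eliminated"] - s["investment"]

for name, s in systems.items():
    print(f"{name}: net return ${net_return(s):,}")
```

With these assumed numbers, the identical investment yields a $600,000 net benefit for the Web retail system and a $399,500 net loss for the human resources system, which is why the analysis must be performed per system rather than enterprise-wide.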
Don't restrict the analysis to solely business needs. Also inventory your information technology and the interconnections between individual pieces of hardware and software. Consider which applications will be affected by the unavailability of any of those technologies. At the same time, document hardware and software maintenance schedules and indicate how each type of maintenance will affect application and data availability. This information will be important when it comes to selecting availability products because you must be certain that they will support your hardware, software, and existing processes.
The design phase of a high availability project uses the results of the analysis phase to sketch out a blueprint for an architecture that will fulfill your objectives. Because this blueprint must be shaped by what is possible given today's technologies, at this stage you will also have to evaluate the availability products on offer by the various vendors and map them against both your hardware and software environment and your availability objectives. Because this evaluation must be completed as part of the design activities, the final selection of the products that will be used is typically made in this phase.
The result of the planning phase is a detailed map of how you will get from your current level of availability to the one identified in the analysis phase and scoped out in the design phase. Drawing on those results and following a rigorous project management methodology, the planning phase produces a detailed implementation plan that describes the following:
- Implementation objectives—Restate the objectives that were established in the analysis phase. These represent your desired availability endpoints. The success or failure of the implementation phase (or, more likely, phases) will be measured against these objectives.
- Implementation approach—Identify the high availability products that will be employed and determine whether they will be installed and implemented using internal resources, external resources, or a combination of the two. If the availability solution will be implemented in multiple phases, map out those phases such that, wherever possible, the greatest benefits will be generated earliest.
- Key implementation assumptions—Spell out any assumptions that are critical to the success of the project, such as the availability of key resources or technologies.
- Project organization, with specific roles, responsibilities, and activities—Use your normal project management methodology to define all aspects of the high availability implementation project.
- Key dependencies—Identify any tasks that are dependent on the completion of other tasks.
- Project schedule, with key milestones—Set out a timeline for all activities with milestones against which the project schedule can be tested throughout the project, allowing corrective action to be taken as required should the schedule begin to slip.
- Education requirements—Implementing and then managing a high availability infrastructure inevitably requires skills that didn't previously exist in the organization. Identify those skills and develop an education plan that will successfully build them.
If your implementation phase is completed successfully, at the end you will have a high availability solution that protects your data and applications to the level specified in the analysis phase. Getting to that point involves installing the new availability technologies, configuring the software, integrating the administrative processes required by the new technologies into your other business and IT processes, and then testing the solution to ensure that it has been installed correctly and will perform according to the specifications.
During this phase, you should develop an Availability Operations Handbook that describes the ongoing availability management processes. It may not be necessary to develop the handbook from scratch. Instead, you may be able to start with a handbook provided by your high availability vendor and customize it for your environment. The finished handbook should include the following:
- Reference information such as contacts and important telephone numbers, along with an inventory of the hardware and software protected by the high availability infrastructure
- A high-level diagram of the computing environment and the data integration topology
- Basic instructions for backing up data, starting and stopping the availability products, swapping the roles of the primary and backup systems, upgrading products, and applying fixes
- An operations schedule of daily, weekly, and monthly activities, along with any other activities triggered by specific events
- An explanation of how problems will be supported internally and/or through the vendor, including a description of the escalation procedure to follow if a problem is not resolved in a timely fashion
The Availability Operations Handbook should also include sections used to log the following on an ongoing basis:
- Issues that arise and how they were resolved
- Completed maintenance tasks
During the education phase, classroom, onsite, and/or online training gives employees the skills they need to manage, administer, and maintain the implemented solution on a day-to-day basis. Where appropriate, the vendor's off-the-shelf education offerings should be customized to address issues specific to your organization.
Effective teaching is a difficult skill to master, but it is essential if the education programs are to achieve the desired results. Therefore, a vendor's training capabilities, including the experience of their instructors and the quality of their courses, should be one of the factors you consider when selecting a high availability vendor.
What works in theory doesn't necessarily succeed in practice. After implementation, the full high availability solution must be validated against the stated objectives. That means more than just checking that data is being replicated to a backup site or that you can successfully switch users from a primary server to a backup—although those are certainly components of the validation procedure. In addition to ensuring that the technology works as specified, you must also validate that your employees have the necessary skills and that the defined processes are being executed properly and are having the desired effects.
During the validation phase, revisit the Availability Operations Handbook and verify that it accurately and completely documents the necessary processes. Test employees' reactions to a simulated disaster to ensure that they can fully recover operations within the specified time constraints. Run diagnostic tests to ensure that backup data accurately reflects production data. And test the switchover from the primary server to the backup to ensure that process works as expected.
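One of the diagnostic tests mentioned above, confirming that backup data accurately reflects production data, can be sketched as a simple content comparison. This is an illustrative approach only, not a procedure from any particular availability product; the file paths and record layout are assumptions, and a real replication environment would use the verification tools supplied by the vendor.

```python
# A minimal sketch of one validation check: comparing content hashes of
# a production dataset and its backup copy. Paths are hypothetical.
import hashlib
from pathlib import Path

def dataset_fingerprint(path: Path) -> str:
    """Hash a file's contents in chunks so large datasets compare cheaply."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backups_match(primary: Path, backup: Path) -> bool:
    """True when the backup is byte-for-byte identical to production."""
    return dataset_fingerprint(primary) == dataset_fingerprint(backup)
```

A check like this belongs in the validation phase's regular test runs, and any mismatch it reports should feed into the lessons-learned log described below.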
As you perform all of these validation processes, document lessons learned from anything that goes wrong.
If you use external resources, such as professional services from your availability vendor, to perform this and/or the preceding phases of the availability project, the service provider will turn the new availability infrastructure and responsibility for its ongoing maintenance over to you at the end of the validation phase.
Your computing environment will change over time. Hardware, applications, and systems software will be upgraded from time to time. New data fields will be added to databases. Security processes, hardware, and software will change. The geographic dispersal of your information technology may also change. Any of these alterations may affect your high availability infrastructure and processes. Therefore, it's essential to upgrade, modify, and, most importantly, test your high availability solution on a regular basis to ensure that it continues to provide an adequate level of data and application availability.
In an evolving environment, there's a tendency to focus on changes in the technologies while ignoring the relevant processes. That's a mistake. On a regular basis, review the Availability Operations Handbook to ensure that the procedures it documents are still appropriate in the current technology environment.
It's not just the technology environment that changes over time; the business environment also changes. Evolving customer demands, the adoption of new lines of business, increasing competitive pressures, and other factors will inevitably alter your organization's availability requirements. Furthermore, high availability is not an all-or-nothing proposition. Your first availability project will likely not tackle all of your enterprise-wide needs. Thus, there will be an ongoing need to undertake new projects to improve availability. Consequently, the phased availability planning and implementation methodology described above will need to be repeated and managed on an ongoing basis.
Availability Doesn't Just Happen
If there is one lesson that should be taken from the preceding discussion, it is that availability doesn't just happen. You can't simply buy a product, plug it into your information technology architecture, and forget about it. Computing environments today are very complex, with thousands of points of failure and myriad maintenance requirements. Except in the very simplest of architectures, a single, unmanaged off-the-shelf product will not prevent all potential downtime.
Business environments are no less complex. Availability needs vary among organizations and even among departments and applications within a single organization. Consequently, to derive the highest possible returns from each availability investment, availability must be planned and managed in a way that takes into account these differing requirements.
Because of this technology and business environment complexity, protecting the availability of data and applications requires a comprehensive and rigorous methodology that includes meticulous analysis and planning, expert implementation, and thorough ongoing management.
Alan Arnold is President and COO of Vision Solutions. Prior to joining Vision in 2000, he was a senior technology executive and subject matter expert for IBM technology at Cap Gemini Ernst & Young U.S. LLC. Arnold is recognized as an expert in the field of managed availability technology. He has authored or co-authored five books on technology and business topics that have been published worldwide. He has also written numerous articles for some of the leading publications in the industry.