High Availability: Dealing with Expectations


A friend of mine named Bob recently became IS manager at a medium-sized manufacturing company in the Midwest. Last year, the company realized that it needed a better means of maintaining the availability of its information systems, and Bob began looking at its options. The company had a limited budget and limited in-house expertise, but Bob had an overwhelming desire to learn as much as possible. He was pleasantly surprised to discover that the iSeries had a lot to offer.

IBM's HA Web Site

Bob started out by looking at IBM's iSeries High Availability Web site. IBM long ago recognized that a well-engineered high availability (HA) profile was a good tactic for selling customers new hardware. Consequently, the iSeries' advanced engineering makes it an extremely viable HA server. However, because the iSeries' architecture is somewhat unusual in the industry, IBM also recognized that educating the customer base would be key to its success. Therefore, IBM's Web site is designed as an educational portal, and it even links to a number of ongoing seminars to help customers familiarize themselves with the topic. In addition, IBM offers services--through its own in-house panel of experts and through third-party service providers--to help customers approach, design, and implement an HA solution for their particular sites.

Finally, the site provides a fairly complete list of iSeries options that were designed specifically for HA deployment. These options work in concert with Business Partner solutions to make the support profile as robust as possible. They include clustering technologies such as Independent Auxiliary Storage Pools (IASPs), switched disks, and IBM's OptiConnect high-speed communications channels.

Defining Availability

However, Bob quickly realized that he needed to jump-start his educational process from ground zero. He knew that defining the availability of his entire IT infrastructure was already a moving target: As new equipment and services had been added or upgraded in the past, there had been no particular plan for sustaining the systems over time, and as these devices and services had become more integrated, the overall scheduling of maintenance had become increasingly complex.

What Bob needed was a place to start, a strategy to attack the growing interdependencies of his company's systems. He quickly came to believe that, by starting with an examination of the iSeries--the system that was central to his overall data center--he'd have the best chance of using its HA profile as a model that could be extended to his other servers and equipment.

First, he needed to define exactly what "availability" meant to his organization and then learn precisely what his management required of the overall HA system. Here, IBM's Web site was also useful.

IBM defines availability in terms of system downtime: Anything that affects a user's ability to access or use the overall information system impacts the system's availability. Identifying and minimizing the impact of those interruptions is what defines the availability of the system. Availability is not just the mean time between hardware failures of any particular piece of equipment; it encompasses the entire structure of machine and software interdependencies, including traffic peaks, scheduled jobs, storage, backup, and the need to identify how quickly any particular part of the system should recover from an interruption.

IBM further subdivides the interruptions into downtime criteria: scheduled downtime for maintenance and unscheduled downtime in response to a calamity. A system has high availability if its unscheduled downtime is very low and its scheduled downtime (for maintenance, etc.) is managed in a measurable and controlled fashion.
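The distinction between scheduled and unscheduled downtime can be made concrete with a little arithmetic. The sketch below (not from IBM's materials, and using purely hypothetical outage figures) computes availability as the fraction of the year a system is usable, with a switch for whether planned maintenance windows are charged against the number--a point on which organizations differ.

```python
# Illustrative sketch: availability as a percentage of the year.
HOURS_PER_YEAR = 365 * 24  # 8760

def availability(unscheduled_hours, scheduled_hours, count_scheduled=True):
    """Return availability as a percentage of the year.

    count_scheduled controls whether planned maintenance windows are
    charged against availability -- organizations differ on this point.
    """
    downtime = unscheduled_hours + (scheduled_hours if count_scheduled else 0)
    return 100.0 * (HOURS_PER_YEAR - downtime) / HOURS_PER_YEAR

# Hypothetical numbers: 4 hours of unplanned outages and 48 hours of
# planned maintenance in a year.
print(f"{availability(4, 48):.3f}%")                         # counting maintenance
print(f"{availability(4, 48, count_scheduled=False):.3f}%")  # unplanned only
```

With these made-up figures, counting maintenance yields about 99.406% availability, while counting only unplanned outages yields about 99.954%--a gap that shows why well-managed scheduled downtime is treated differently from calamities.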

How Much Availability

The first question Bob had to answer was "How much availability do we need?" Bob had initially assumed that high availability meant no downtime at all. Not so! HA is a means of managing scheduled downtime and minimizing unscheduled interruptions--a system of management that uses devices and software to increase control over both kinds of downtime.

However, by themselves, HA systems do not achieve the Holy Grail of "continuous availability." Continuous availability is a management tactic that aims to reduce or eliminate both unscheduled downtime and scheduled maintenance. Continuous availability will use the services provided by an HA management system, but it will--depending upon the complexity of the systems involved--require a lot of redundant services and devices. For a mid-sized organization, these costs can be prohibitive.

Needless to say, from the perspective of Bob's company, continuous availability--guaranteeing that the information system is always up, continuously, without interruption--was overkill. Yet Bob learned that if an HA system is properly established, configured, and maintained, the Holy Grail of continuous availability becomes less important to management because the frequency of scheduled downtime can be well planned. Bob was able to show his management graduated definitions of availability, which allowed them to identify more precisely what they required. These were the definitions:

  • Base Availability: Base availability systems are ready for immediate use, but they will experience both planned and unplanned outages.
  • High Availability: High availability systems include technologies that sharply reduce the number and duration of unplanned outages. Planned outages still occur, but the servers include facilities that reduce their impact.
  • Continuous Operations: Continuous operations environments use special technologies to ensure that there are no planned outages for upgrades, backups, or other maintenance activities. Frequently, companies use HA servers in these environments to reduce unplanned outages.
  • Continuous Availability: Continuous availability environments go a step further to ensure that there are no planned or unplanned outages. To achieve this level of availability, companies must use dual servers or clusters of redundant servers in which one server automatically takes over if another server goes down.
  • Disaster Tolerance: Disaster tolerance environments require remote systems to take over in the event of a site outage. The distance between systems is very important to ensure that no single catastrophic event affects both sites. However, the price of distance is a loss of performance due to the latency of signals traveling between the sites.
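These graduated tiers are often quantified in the industry as "nines" of availability. As a rough illustration (the mapping from tiers to specific percentages is not part of IBM's definitions above), the sketch below converts an availability percentage into the total downtime budget it permits per year:

```python
# Illustrative: convert an availability percentage into the yearly
# downtime budget it allows.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525600

def downtime_budget_minutes(availability_pct):
    """Minutes of total downtime per year permitted at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100.0)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% -> {downtime_budget_minutes(pct):.1f} min/year")
```

Two nines (99%) allow roughly 87 hours of downtime a year, while five nines (99.999%) allow barely five minutes--a spread that helps explain why each step up the list above demands progressively more redundancy and expense.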

Overcoming Expectations: If It Ain't Broke...

One of the most frightening aspects for Bob in his new role as IS manager was the realization that he had a lot of systems that were marginally stable, but not a lot of expertise available to fix them if they broke. This predicament had led his management in the past to embrace the adage "If it ain't broke, don't fix it!" But, Bob wondered, how do you engineer an HA solution if some components of the system are inherently unstable? It was clear that Bob's management needed some re-education about which information systems were critical infrastructural pieces and which pieces needed specific reinvestment in order to be sustainable in an HA environment.

Bob interviewed each member of his management team to build a critical path of information flow so that he could identify where the focus of the company's HA system should lie. This was not as easy as it might sound at the outset: Some of the most important systems that were critical pieces of the company's information flow resided not on the iSeries, but on departmental servers or individual workstations. And although he had a natural desire to centralize these critical pieces of the puzzle, the scope of that kind of centralization project was much more ambitious than simply stabilizing the central system for HA.

Re-assessing the Role of the iSeries

Furthermore, Bob learned that in many cases his management did not understand how intricately enmeshed the various individual systems had become, with many smaller systems providing key information components to the larger system as a whole. This was further complicated by a number of PC-based legacy applications and by out-of-date and out-of-support interfaces to the supply chain systems that connected his organization to other organizations.

Even more insidious for Bob was the task of overcoming his management's belief that the iSeries was itself a legacy system. How could this be? Because the iSeries had never suffered a catastrophic meltdown--as some of the other servers in the organization had--it had received relatively little attention in management meetings. As management personnel turned over, the management group had literally forgotten how critical the iSeries was to the overall availability of its information systems. Bob's task, as a new IS manager, was in part to re-educate his own management about why the iSeries was originally chosen and why it was still the best solution for their environment. Bob succeeded through a series of brief presentations, but it was not easy, and he confided that some departmental managers only grudgingly accepted his message. Still, because management required the HA implementation, they followed his lead: The other options--converting to new servers, replacing applications, and retraining personnel--were rapidly proving to be much more expensive.

A Realistic Timeline

It was only after Bob had completed his first year's tenure as IS manager that he and his management team were truly ready to begin exploring the actual offerings that IBM and its Business Partners were selling. In hindsight, Bob admitted he might have been able to push the company harder had he been more knowledgeable about the HA capabilities of the iSeries. But, he told me, the state of the organization that he had inherited--and the previous attitudes of its management--would still have required a considerable amount of time to figure out. Reaching a consensus with the management team was not a trivial process, but it had allowed them the opportunity to realistically identify where consolidation of some services onto the iSeries would actually save the company money while leading them to an HA architecture.

No Quick Fix

At this writing, Bob's team is analyzing how the various offerings from IBM and its Business Partners best fit the company's desired HA profile. The team members are still learning and have yet to settle on a particular solution from a single vendor, but they feel they've gotten the basic research done, and they've got a good schematic of what they want to achieve. It's not the continuous availability solution that management had originally envisioned, but until the organization consolidates further, Bob's proposed designs are leading it to a more secure environment at a significantly lower cost.

"If anyone had told me it was going to take a year to reach this point, I would have balked," Bob told me. "But HA isn't just slapping in a new system. With so many systems interacting, it's engineering, it's practical knowledge, and it's a little bit of rocket science too!"

So what has Bob learned? "HA is definitely where we need to be!" he said. "The company was rapidly falling behind in implementing new projects, but not because it wasn't interested in moving forward. It was just that things had grown so complex--with so many different servers and services--that our momentum was constantly being compromised by small failures or interruptions in service. We'd spend all our time trying to fix the small cogs and forget that we had to move the whole organization forward. HA is going to help us overcome that mode of operation. It will take some time, but once we've got the basic plan implemented, we'll be able to really move the company forward toward its productivity goals."

Thomas M. Stockwell is Editor in Chief of MC Press, LLP.