Businesses evolve and grow. Technologies advance. Consequently, organizations need dynamic infrastructures that allow them to quickly react to change.
Repetition breeds complacency. As a result, after decades of reiteration, some people no longer pay as much attention as they should to the old saying, "the only constant is change." But, for better or worse, the last couple of years, which carried us over a peak and into a low valley in the economy, have made the truth of that adage abundantly clear.
Many of the changes that businesses have undergone recently have, to say the least, not been entirely positive. Despite signs of "green shoots" in the economy, still fresh in our memories are screaming headlines about massive layoffs, crashing housing markets, large business losses, financial institution failures, banks that would have failed were it not for government bailouts, and corporate bankruptcy filings.
And as the economy recovers, we will confront yet more changes. Fortunately, most of them will be for the better, but there will, no doubt, be some bumps in the road ahead.
Even in the midst of an economic downturn, some organizations achieve triumphs in spite of bleak conditions, and others plant the seeds for their future success. For instance, some companies acquire businesses with market capitalizations that are perceived to have fallen below their long-term value. Other companies, rather than retrenching, take advantage of competitors' stumbles to aggressively capture greater market share. And still other companies must find ways to cut back or gain efficiencies by improving their operations.
When times start to improve, everyone scrambles to take advantage of the emerging opportunities. Under these conditions, new market initiatives and even new lines of business may be launched as a result of increased optimism.
The upshot is that, through good times and bad, the old adage about change being constant, hackneyed though it may be, remains true. What's more, the pace of change is accelerating. And IT often finds itself at the leading edge of the effort to accommodate that rapid change, regardless of its source.
For example, from time to time, new technologies offer benefits that are too good to forgo. Significant growth in the quantity of information that businesses receive, generate, store, analyze, and report on requires new technologies and tactics for managing that information. Corporate mergers and acquisitions create a need for IT to integrate or replace systems. New business initiatives require new applications. Ubiquitous networking opens opportunities to improve efficiencies through increased automated supply-chain interaction. The list of IT transformation drivers is virtually endless.
No organization is immune to change. Therefore, the companies that achieve the greatest success are the nimble ones that can adapt to and take advantage of those changes as quickly and inexpensively as possible. Consequently, planning for a dynamic IT infrastructure that is capable of readily facilitating a highly agile enterprise should be a major objective of every IT department.
There is an important point to keep in mind as you plan for a dynamic infrastructure. Many of the significant external transformations that organizations will face down the road cannot be predicted with any great accuracy. Thus, it is not adequate to plan for only specific future states. Instead, you need an architecture that is sufficiently flexible to accommodate any business or technology requirement that might come your way.
Heterogeneous Data Bridges
One of the ways to maintain this flexibility is to incorporate versatile bridges within your IT infrastructure. These connecting pieces are, for the most part, platform-agnostic. A prime example is a heterogeneous data replicator.
In the strictest sense of the word, "replicator" is something of a misnomer for many products in this category because data replicators copy the meaning of data but not necessarily its form. For example, apart from replicators included under the covers of high availability (HA) software, which are special cases of replication designed for a specific purpose, data replicators typically accommodate differences in data types and formats between the source and target databases.
Field-type mappings aren't usually visible to users, but sophisticated data replicators also facilitate transformations that accommodate differing user requirements. For example, a ZIP or postal code on the source database may be copied to the target, but the code might also be used to populate a "region" field on the target that doesn't exist on the source. A single date column on the source database may be split into year, month, and day columns on the target. American measures may be converted into metric measures. The list of possible transformations knows few limits.
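Transformations like these can be pictured as a simple mapping function applied to each replicated row. The sketch below is purely illustrative: the field names, the region lookup table, and the pound-to-kilogram conversion are hypothetical examples, not features of any particular replication product.

```python
# Illustrative sketch of replicator-style field transformations.
# The region mapping and all field names are hypothetical.

ZIP_TO_REGION = {"0": "Northeast", "1": "Northeast", "9": "West"}

def transform_row(source_row):
    """Map a source row to the target schema, deriving new fields."""
    target = {}
    # Straight copy of the postal code.
    target["zip"] = source_row["zip"]
    # Derive a "region" field that exists only on the target.
    target["region"] = ZIP_TO_REGION.get(source_row["zip"][0], "Other")
    # Split a single date column into year, month, and day columns.
    year, month, day = source_row["order_date"].split("-")
    target["order_year"] = int(year)
    target["order_month"] = int(month)
    target["order_day"] = int(day)
    # Convert an American measure (pounds) to metric (kilograms).
    target["weight_kg"] = round(source_row["weight_lb"] * 0.45359237, 3)
    return target

row = {"zip": "10001", "order_date": "2009-06-15", "weight_lb": 10.0}
print(transform_row(row))
```

A real replicator would apply such a mapping continuously as changes flow from source to target, rather than as a one-off function call.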
The word "heterogeneous" in "heterogeneous data replicator" refers to another important capability of these tools. The source and target systems can run on different hardware and operating systems and use different database management systems.
Heterogeneous data replicators support enterprise flexibility by allowing applications to be integrated at the data level, without concern for the system platforms and without the need to code complex interfaces. The result is that, for example, after a corporate merger, the IT department can integrate the predecessor systems—or replace one or both of them—at a pace of its choice. In the meantime, the old systems can be run in parallel and share data transparently using the data replicator.
In addition, when a new business requirement arises, the company can choose a best-of-breed application to fulfill that requirement. A heterogeneous data replicator can then integrate the new application with other enterprise applications at the data level, even when the various applications run on disparate computing platforms.
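Conceptually, data-level integration works by capturing changes on one system and applying them, suitably transformed, on another. The sketch below assumes a hypothetical change-record format and uses plain dictionaries to stand in for the source and target databases; real replicators capture changes from database journals or logs.

```python
# Minimal sketch of data-level integration via change capture and apply.
# The change-record format and the dict-based "databases" are
# hypothetical stand-ins for real source and target systems.

def apply_change(change, target_db, field_map):
    """Apply one captured source change to the target, renaming fields."""
    key = change["key"]
    if change["op"] == "DELETE":
        target_db.pop(key, None)
    else:  # INSERT or UPDATE
        target_db[key] = {field_map.get(k, k): v
                          for k, v in change["row"].items()}

# Source application uses "cust_nm"; the target application expects "name".
field_map = {"cust_nm": "name"}
target_db = {}
changes = [
    {"op": "INSERT", "key": 1, "row": {"cust_nm": "Acme"}},
    {"op": "UPDATE", "key": 1, "row": {"cust_nm": "Acme Corp"}},
    {"op": "INSERT", "key": 2, "row": {"cust_nm": "Globex"}},
    {"op": "DELETE", "key": 2},
]
for change in changes:
    apply_change(change, target_db, field_map)

print(target_db)   # {1: {'name': 'Acme Corp'}}
```

Because the apply step works purely on data, neither system needs to know anything about the other's platform or application code, which is the essence of the flexibility described above.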
Hardware Upgrades Without Downtime
The Capacity on Demand offerings on IBM Power Systems provide affordable scalability by allowing organizations to activate idle processors and memory resources either temporarily to accommodate spikes in system demand or permanently to accommodate ongoing business activity growth. Using Capacity on Demand, you pay for additional processors and memory resources only when you need them.
Capacity on Demand can defer the need for new hardware, but business evolution and growth, combined with technology advances too beneficial to pass up, will eventually leave you with little choice but to upgrade your physical servers. When this happens, the downtime required to complete the upgrade can be exceptionally costly, particularly for organizations that support 24x7 operations.
Data replication offers a way to avoid most of this downtime. The new hardware can be brought in before the old hardware is removed. The replicator can then copy all of the application and system data and objects from the old server to the new one. The IT department can then take as long as necessary to ensure that the new system is set up and configured properly, while the replicator keeps it fully synchronized with the old system until the switchover is complete.
Once the new hardware is in place and fully tested, the only downtime users will experience during the upgrade is the time it takes to switch from the old system to the new one, which may be as little as a few minutes, or even just seconds, depending on the environment and the software involved.
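The switchover itself is typically gated on the replicator having drained its backlog of unapplied changes. Here is a minimal sketch of such a gate, assuming a replicator that can report its number of pending changes; the lag figures below are simulated.

```python
# Sketch of a switchover gate: redirect users to the new server only
# once the replicator's backlog of unapplied changes has drained.
# The lag numbers are simulated; a real replicator would report them.

def ready_to_switch(pending_changes, max_pending=0):
    """True when the target is caught up enough to cut over."""
    return pending_changes <= max_pending

# Simulated lag readings as the replicator drains its queue.
lag_samples = [5000, 1200, 130, 4, 0]
for pending in lag_samples:
    if ready_to_switch(pending):
        print("Quiesce applications and redirect users to the new server.")
        break
    print(f"Waiting: {pending} changes still queued.")
```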
When upgrading, the old and new servers typically run on the same platform, although possibly using different versions of the operating system. Consequently, the data replicator used to support the upgrade does not have to support heterogeneous replication. Thus, because HA software includes homogeneous data replication as an inherent component, companies that already have HA software may be able to use it to keep the old and new systems synchronized during the upgrade.
Software Upgrades Without Downtime
When upgrading the operating system, the database management system, or an application, an organization doesn't necessarily have a second system available. Nonetheless, it is still possible to complete these sorts of upgrades with little or no downtime thanks to the partitioning capabilities of IBM Power Systems.
Each partition acts as a virtual server. Consequently, HA software or standalone replicators that can replicate between independent servers can also replicate between partitions.
When IT upgrades system or application software, the upgrade can be installed in one partition, while the old software continues to run as normal in another partition. While the new version is being installed, a replicator can keep the data in the two partitions synchronized. Then, when the new software is fully implemented and tested, users can be switched to the partition containing the upgraded software.
Database Reorganizations Without Downtime
Database reorganizations are the bane of many IT departments. Records deleted from a file are only logically deleted. They are physically deleted only when the database is reorganized. Thus, even if an organization's information content did not expand over time, its storage requirements would grow nonetheless.
But, of course, information content does grow. In many organizations, the combination of general business expansion and the growth in the types of data collected results in explosive increases in the volume of stored data. Regular database reorganizations are necessary to keep this mushrooming of storage requirements in check.
Some people suggest that because the per-terabyte cost of storage has dropped dramatically over the years, storage costs are less of a concern than they once were. There is some truth in this—although the declining per-terabyte cost is at least partly offset, and sometimes overwhelmed, by the exploding growth in data volumes—but the cost of storage devices is not the only concern.
Logically deleted records still exist as far as the physical storage device is concerned. When a query is issued against a database, all of the physical records, whether logically deleted or not, are brought into the buffers. The deleted records are then filtered out.
Processing these logically deleted records consumes both disk I/O and processor resources. Thus, as the proportion of logically deleted records in a database increases, application response times lengthen, possibly to an unacceptable level.
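This I/O penalty can be approximated with simple arithmetic: if a fraction d of a file's records are logically deleted, a full scan must read roughly 1/(1-d) times as many physical records as it would against a freshly reorganized file. A quick sketch of that relationship:

```python
# Back-of-the-envelope cost of logically deleted records: a scan must
# read every physical record, deleted or not, so the extra I/O grows
# with the deleted fraction.

def scan_overhead(deleted_fraction):
    """Extra I/O, relative to a freshly reorganized file, for a full scan."""
    return 1.0 / (1.0 - deleted_fraction) - 1.0

for d in (0.10, 0.25, 0.50):
    print(f"{d:.0%} deleted -> {scan_overhead(d):.0%} more I/O per scan")
```

By this estimate, a file that is half deleted records costs twice the I/O of a compacted one, which is why response times can degrade so noticeably between reorganizations.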
The optimal frequency of database reorganizations depends on both the frequency of record deletions and the cost of doing reorganizations. This latter factor leads many companies to defer database reorganizations far longer than would otherwise be advisable. Yet the resulting database atrophy can threaten the agility of the organization.
The primary cost of file reorganizations is the downtime traditionally required to perform them. In the past, it was necessary to shut down applications while the databases they used were being reorganized. As organizations moved toward around-the-clock operations to take advantage of the opportunities afforded by globalization and the Internet, this downtime became that much more costly.
Fortunately, new tools that have been introduced into the market over the past few years make it possible to reorganize databases with only minimal downtime.
There are two generic ways to reorganize databases while applications remain active. Some vendors offer both methods as options within a single product. The mirrored-file method copies the file to be reorganized into a new library and reorganizes it as it is being copied. The copy is kept in sync with production changes until it can replace the production file. In contrast, the in-place, reorganize-while-active method reclaims space occupied by all deleted records, without the need to copy or synchronize files. And, unlike traditional reorganization functions, this newer reorganization technology can be performed with minimal impact on production operations.
Obviously, in-place reorganization requires less storage space than the mirrored file method, but it is not ideal in some circumstances. The mirrored-file method is typically used if triggers execute actions when records are added or deleted, referential integrity constraints are defined, journaling is used for data warehousing purposes, or you want to reorganize all members in a file at the same time.
For both the in-place and mirrored-file methods, the tool needs only a brief period of exclusive file use. This very short period—often just a few seconds—when applications will not be able to access the files being reorganized does not have to coincide with or immediately follow any of the other reorganization processes. Instead, it can be deferred to a time when it will have the least impact on the organization.
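The end result of the two methods can be illustrated on a toy "file" of records interleaved with tombstones. Real reorganize-while-active tools work at the storage level and keep applications running throughout; this sketch shows only the space-reclamation outcome of each approach.

```python
# Toy illustration of the two reorganization approaches: live records
# interleaved with logically deleted ones (tombstones). Not how real
# tools are implemented, just the end result of each method.

DELETED = object()  # tombstone marker for a logically deleted record

def reorg_in_place(records):
    """Compact the file in place, reclaiming space from deleted records."""
    write = 0
    for read in range(len(records)):
        if records[read] is not DELETED:
            records[write] = records[read]
            write += 1
    del records[write:]   # physically drop the reclaimed slots
    return records

def reorg_mirrored(records):
    """Copy only live records into a new file, leaving the original intact."""
    return [r for r in records if r is not DELETED]

f = ["r1", DELETED, "r2", DELETED, DELETED, "r3"]
print(reorg_mirrored(f))   # ['r1', 'r2', 'r3']
reorg_in_place(f)
print(f)                   # ['r1', 'r2', 'r3']
```

Note how the mirrored method needs space for a second copy while the in-place method does not, which mirrors the storage trade-off described above.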
Dynamic Implies Resilient
Whether facing unplanned events, such as disasters and hardware failures, or planned events, such as scheduled maintenance, the organization has to be able to keep functioning. Consequently, a dynamic infrastructure must be one that can keep the business going no matter what the world throws at it.
HA software facilitates a resilient IT infrastructure that provides a high level of protection against downtime, both planned and unplanned. How much protection it offers depends on the HA topology.
HA software maintains up-to-date replicas of production servers. These replicas can be located in the same room, on opposite sides of the globe, or even in different partitions on the same system.
If the remote backup server is located far enough from the production server that a single disaster will almost certainly not affect both, this topology offers business resiliency in virtually any circumstance. Even if a disaster destroys the primary data center, operations can continue virtually uninterrupted. In addition, the remote replica server is also available to keep the business running through planned maintenance events such as the upgrades and migrations described above.
A replica server can also help to make the IT infrastructure more scalable and versatile. For example, because the backup server contains a complete, current copy of all data, nightly backup tapes can be created there rather than on the production server. This removes the processing load from the production machine and eliminates the downtime that is often required when creating backup tapes.
It's not just backup jobs that can be moved to the replica server. Read-only functions, such as queries and batch reporting, can also be run there, thereby removing their processing and disk I/O load from the production systems.
Systems used solely as backup servers are often considerably underutilized until they are called upon to take on the production role, but this doesn't have to be so. You can use Power Systems partitioning to run multiple virtual servers on each of the two physical systems. That way, some of the partitions on each system can run production servers, while others back up the production servers running on the other machine.
With this design, each of the two systems can be less powerful than would otherwise be required. When either system shuts down or must be taken offline for maintenance, the organization may not be able to run at full capacity on the single remaining system, but this drawback may be outweighed by lower hardware costs.
The above are merely examples of the technologies and tactics that can serve to make an IT infrastructure and, in turn, the organization more dynamic. The gamut of options is too large to describe in full here.
The point is, when you design or redesign your IT infrastructure, as you look at each component of that infrastructure, ask yourself, "Will this design and the technologies we use to implement it allow us to continue to operate with optimal efficiency and effectiveness if tomorrow looks considerably different from today?" If the answer is "no," it is usually a good idea to search for ways to change that answer to "yes."