Automate System i Management and Optimization

System Administration
Many System i shops unduly defer critical optimization functions because the tasks are complex and time-consuming. Automation can help.

 

Back in the early days of what was then called AS/400, when people said they had a 10 gigabyte machine, they were referring to DASD. Today, they are referring to system memory. The point is that this is not your father's AS/400 (or iSeries or System i), and the changes that have occurred have serious implications for system management and optimization.

 

As System i evolved to offer significantly more memory and DASD, not to mention radically faster processors in multi-way configurations, business applications grew to take advantage of the expanded resources. System i has always been a platform for robust, straightforward, and totally reliable ERP systems, but now it is assigned a much more varied workload, with massively increased transaction volumes generated by users who may be located anywhere in the world and who expect instant response.

 

Not only has the number of business applications run on System i grown, but their complexity has mushroomed as well. System i must now accommodate multiple technologies and languages, varied access methods, and some methodologies that are not always sympathetic to the platform.

 

System i and its predecessors have, correctly, been sold as easy to implement, manage, and use. Because they bought System i to achieve lower total cost of ownership, many companies, particularly small and medium-sized firms, have a limited number of people managing System i. In fact, many small shops assign all of the System i technical responsibilities to one person. What's more, that person may also perform tasks that would be done by administrative staff in a larger company.

 

The upshot is that the skills required just to harvest the information needed to identify System i issues and problems, let alone to address them, are typically scarce. Consequently, System i monitoring, analysis, and optimization tasks may be perpetually shunted to the bottom of the priorities list. In the past, as DASD filled up and processors became overburdened, the only alternative to hiring more staff to perform optimization tasks was to spend more money to add DASD and more or faster processors.

 

DASD is now inexpensive, so, rather than cleaning it up, why not just buy more? The problem is that as the volume of obsolete data grows, the bloat causes other problems. Applications slow down as they wade through data that is no longer relevant. In addition, it takes much longer to back up and recover, say, 500 gigabytes than 250 gigabytes.

Maximizing the Benefits

The value of optimization and tighter management of System i is clear, but where should you begin? Start with the tasks that deliver the most impact with the least effort. This article examines the following five areas, which typically provide the greatest benefits:

• Compression
• Physical file reorganization
• QSYS and IFS object clean-up
• Logical file optimization
• Data, CPU, and I/O usage monitoring

Compression

Compressing objects such as programs that are no longer used or that are used infrequently can release around 60 percent of their uncompressed space. In addition to storage savings, there are also benefits during backup and restore operations. Most backup routines let the operating system compress programs before writing them to tape. When recovering from a disaster, programs that were uncompressed before being backed up are decompressed automatically as they are loaded onto disk, but programs that were already compressed skip that step. This may save only, say, half a second per program, but if you compressed just 3,600 programs, system recovery times will be reduced by half an hour. In companies where downtime costs can run to hundreds of thousands or even millions of dollars per hour, that half-hour represents significant value.
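The arithmetic behind that half-hour estimate is worth making explicit; a quick sketch, where the per-program decompression time is the article's illustrative figure, not a measured value:

```python
# Recovery-time savings from pre-compressed programs.
# Assumption: each program that skips decompression on restore
# saves roughly half a second (the article's example figure).
seconds_saved_per_program = 0.5
programs_compressed = 3_600

total_saved_seconds = programs_compressed * seconds_saved_per_program
print(total_saved_seconds / 60)  # minutes shaved off recovery: 30.0
```

With the 35,000 to 55,000 programs typical of real machines, the same arithmetic scales the savings into multiple hours.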

 

The example of 3,600 programs is likely an understatement. On average, companies have 35,000 to 55,000 programs on their System i machines, a large portion of which are rarely or never used. Hence, your storage and recovery time savings will likely be considerably higher.

 

The most difficult part of compressing programs is determining which ones are not being used and can, therefore, be safely compressed. An analysis tool, as discussed below, can help with that.

Physical File Reorganization

Deleted records are only logically removed; they continue to occupy space until you reorganize the file. In addition, because "deleted" records are brought into the I/O buffers and then filtered out during read operations, they slow down applications. Furthermore, deleted records are copied onto backup tapes and then reloaded onto disk should you need to perform a restore. Thus, by regularly reorganizing files, you can reclaim considerable storage space, as well as improve the performance of business applications and backup and restore operations.

 

Many organizations don't perform reorganizations as often as they should because it takes a long time to reorganize the exceptionally large files that are typical today. In the past, all applications accessing the file had to be shut down until the reorganization process completed. That is no longer true. System i includes reorganize-while-active capabilities. Nonetheless, because such reorganizations strain resources and maintain lengthy record locks, they must be restricted to periods when the other demands on the system are light. Yet, as they often take longer than the available maintenance windows, many companies still defer file reorganizations longer than is prudent. Fortunately, third-party reorganization solutions can overcome these impediments.

QSYS and IFS Object Clean-Up

A typical System i machine stores more than half a million objects. Many of them are never used. Deleting obsolete objects, possibly archiving them first, frees up considerable DASD.

 

In addition, obsolete objects will be included in data backups and restores, slowing down those processes. And if you use a high availability (HA) product, the obsolete objects will be replicated to the backup server, consuming space there as well. Once you delete the obsolete objects on your primary system, the HA replication process will automatically delete them from the backup server.

 

Finding obsolete objects is a little more complex for IFS objects than for QSYS objects because the Last Usage date is immediately updated when you use Navigator to view object properties, making it appear as if the object was recently used and, therefore, not obsolete. Specialized routines included in advanced third-party optimization products can overcome this problem. These products display objects that are truly unused and provide a procedure to archive them.

Logical File Optimization

Because of the indexes used to create views, logical files can be exceptionally large, and their use can consume considerable CPU cycles. Logical file optimization is, therefore, critical, but the necessary tasks can be very complex. First, you need an advanced analysis tool to spot problem areas. Even after you've identified the issues, unless there is an adequate downtime window available, you may have to switch users to a backup system while you perform the optimization tasks. Because logical file optimization involves index key sharing, you must then prompt the operating system to share keys where possible. It also involves managing access path maintenance (*IMMED or *REBLD), so, to capture the gains safely, you must also put an active monitor on index usage in place to reverse any access path maintenance changes in the unlikely event that users again need those access paths regularly.

 

The value to be derived from logical file optimization can be very high. Nonetheless, as with all things, the greatest benefits sometimes come only with much work, but an optimization tool that automates this process can dramatically reduce the complexity and workload.

Data, CPU, and I/O Usage Monitoring

It is an almost immutable law that processor and storage use will increase over time due to at least three factors: increases in business volumes, increases in the variety of data retained, and increases in the number and complexity of the business functions that are automated. To ensure that the IT infrastructure is sufficient to handle this growth and provide adequate application performance, you must monitor these trends. When doing so, it is important that you scrutinize the trees, not just the forest. In other words, you must have tools that allow you to isolate potential bottlenecks and deal with them before they become critical.

 

Monitoring data growth is not as easy as it may seem. Doing so using operating system commands alone involves the DSPOBJD *ALL *ALL and DSPFD *ALL *ALL commands. In addition, separate commands are required to provide visibility into the IFS, which is, typically, one enormous file. Plus, you'll need a host of custom queries to monitor week-to-week object growth. Trying to do all of this without a good set of tools is daunting.
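The week-to-week comparison those custom queries perform can be sketched in a few lines. The object names and sizes below are invented for illustration; on a real system the snapshots would come from DSPOBJD and DSPFD output written to outfiles:

```python
# Compare two weekly snapshots of object sizes (in MB) and report
# the fastest-growing objects. The snapshot data is hypothetical;
# in practice it would be loaded from DSPOBJD/DSPFD outfiles.
last_week = {"ORDERS": 1_200, "INVOICES": 800, "AUDITLOG": 300}
this_week = {"ORDERS": 1_250, "INVOICES": 805, "AUDITLOG": 600}

growth = {
    name: this_week[name] - last_week.get(name, 0)
    for name in this_week
}

# Sort by growth, largest first, to surface emerging bloat early.
for name, delta in sorted(growth.items(), key=lambda kv: -kv[1]):
    print(f"{name}: +{delta} MB")
```

Examining the deltas rather than the absolute sizes is what lets you "scrutinize the trees": a modest object growing quickly is often a bigger concern than a large one that is stable.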

 

It is also important to keep an eye on program-specific CPU usage. Often, you'll find that some of the least important applications consume the greatest volume of CPU and memory. If you know about them, you may be able to schedule these non-critical jobs during times when CPU demands are lower or restrict them by, for example, pool size.

 

Even if your System i machine has no DASD constraints or CPU bottlenecks, performance of some applications can be curtailed by I/O bandwidth limits. It is, therefore, important to be able to determine which tasks are consuming the most I/O bandwidth and, if possible, adjust their scheduling to avoid bottlenecks.

Automation

System i optimization and management tools fall into one of three categories: monitors, analyzers, and optimizers, with the majority occupying the most basic of the three categories, monitors. The use of the word "basic" is not meant to imply that these products offer little value. On the contrary, monitors deliver value by providing useful information about what is going on inside your System i.

 

Obviously, monitors don't create data out of thin air. The data already exists, and you can access it using standard operating system commands, but doing so is a cumbersome process and requires knowledge that isn't widespread.

 

The gathering of this information may be the first step, but it is only by analyzing the data that you can begin to identify areas where optimization is possible and valuable. Again, trying to do that manually is difficult and time-consuming. Analyzer products can automate most of the analysis process, eliminating much of the labor component.

 

For example, an analysis tool can gather data on each job's CPU usage over a period, allowing you to examine and drill down into the data. In doing so, you might be able to spot jobs that are consuming inordinate resources. Often, addressing just one offending job can release an enormous volume of resources.

 

Another example of functionality that an analysis product can provide is to gather information about all objects, strip out, say, just the save files, sort them by size, and check the last usage dates to highlight ones that are likely no longer required. After looking at the data in this way, many companies find exceptionally large save files that can be removed with no impact on operations.
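That filter-and-sort step can be sketched as follows, over an invented object inventory. The field layout, names, sizes, and dates are all hypothetical; a real analyzer would pull object type, size, and last-used date from the system:

```python
from datetime import date

# Hypothetical object inventory: (name, type, size in MB, last used).
objects = [
    ("WKLYSAV", "*SAVF", 9_500, date(2006, 1, 15)),
    ("ORDERS",  "*FILE", 4_000, date(2007, 9, 1)),
    ("OLDSAV",  "*SAVF", 2_200, date(2005, 6, 3)),
    ("TMPSAV",  "*SAVF",   150, date(2007, 8, 20)),
]

cutoff = date(2007, 1, 1)  # example "likely obsolete" threshold

# Keep only save files and sort by size, largest first.
save_files = sorted(
    (o for o in objects if o[1] == "*SAVF"),
    key=lambda o: -o[2],
)

# Flag the ones not used since the cutoff as removal candidates.
candidates = [o[0] for o in save_files if o[3] < cutoff]
print(candidates)  # large, stale save files worth reviewing
```

In this invented sample, the two stale save files surface immediately, with the biggest one at the top of the list.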

 

Effective analysis is important, but you will not begin to derive value until you apply the results of that analysis to optimize and better manage your System i. These optimization tasks can be at least as complex and time-consuming as the monitoring and analysis functions. Thus, products that automate optimization tasks can provide tremendous value.

 

Among other functions, a good optimizer will monitor objects, find large, obsolete ones, and then help you to remove them safely, possibly also automatically archiving them to near-line or offline storage so you can access the object again should the need arise. An advanced optimizer can also store your organization's archiving policies and then automate those policies where appropriate.

 

Logical files offer another opportunity for optimizers to provide significant value. When you consider that it typically takes two I/Os to update a physical file, it should come as no surprise that it often takes five I/Os to update a logical file. Because some physical files have five, 10, or even 20 logical files overlaid on them, the background I/O for each additional record is massive—two I/Os for the actual physical record and then, say, 50 I/Os to keep the logical files up-to-date. At first glance, one might consider those I/Os to be a price worth paying to keep the logical files up-to-date, but in most organizations, many logical files are not being used and, therefore, offer no value.
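The I/O arithmetic in that example can be laid out explicitly; the per-file counts are the article's illustrative figures, not fixed system constants:

```python
# Background I/O per record insert, using the article's example figures.
ios_physical = 2        # I/Os to write the physical record
ios_per_logical = 5     # I/Os to maintain one logical file's index
logical_files = 10      # logical files overlaid on the physical file

total_ios = ios_physical + ios_per_logical * logical_files
print(total_ios)  # 52 I/Os to add a single record
```

Every unused logical file you retire removes its five I/Os from that total on every insert, which is why pruning them pays off continuously, not just once.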

 

A good optimizing product will identify access paths that have not been opened in, for example, 180 days and change the maintenance parameter for them to *REBLD instead of *IMMED. This retains the logical file, but its access path is removed and will only be rebuilt if someone uses it. The result of this simple optimization is usually a massive reduction in the storage required to maintain logical files. At the same time, this also reduces the I/O activity required to maintain the optimized files. (Note: The 180-day threshold quoted here is only an example. Most organizations will still find a number of logical files that fall beyond the threshold even when it is set as high as two years.)
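The selection logic an optimizer applies here can be sketched in a few lines. The file names, dates, and the "current" date are invented for illustration; a real product would read access path usage statistics from the system:

```python
from datetime import date, timedelta

today = date(2007, 10, 1)        # hypothetical "current" date
threshold = timedelta(days=180)  # the example threshold from the text

# Hypothetical logical files and the date each access path was last opened.
logicals = {
    "ORDHST1": date(2007, 9, 15),
    "ORDHST2": date(2006, 11, 2),
    "CUSTIDX": date(2005, 3, 30),
}

# Access paths not opened within the threshold get *REBLD; the rest
# keep *IMMED maintenance.
maintenance = {
    name: ("*REBLD" if today - last_open > threshold else "*IMMED")
    for name, last_open in logicals.items()
}
print(maintenance)
```

Raising the threshold to, say, 730 days only changes the `timedelta`; as the article notes, most shops still find qualifying files even at that setting.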

 

Next, the optimizer will look at the remaining *IMMED logical files and determine whether they can be shared with others. Sharing is an inherent i5/OS feature, but the operating system cannot, for the most part, detect instances when logical-file sharing is appropriate. An advanced optimization product can identify these instances and force the operating system to do some further tidying up to save more space and further reduce the required I/Os.

 

An often-realized benefit of this logical file optimization is that, again, backup and recovery tasks get a tremendous boost. During a recovery operation, waiting for access path rebuilds and repairs can consume considerable precious time as you work feverishly to bring the business back online. Because there will be far fewer access paths, this one option alone might shave many hours off your current recovery times.

 

The foregoing is only a sampling of the optimization possibilities in the toolkits on the market today. Space limitations prohibit expanding on the others here, but, briefly, they include automated archiving and purging of obsolete data and objects, as well as low-impact, while-active file reorganizations that can be automatically divided into smaller subtasks scheduled to run during slow periods.

Choosing a Monitoring, Analysis, and Optimization Toolkit

Once you've determined that the types of tools described above can benefit your organization, what factors should you consider in evaluating the available products? Price will be an obvious consideration, but what matters most is ROI.

 

Comprehensiveness is an important determinant of ROI. This article had room to discuss only a few possible System i optimizations. The issues presented above are common, but there are many more. The optimization issue that is most salient in one organization may be unimportant in another. It is, therefore, important to have a tool, or a set of tools, that will report on the health of your whole System i, examining as many factors as possible.

 

The previous paragraph raises another question: should you buy a single comprehensive tool, or should you assemble a kit of various tools that, in total, will accomplish the same thing? Assuming it provides all of the necessary functionality, a single tool will be more productive. Using a collection of tools requires learning each tool separately, and you must switch between tools as you use them. In contrast, a single comprehensive product provides a common user interface with only one learning curve and no need to move back and forth between different tools.

 

It is not enough to know which optimizations need to be done. You have to actually perform them before you can begin to derive value, and some of those tasks can be very complex and time-consuming. To get the greatest value out of optimization tools, choose ones that provide a high level of automation. After all, your organization likely adopted System i because of its ease of implementation, management, and use. Optimization should enhance, not negate, those benefits.

Andy Kowalski
Andy Kowalski is senior product manager with Vision Solutions Inc. He has a bachelor's degree in computer science and over 20 years' experience with IBM midrange systems, from System/38 to System i, specializing in data resiliency, availability, and systems and database management technologies. He has worked for customers, partners, and software vendors in both Europe and North America. He has strong technical and business knowledge of the System i market space and serves as an advisor, project manager, and solution architect on many implementation projects, from SMB to enterprise. One of Andy's skills is his ability to explain complex technical topics in a practical and easy-to-understand way to any audience. Andy's role at Vision Solutions Inc. is to help define and implement product strategy for Vision's portfolio of resiliency, availability, systems optimization, and database management technologies.

 

Many System i shops unduly defer critical optimization functions because the tasks are complex and time-consuming. Automation can help.

 

Back in the early days of what was then called AS/400, when people said they had a 10 gigabyte machine, they were referring to DASD. Today, they are referring to system memory. The point is that this is not your father's AS/400 (or iSeries or System i), and the changes that have occurred have serious implications for system management and optimization.

 

As System i evolved to offer significantly more memory and DASD, not to mention radically faster processors in multi-way configurations, business applications grew to take advantage of the expanded resources. System i has always been a platform for robust, straightforward, and totally reliable ERP systems, but now it is assigned a much more varied workload, with massively increased transaction volumes generated by users who may be located anywhere in the world and who expect instant response.

 

Not only has the number of business applications run on System i grown, but their complexity has mushroomed as well. System i must now accommodate multiple technologies and languages, varied access methods, and some methodologies that are not always sympathetic to the platform.

 

System i and its predecessors have, correctly, been sold as easy to implement, manage, and use. Because they bought System i to achieve lower total cost of ownership, many companies, particularly small and medium-sized firms, have a limited number of people managing System i. In fact, many small shops assign all of the System i technical responsibilities to one person. What's more, that person may also perform tasks that would be done by administrative staff in a larger company.

 

The upshot is that the skills required just to harvest the information needed to identify System i issues and problems, let alone to address them, are typically scarce. Consequently, System i monitoring, analysis, and optimization tasks may be perpetually shunted to the bottom of the priorities list. In the past, as DASD filled up and processors became overburdened, the only alternative to hiring more staff to perform optimization tasks was to spend more money to add DASD and more or faster processors.

 

DASD is now inexpensive, so, rather than cleaning it up, why not just buy more? The problem is that as the volume of obsolete data grows, the bloat causes other problems. Applications slow down as they wade through data that is no longer relevant. In addition, it takes much longer to backup and recover, say, 500 gigabytes than 250 gigabytes.

Maximizing the Benefits

The value of optimization and tighter management of System i is clear, but where should you begin? Start with the tasks that will deliver the utmost impact, with the least effort. This article examines the following five areas that typically provide the greatest benefits:

·        Compression

·        Physical file reorganization

·        QSYS and IFS object clean-up

·        Logical file optimization

·        Data, CPU, and I/O usage monitoring

Compression

Compressing objects such as programs that are no longer used or that are used infrequently can release around 60 percent of their uncompressed space. In addition to storage savings, there are also benefits to be derived during backup and restore operations. Most back-up routines leave the operating system to compress programs before writing them to tape. When recovering from a disaster, programs that were uncompressed before being backed up are decompressed automatically as they are loaded onto disk, but programs that were compressed to start with aren't decompressed. This may save only, say, half a second per program, but if you compressed just 3,600 programs, system recovery times will be reduced by half an hour. In companies where downtime costs can run to hundreds of thousands or even millions of dollars per hour, that half-hour represents significant value.

 

The example of 3,600 programs is likely an understatement. On average, companies have 35,000 to 55,000 programs on their System i machines, a large portion of which are rarely or never used. Hence, your storage and recovery time savings will likely be considerably higher.

 

The most difficult part of compressing programs is determining which ones are not being used and can, therefore, be safely compressed. An analysis tool, as discussed below, can help with that.

Physical File Reorganization

Deleted records are logically deleted, but they continue to occupy space until you reorganize the file. In addition, because "deleted" records are brought into the I/O buffers and then filtered out during read operations, they slow down applications. Furthermore, deleted records are copied onto backup tapes. They are then reloaded onto disk should you need to perform a restore. Thus, by regularly reorganizing files, you can reclaim considerable storage space, as well as improve the performance of business applications and backup and restore operations.

 

Many organizations don't perform reorganizations as often as they should because it takes a long time to reorganize the exceptionally large files that are typical today. In the past, all applications accessing the file had to be shut down until the reorganization process completed. That is no longer true. System i includes reorganize-while-active capabilities. Nonetheless, because such reorganizations strain resources and maintain lengthy record locks, they must be restricted to periods when the other demands on the system are light. Yet, as they often take longer than the available maintenance windows, many companies still defer file reorganizations longer than is prudent. Fortunately, third-party reorganization solutions can overcome these impediments.

QSYS and IFS Object Clean-Up

A typical System i machine stores more than half a million objects. Many of them are never used. Deleting obsolete objects, possibly archiving them first, frees up considerable DASD.

 

In addition, obsolete objects will be included in data backups and restores, slowing down those processes. And if you use a high availability (HA) product, the obsolete objects will be replicated to the backup server, consuming space there as well. Once you delete the obsolete objects on your primary system, the HA replication process will automatically delete them from the backup server.

 

Finding obsolete objects is a little more complex for IFS objects than for QSYS objects because the Last Usage date is immediately updated when you use Navigator to view object properties, making it appear as if the object was recently used and, therefore, not obsolete. Specialized routines included in advanced third-party optimization products can overcome this problem. These products display objects that are truly unused and provide a procedure to archive them.

Logical File Optimization

Because of the indexes used to create views, logical files can be exceptionally large, and their use can consume considerable CPU cycles. Logical file optimization is, therefore, critical, but the necessary tasks can be very complex. First, you need an advanced analysis tool to spot problem areas. Even after you've identified the issues, unless there is an adequate downtime window available, you may have to switch users to a backup system while you perform the optimization tasks. Because logical file optimization involves index key sharing, you must then kick-start the operating system to perform sharing, where possible. It also involves managing the maintenance of the access paths—*IMMED or *REBLD—so, when taking advantage of the great gains here, you must also put in place an active monitor on index usage to reverse any access path maintenance changes in the unlikely event that they are again required by users on a regular basis.

 

The value to be derived from logical file optimization can be very high. Nonetheless, as with all things, the greatest benefits sometimes come only with much work, but an optimization tool that automates this process can dramatically reduce the complexity and workload.

Data, CPU, and I/O Usage Monitoring

It is an almost immutable law that processor and storage use will increase over time due to at least three factors: increases in business volumes, increases in the variety of data retained, and increases in the number and complexity of the business functions that are automated. To ensure that the IT infrastructure is sufficient to handle this growth and provide adequate application performance, you must monitor these trends. When doing so, it is important that you scrutinize the trees, not just the forest. In other words, you must have tools that allow you to isolate potential bottlenecks and deal with them before they become critical.

 

Monitoring data growth is not as easy as it may seem. Doing so using operating system commands alone involves the use of the DSPOBJD *ALL *ALL and DSPFD *ALL *ALL commands. In addition, separate commands are required to provide visibility into the IFS which is, typically, one enormous file. Plus, you'll need a host of custom queries to monitor week-to-week object growth. Trying to do all of this without a good set of tools is daunting.

 

It is also important to keep an eye on program-specific CPU usage. Often, you'll find that some of the least important applications consume the greatest volume of CPU and memory. If you know about them, you may be able to schedule these non-critical jobs during times when CPU demands are lower or restrict them by, for example, pool size.

 

Even if your System i machine has no DASD constraints or CPU bottlenecks, performance of some applications can be curtailed by I/O bandwidth limits. It is, therefore, important to be able to determine which tasks are consuming the most I/O bandwidth and, if possible, adjust their scheduling to avoid bottlenecks.

Automation

System i optimization and management tools fall into one of three categories: monitors, analyzers, and optimizers, with the majority occupying the most basic of the three categories, monitors. The use of the word "basic" is not meant to imply that these products offer little value. On the contrary, monitors deliver value by providing useful information about what is going on inside your System i.

 

Obviously, monitors don't create data out of thin air. The data already exists, and you can access it using standard operating system commands, but doing so is a cumbersome process and requires knowledge that isn't widespread.

 

The gathering of this information may be the first step, but it is only by analyzing the data that you can begin to identify areas where optimization is possible and valuable. Again, trying to do that manually is difficult and time-consuming. Analyzer products can automate most of the analysis process, eliminating much of the labor component.

 

For example, an analysis tool can gather data on each job's CPU usage over a period, allowing you to examine and drill down into the data. In doing so, you might be able to spot jobs that are consuming inordinate resources. Often, addressing just one offending job can release an enormous volume of resources.

 

Another example of functionality that an analysis product can provide is to gather information about all objects, strip out, say, just the save files, sort them by size, and check the last usage dates to highlight ones that are likely no longer required. After looking at the data in this way, many companies find exceptionally large save files that can be removed with no impact on operations.

 

Effective analysis is important, but you will not begin to derive value until you apply the results of that analysis to optimize and better manage your System i. These optimization tasks can be at least as complex and time-consuming as the monitoring and analysis functions. Thus, products that automate optimization tasks can provide tremendous value.

 

Among other functions, a good optimizer will monitor objects, find large, obsolete ones, and then help you to remove them safely, possibly also automatically archiving them to near-line or offline storage so you can access the object again should the need arise. An advanced optimizer can also store your organization's archiving policies and then automate those policies where appropriate.

 

Logical files offer another opportunity for optimizers to provide significant value. When you consider that it typically takes two I/Os to update a physical file, it should come as no surprise that it often takes five I/Os to update a logical file. Because some physical files have five, 10, or even 20 logical files overlaid on them, the background I/O for each additional record is massive—two I/Os for the actual physical record and then, say, 50 I/Os to keep the logical files up-to-date. On first glance, one might consider those I/Os to be a price worth paying to keep the logical files up-to-date, but in most organizations, many logical files are not being used and, therefore, offer no value.

 

A good optimizing product will identify access paths that have not been opened in, for example, 180 days and change the maintenance parameter for them to *REBLD instead of *IMMED. This retains the logical file, but its access path is removed and will only be rebuilt if someone uses it. The result of this simple optimization is usually a massive reduction in the storage required to maintain logical files. At the same time, this also reduces the I/O activity required to maintain the optimized files. (Note: The 180-day threshold quoted here is only an example. Most organizations will still find a number of logical files that fall beyond the threshold even when it is set as high as two years.)

 

Next, the optimizer will look at the remaining *IMMED logical files and determine whether they can be shared with others. Sharing is an inherent i5/OS feature, but the operating system cannot, for the most part, detect instances when logical-file sharing is appropriate. An advanced optimization product can identify these instances and force the operating system to do some further tidying up to save more space and further reduce the required I/Os.
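The detection half of this step amounts to grouping logical files by their access-path definition: logical files built over the same physical file with identical key definitions are candidates to share a single access path. A minimal sketch, with made-up file and field names:

```python
from collections import defaultdict

# Hypothetical access-path definitions: logical file -> (based-on
# physical file, key fields). Real definitions carry more attributes
# (select/omit, key order, etc.); this is deliberately simplified.
access_paths = {
    "ORDHST_L1": ("ORDHST", ("CUSNO", "ORDDAT")),
    "ORDHST_L2": ("ORDHST", ("CUSNO", "ORDDAT")),
    "ORDHST_L3": ("ORDHST", ("ITEMNO",)),
}

def sharing_groups(defs):
    """Group logical files whose identical access paths could be shared."""
    groups = defaultdict(list)
    for lf, definition in defs.items():
        groups[definition].append(lf)
    return [sorted(g) for g in groups.values() if len(g) > 1]

# L1 and L2 share a definition; L3 is keyed differently and stands alone.
print(sharing_groups(access_paths))  # [['ORDHST_L1', 'ORDHST_L2']]
```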

 

An often-realized benefit of this logical file optimization is that, again, backup and recovery tasks get a tremendous boost. During a recovery operation, waiting for access path rebuilds and repairs can consume considerable precious time as you work feverishly to bring the business back online. Because there will be many fewer access paths, this one optimization alone might shave many hours off your current recovery times.

 

The foregoing is only a sampling of the optimization possibilities offered by the toolkits on the market today. Space limitations prohibit expanding on others here, but, briefly, they include automated archiving and purging of obsolete data and objects, and low-impact, while-active file reorganizations that can be divided automatically into smaller subtasks scheduled to run during slow periods.

Choosing a Monitoring, Analysis, and Optimization Toolkit

Once you've determined that the types of tools described above can benefit your organization, what factors should you consider in evaluating the available products? Price will be an obvious consideration, but what matters most is ROI.

 

Comprehensiveness is an important determinant of ROI. This article had room to discuss only a few possible System i optimizations. The issues presented above are common, but there are many more. The optimization issue that is most salient in one organization may be unimportant in another. It is, therefore, important to have a tool, or a set of tools, that will report on the health of your whole System i, examining as many factors as possible.

 

The previous paragraph raises another question: should you buy a single comprehensive tool, or assemble a kit of separate tools that, in total, accomplish the same thing? Assuming it provides all of the necessary functionality, a single tool will be more productive. A collection of tools requires learning each one separately and switching between them as you work. In contrast, a single comprehensive product provides a common user interface with only one learning curve and no need to move back and forth between different tools.

 

It is not enough to know which optimizations need to be done. You have to actually perform them before you can begin to derive value, and some of those tasks can be very complex and time-consuming. To get the greatest value out of optimization tools, choose ones that provide a high level of automation. After all, your organization likely adopted System i because of its ease of implementation, management, and use. Optimization should enhance, not negate, those benefits.

Andy Kowalski
Andy Kowalski is senior product manager with Vision Solutions Inc. He has a bachelor's degree in computer science and over 20 years' experience in IBM midrange systems, from System/38 to System i, specializing in data resiliency, availability, and systems and database management technologies. He has worked for customers, partners, and software vendors in both Europe and North America. He has a strong technical and business knowledge of the System i market space and serves as an advisor, project manager, and solution architect on many implementation projects, from SMB to enterprise. One of Andy's skills is his ability to explain complex technical topics in a practical and easy-to-understand way to any audience. Andy's role at Vision Solutions Inc. is to help define and implement product strategy for Vision's portfolio of resiliency, availability, systems optimization, and database management technologies. Contact Andy at akowalski@visionsolutions.com.

 
