
Automate System i Management and Optimization

Many System i shops unduly defer critical optimization functions because the tasks are complex and time-consuming. Automation can help.


Back in the early days of what was then called AS/400, when people said they had a 10 gigabyte machine, they were referring to DASD. Today, they are referring to system memory. The point is that this is not your father's AS/400 (or iSeries or System i), and the changes that have occurred have serious implications for system management and optimization.


As System i evolved to offer significantly more memory and DASD, not to mention radically faster processors in multi-way configurations, business applications grew to take advantage of the expanded resources. System i has always been a platform for robust, straightforward, and totally reliable ERP systems, but now it is assigned a much more varied workload, with massively increased transaction volumes generated by users who may be located anywhere in the world and who expect instant response.


Not only has the number of business applications run on System i grown, but their complexity has mushroomed as well. System i must now accommodate multiple technologies and languages, varied access methods, and some methodologies that are not always sympathetic to the platform.


System i and its predecessors have, correctly, been sold as easy to implement, manage, and use. Because they bought System i to achieve lower total cost of ownership, many companies, particularly small and medium-sized firms, have a limited number of people managing System i. In fact, many small shops assign all of the System i technical responsibilities to one person. What's more, that person may also perform tasks that would be done by administrative staff in a larger company.


The upshot is that the skills required just to harvest the information needed to identify System i issues and problems, let alone to address them, are typically scarce. Consequently, System i monitoring, analysis, and optimization tasks may be perpetually shunted to the bottom of the priorities list. In the past, as DASD filled up and processors became overburdened, the only alternative to hiring more staff to perform optimization tasks was to spend more money to add DASD and more or faster processors.


DASD is now inexpensive, so, rather than cleaning it up, why not just buy more? The problem is that as the volume of obsolete data grows, the bloat causes other problems. Applications slow down as they wade through data that is no longer relevant. In addition, it takes much longer to back up and recover, say, 500 gigabytes than 250 gigabytes.

Maximizing the Benefits

The value of optimization and tighter management of System i is clear, but where should you begin? Start with the tasks that deliver the greatest impact for the least effort. This article examines five areas that typically provide the greatest benefits:

  • Compression
  • Physical file reorganization
  • QSYS and IFS object clean-up
  • Logical file optimization
  • Data, CPU, and I/O usage monitoring

Compression

Compressing objects such as programs that are no longer used, or that are used only infrequently, can release around 60 percent of their uncompressed space. In addition to the storage savings, there are benefits to be derived during backup and restore operations. Most backup routines let the operating system compress programs before writing them to tape. When recovering from a disaster, programs that were stored uncompressed before being backed up must be decompressed as they are loaded back onto disk, but programs that were already compressed are restored as they are. This may save only, say, half a second per program, but if you compressed just 3,600 programs, recovery time drops by half an hour. In companies where downtime costs can run to hundreds of thousands or even millions of dollars per hour, that half-hour represents significant value.

The example of 3,600 programs is likely an understatement. On average, companies have 35,000 to 55,000 programs on their System i machines, a large portion of which are rarely or never used. Hence, your storage and recovery time savings will likely be considerably higher.


The most difficult part of compressing programs is determining which ones are not being used and can, therefore, be safely compressed. An analysis tool, as discussed below, can help with that.
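
If you want to experiment without a tool, a rough sketch of the manual process using standard CL commands follows. The library, program, and outfile names are placeholders, and the command spellings are from memory, so verify them against your release before relying on them.

    /* Dump object descriptions, including last-used dates, to an
       outfile you can review or query                             */
    DSPOBJD OBJ(MYLIB/*ALL) OBJTYPE(*PGM) +
            OUTPUT(*OUTFILE) OUTFILE(QTEMP/PGMUSAGE)

    /* After confirming a program is unused, compress it */
    CPROBJ OBJ(MYLIB/OLDPGM) OBJTYPE(*PGM)

    /* The system decompresses a compressed program automatically on
       first use, or you can decompress it explicitly               */
    DCPOBJ OBJ(MYLIB/OLDPGM) OBJTYPE(*PGM)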

Physical File Reorganization

Deleted records are only logically deleted; they continue to occupy space until you reorganize the file. In addition, because "deleted" records are brought into the I/O buffers and then filtered out during read operations, they slow down applications. Furthermore, deleted records are copied onto backup tapes and are reloaded onto disk should you need to perform a restore. Thus, by regularly reorganizing files, you can reclaim considerable storage space and improve the performance of business applications and of backup and restore operations.

Many organizations don't reorganize as often as they should because reorganizing the exceptionally large files that are typical today takes a long time. In the past, all applications accessing the file had to be shut down until the reorganization completed. That is no longer true: System i includes reorganize-while-active capabilities. Nonetheless, because such reorganizations strain resources and hold record locks for long periods, they must be restricted to periods when other demands on the system are light. And because they often take longer than the available maintenance windows, many companies still defer file reorganizations longer than is prudent. Fortunately, third-party reorganization solutions can overcome these impediments.
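
A reorganization itself is a one-line command. For reference, the classic form and the while-active variant look roughly like this; the file name is a placeholder, and the while-active parameters should be checked against your release.

    /* Classic reorganization: requires exclusive use of the member */
    RGZPFM FILE(MYLIB/ORDERS)

    /* Reorganize while active: allows concurrent access and can be
       canceled and resumed, at the cost of longer elapsed time     */
    RGZPFM FILE(MYLIB/ORDERS) ALWCANCEL(*YES) LOCK(*EXCLRD)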

QSYS and IFS Object Clean-Up

A typical System i machine stores more than half a million objects. Many of them are never used. Deleting obsolete objects, possibly archiving them first, frees up considerable DASD.


In addition, obsolete objects will be included in data backups and restores, slowing down those processes. And if you use a high availability (HA) product, the obsolete objects will be replicated to the backup server, consuming space there as well. Once you delete the obsolete objects on your primary system, the HA replication process will automatically delete them from the backup server.


Finding obsolete objects is a little more complex for IFS objects than for QSYS objects because the Last Usage date is immediately updated when you use Navigator to view object properties, making it appear as if the object was recently used and, therefore, not obsolete. Specialized routines included in advanced third-party optimization products can overcome this problem. These products display objects that are truly unused and provide a procedure to archive them.
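
For QSYS objects, at least, the raw material can be gathered manually. The sketch below uses placeholder names and assumes the QADSPOBJ outfile field names as I recall them (ODUDAT is the last-used date, in CYYMMDD form; ODOBSZ is the size); because of the Navigator behavior just described, this approach does not carry over to the IFS.

    /* Describe every object on the system to an outfile */
    DSPOBJD OBJ(*ALL/*ALL) OBJTYPE(*ALL) +
            OUTPUT(*OUTFILE) OUTFILE(MYLIB/OBJUSAGE)

    /* Then, in STRSQL, list objects unused since, say, the start of
       2005 ('1050101' in CYYMMDD form), biggest first              */
    SELECT ODLBNM, ODOBNM, ODOBTP, ODOBSZ, ODUDAT
      FROM MYLIB/OBJUSAGE
      WHERE ODUDAT < '1050101'
      ORDER BY ODOBSZ DESC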

Logical File Optimization

Because of the indexes used to create views, logical files can be exceptionally large, and their use can consume considerable CPU cycles. Logical file optimization is, therefore, critical, but the necessary tasks can be very complex. First, you need an advanced analysis tool to spot problem areas. Even after you've identified the issues, unless there is an adequate downtime window available, you may have to switch users to a backup system while you perform the optimization tasks. Because logical file optimization involves index key sharing, you must then prompt the operating system to share access paths where possible. It also involves managing access path maintenance (*IMMED or *REBLD), so, to take advantage of the gains here, you must also put an active monitor on index usage to reverse any access path maintenance changes in the unlikely event that users once again require them on a regular basis.
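
The analysis side can at least be bootstrapped with DSPFD, which reports access path details, including sharing, for each logical file. The names below are placeholders.

    /* Dump access path attributes for all logical files in a library
       to an outfile for analysis                                     */
    DSPFD FILE(MYLIB/*ALL) TYPE(*ACCPTH) FILEATR(*LF) +
          OUTPUT(*OUTFILE) OUTFILE(MYLIB/ACCPTHS)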


The value to be derived from logical file optimization can be very high. As is so often the case, though, the greatest benefits come only with much work, and an optimization tool that automates the process can dramatically reduce that complexity and workload.

Data, CPU, and I/O Usage Monitoring

It is an almost immutable law that processor and storage use will increase over time due to at least three factors: increases in business volumes, increases in the variety of data retained, and increases in the number and complexity of the business functions that are automated. To ensure that the IT infrastructure is sufficient to handle this growth and provide adequate application performance, you must monitor these trends. When doing so, it is important that you scrutinize the trees, not just the forest. In other words, you must have tools that allow you to isolate potential bottlenecks and deal with them before they become critical.


Monitoring data growth is not as easy as it may seem. Doing it with operating system commands alone involves the DSPOBJD *ALL *ALL and DSPFD *ALL *ALL commands, and separate commands are still required to provide visibility into the IFS, which is, typically, one enormous file. On top of that, you'll need a host of custom queries to monitor week-to-week object growth. Trying to do all of this without a good set of tools is daunting.
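
Two standard commands that help here are RTVDSKINF and PRTDSKINF, which collect and report disk space usage. A simple weekly routine, scheduled during a quiet period because the collection can run for some time, might be:

    /* Collect disk space information into the system's disk
       information files                                      */
    RTVDSKINF

    /* Report the space consumed, summarized by library */
    PRTDSKINF RPTTYPE(*LIB)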


It is also important to keep an eye on program-specific CPU usage. Often, you'll find that some of the least important applications consume the greatest volume of CPU and memory. If you know about them, you may be able to schedule these non-critical jobs during times when CPU demands are lower or restrict them by, for example, pool size.
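
Absent a monitoring product, a quick way to see which jobs are consuming processor right now is the active-jobs display sorted by CPU, and the job scheduler can move offenders into a quieter window. The SEQ value and scheduler parameters below are from memory, and the job and program names are placeholders, so verify them on your system.

    /* Show active jobs, highest CPU consumers first */
    WRKACTJOB SEQ(*CPU)

    /* Reschedule a non-critical job into a nightly quiet window */
    ADDJOBSCDE JOB(HEAVYRPT) CMD(CALL PGM(MYLIB/HEAVYRPT)) +
               FRQ(*WEEKLY) SCDDAY(*ALL) SCDTIME(020000)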


Even if your System i machine has no DASD constraints or CPU bottlenecks, performance of some applications can be curtailed by I/O bandwidth limits. It is, therefore, important to be able to determine which tasks are consuming the most I/O bandwidth and, if possible, adjust their scheduling to avoid bottlenecks.

Automation

System i optimization and management tools fall into one of three categories: monitors, analyzers, and optimizers, with the majority occupying the most basic of the three categories, monitors. The use of the word "basic" is not meant to imply that these products offer little value. On the contrary, monitors deliver value by providing useful information about what is going on inside your System i.


Obviously, monitors don't create data out of thin air. The data already exists, and you can access it using standard operating system commands, but doing so is a cumbersome process and requires knowledge that isn't widespread.


The gathering of this information may be the first step, but it is only by analyzing the data that you can begin to identify areas where optimization is possible and valuable. Again, trying to do that manually is difficult and time-consuming. Analyzer products can automate most of the analysis process, eliminating much of the labor component.


For example, an analysis tool can gather data on each job's CPU usage over a period, allowing you to examine and drill down into the data. In doing so, you might be able to spot jobs that are consuming inordinate resources. Often, addressing just one offending job can release an enormous volume of resources.


Another example of functionality that an analysis product can provide is to gather information about all objects, strip out, say, just the save files, sort them by size, and check the last usage dates to highlight ones that are likely no longer required. After looking at the data in this way, many companies find exceptionally large save files that can be removed with no impact on operations.
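
By way of illustration, the manual version of that save-file pass might look like the following, again with placeholder names and with the QADSPOBJ field names as I recall them (ODOBAT is the object attribute, ODOBSZ the size, ODUDAT the last-used date).

    /* Describe all files on the system to an outfile */
    DSPOBJD OBJ(*ALL/*ALL) OBJTYPE(*FILE) +
            OUTPUT(*OUTFILE) OUTFILE(MYLIB/FILEINFO)

    /* Largest save files first, with their last-used dates */
    SELECT ODLBNM, ODOBNM, ODOBSZ, ODUDAT
      FROM MYLIB/FILEINFO
      WHERE ODOBAT = 'SAVF'
      ORDER BY ODOBSZ DESC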


Effective analysis is important, but you will not begin to derive value until you apply the results of that analysis to optimize and better manage your System i. These optimization tasks can be at least as complex and time-consuming as the monitoring and analysis functions. Thus, products that automate optimization tasks can provide tremendous value.


Among other functions, a good optimizer will monitor objects, find large, obsolete ones, and then help you to remove them safely, possibly also automatically archiving them to near-line or offline storage so you can access the object again should the need arise. An advanced optimizer can also store your organization's archiving policies and then automate those policies where appropriate.


Logical files offer another opportunity for optimizers to provide significant value. When you consider that it typically takes two I/Os to update a physical file, it should come as no surprise that it often takes five I/Os to update a logical file. Because some physical files have five, 10, or even 20 logical files overlaid on them, the background I/O for each additional record is massive: two I/Os for the actual physical record and then, with 10 logical files, another 50 I/Os to keep them up to date. At first glance, one might consider those I/Os a price worth paying, but in most organizations many logical files are not being used and, therefore, offer no value.

A good optimizing product will identify access paths that have not been opened in, for example, 180 days and change the maintenance parameter for them to *REBLD instead of *IMMED. This retains the logical file, but its access path is removed and will only be rebuilt if someone uses it. The result of this simple optimization is usually a massive reduction in the storage required to maintain logical files. At the same time, this also reduces the I/O activity required to maintain the optimized files. (Note: The 180-day threshold quoted here is only an example. Most organizations will still find a number of logical files that fall beyond the threshold even when it is set as high as two years.)
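
In CL terms, the change such an optimizer makes amounts to something like this, with the file name as a placeholder:

    /* Stop immediate maintenance of a long-unused access path; the
       logical file remains, but its index will only be rebuilt if
       the file is opened again                                     */
    CHGLF FILE(MYLIB/ORDHIST1) MAINT(*REBLD)

    /* If users turn out to need it regularly after all, reverse it */
    CHGLF FILE(MYLIB/ORDHIST1) MAINT(*IMMED)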


Next, the optimizer will look at the remaining *IMMED logical files and determine whether they can be shared with others. Sharing is an inherent i5/OS feature, but the operating system cannot, for the most part, detect instances when logical-file sharing is appropriate. An advanced optimization product can identify these instances and force the operating system to do some further tidying up to save more space and further reduce the required I/Os.


An often-realized benefit of this logical file optimization is that, again, backup and recovery tasks get a tremendous boost. During a recovery operation, waiting for access path rebuilds and repairs can consume precious time as you work feverishly to bring the business back online. Because there will be many fewer access paths, this one optimization alone might shave many hours off your current recovery times.

The foregoing is only a sampling of the optimization possibilities in the toolkits on the market today. Space limitations prohibit expanding on others here, but, briefly, they include automated archiving and purging of obsolete data and objects, as well as low-impact, while-active file reorganizations that can be divided automatically into smaller subtasks scheduled to run during slow periods.

Choosing a Monitoring, Analysis, and Optimization Toolkit

Once you've determined that the types of tools described above can benefit your organization, what factors should you consider in evaluating the available products? Price will be an obvious consideration, but what matters most is ROI.


Comprehensiveness is an important determinant of ROI. This article had room to discuss only a few possible System i optimizations. The issues presented above are common, but there are many more. The optimization issue that is most salient in one organization may be unimportant in another. It is, therefore, important to have a tool, or a set of tools, that will report on the health of your whole System i, examining as many factors as possible.


The previous paragraph raises another question: should you buy a single comprehensive tool, or should you assemble a kit of various tools that, in total, will accomplish the same thing? Assuming it provides all of the necessary functionality, a single tool will be more productive. Using a collection of tools requires learning each tool separately, and you must switch between tools as you work. In contrast, a single comprehensive product provides a common user interface that requires only one learning curve, with no need to move back and forth between different tools.

It is not enough to know which optimizations need to be done. You have to actually perform them before you can begin to derive value, and some of those tasks can be very complex and time-consuming. To get the greatest value out of optimization tools, choose ones that provide a high level of automation. After all, your organization likely adopted System i because of its ease of implementation, management, and use. Optimization should enhance, not negate, those benefits.

Andy Kowalski
Andy Kowalski is senior product manager with Vision Solutions Inc. He has a bachelor's degree in computer science and over 20 years of experience in IBM midrange systems, from System/38 to System i, specializing in data resiliency, availability, and systems and database management technologies. He has worked for customers, partners, and software vendors in both Europe and North America. He has strong technical and business knowledge of the System i market space and is an advisor, project manager, and solution architect on implementation projects ranging from SMB to enterprise. One of Andy's skills is his ability to explain complex technical topics in a practical and easy-to-understand way to any audience. Andy's role at Vision Solutions Inc. is to help define and implement product strategy for Vision's portfolio of resiliency, availability, systems optimization, and database management technologies.

