Storage Pool Management of the AS/400

The heart of the AS/400's work management system is in its storage pools. Sleek and elegant, they provide the memory management underpinnings for the AS/400's multitasking environment. Many people see storage pool management as an art, something to be mastered as a Shakespearean actor masters a role. Far from that lofty ideal, however, storage pool management consists of a set of easily understood concepts that even the Bard of Avon would appreciate for their simplicity and grace.

This article will introduce you to the concepts of storage pool management in OS/400. I'll discuss storage pools and activity levels and how they relate to the AS/400's work management scheme through subsystems. I'll give you all the basics you need to create your own storage pool management scheme.

Neither a Borrower Nor a Lender Be

The main task of storage pools is to segment OS/400's working memory so that each subsystem can access its own specified piece of memory. By doing this, you can reduce resource contention among different subsystems. Storage pools let you provide dedicated resources to groups of jobs, keep those resources separate, and better control job flow on your AS/400 so that more work gets done.

To view your current storage pool setup, type in the Work with Shared Pools (WRKSHRPOOL) command. This command will bring up the display shown in Figure 1.

Your system's main memory can be divided into fourteen shared storage pools. Unlike private storage pools, these shared pools are assigned to different subsystems whenever you create or change a subsystem description. Each pool can be shared among many subsystems or dedicated solely to a particular subsystem for a special purpose. For example, you can assign a pool to handle nothing but SNADS processing in the QSNADS subsystem, or you can assign different batch subsystems, like QBATCH or QPGMR, to their own storage pools.
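
To illustrate how such an assignment looks in a subsystem description, here is a minimal sketch of creating a new subsystem with *SHRPOOL2 as its first pool; MYLIB and NIGHTBCH are hypothetical names used only for this example:

CRTSBSD SBSD(MYLIB/NIGHTBCH) POOLS((1 *SHRPOOL2))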

Four of these pools are predefined for subsystem use by OS/400. The *MACHINE pool is used for all operating system functions. The *BASE pool is generally used for batch and communications jobs. The *INTERACT pool is usually assigned to interactive processing and the *SPOOL pool is automatically assigned for printer spooling. The ten other shared pools, called *SHRPOOL1 through *SHRPOOL10, can be configured and assigned to different subsystems as needed.

Measure for Measure

Associated with each storage pool is an activity level. Storage pool activity levels specify the total number of jobs that can use memory from a storage pool at one time. When more jobs try to access storage pool memory than are allowed by the activity level, the excess jobs are temporarily swapped out of memory until another job relinquishes control of an activity level slot.

Storage pool activity levels are part of a larger AS/400 work management scheme. This scheme allows you to ensure that the throughput on your machine is consistent with your specifications. In addition to activity levels, work can be managed by the total number of active AS/400 jobs, maximum number of active jobs in each subsystem, maximum number of jobs originating from each job queue, and other system, subsystem, and job values.

Many different parameters control work flow on the AS/400, more than we can cover here. For the purposes of this article, it is sufficient to remember that storage pool activity levels are part of the overall work management scheme that ensures efficient processing. For a more complete description of how subsystems work, see "Solving the Mystery of Subsystems," MC, October 1994.

I Do Perceive Here a Divided Duty

Storage pool management consists of dividing memory into different storage pools and assigning storage pools to individual subsystems. Let's look at an example to see how this works.

The default storage pool for the QBATCH and QPGMR subsystems is the *BASE storage pool. In many environments, the operations staff uses QBATCH for such diverse tasks as product costing, invoicing, and printing pick lists, while the programming staff uses QPGMR for compiles, conversions, and tests. A long-running, memory-intensive job, such as a Bill of Material regeneration, may slow down all other batch jobs. Similarly, if a programmer is running a twelve-hour database conversion program that uses a lot of memory, daily production may suffer.

The solution is to create an additional storage pool, with its own activity level, to service the programmers running in QPGMR. By doing this, production is separated from testing. The jobs will no longer compete with each other for memory, and batch processing will benefit from the split.

To Thine Own Self Be True

To implement our scheme, we need to create a separate storage pool for QPGMR. We must reallocate OS/400's main memory to divide it among the storage pools we want to use. As I mentioned before, there are ten storage pools available that can be assigned to any subsystem. These can be viewed and manipulated by entering the WRKSHRPOOL command. Suppose we want to assign *SHRPOOL1 to QPGMR. First, we have to move memory into that pool for it to use.

Let's say we want to give *SHRPOOL1 800K of memory and assign it an activity level of 1. This means that jobs running in that pool will share a maximum of 800K of memory, and only one job can be active in the pool at a time.

Memory is always given to or taken away from the *BASE pool. All unallocated memory is automatically stored there and allocated memory is taken from there. To move memory into *SHRPOOL1, move your cursor down to *SHRPOOL1 on the Work with Shared Pools screen and type in 800 under Defined Size and 1 under Max Active. Press the Enter key and you'll find the display shown in Figure 2.
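
If you prefer a command line to the WRKSHRPOOL display, the same change should be possible with the Change Shared Storage Pool (CHGSHRPOOL) command; a minimal sketch:

CHGSHRPOOL POOL(*SHRPOOL1) SIZE(800) ACTLVL(1)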

Notice that changing the memory size of the shared storage pools automatically reduces or increases the memory size of the *BASE storage pool. Increasing *SHRPOOL1 by 800K automatically decreases the *BASE pool by 800K. Conversely, if we change the memory size in *SHRPOOL1 back to 0, the memory size in the *BASE pool will automatically increase by 800K.

Therefore, when you change the memory size of the pools to match a change in processing load, memory must go back to the *BASE pool before it can be given to another pool.

What's in a Name?

Having memory in *SHRPOOL1 doesn't mean we can use it in QPGMR. We must first change the default storage pool QPGMR uses from *BASE to *SHRPOOL1. Before we change any subsystem parameters, we end the subsystem to ensure that our changes take effect, using this command:

ENDSBS SBS(QPGMR) OPTION(*IMMED)

Once the subsystem is ended, we need to retrieve the storage pool operating parameters for QPGMR. To view these parameters, use the Display Subsystem Description (DSPSBSD) command as shown here.

DSPSBSD SBSD(QPGMR)

This command gives us all the parameters associated with subsystem QPGMR, as shown in Figure 3. Type in option 2, Pool definitions, to display the storage pools assigned to the subsystem as shown in Figure 4.

This screen shows us that the first Pool ID in QPGMR is assigned to the *BASE storage pool. Any job running in QPGMR that requests memory from Pool ID number 1 will get that memory from *BASE. To take the memory from *SHRPOOL1, use the Change Subsystem Description command (CHGSBSD) to change the storage pool location, as shown here.

CHGSBSD SBSD(QPGMR) +
        POOLS((1 *SHRPOOL1))

You can use DSPSBSD to view the change. This screen shows that the first subsystem storage pool is now *SHRPOOL1, as shown in Figure 5. Restart the subsystem by using the Start Subsystem (STRSBS) command and QPGMR will take its memory from *SHRPOOL1 instead of *BASE.
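
For reference, the restart is a single command:

STRSBS SBSD(QPGMR)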

This technique can be used to change OS/400 storage pools for any subsystem. You can allocate and reallocate storage pool memory for any configuration you can think of. This freedom gives you flexibility in allocating resources to your jobs.

Double, Double Toil and Trouble

There are a few tricks to OS/400 memory management. However, these tricks have their own troubles.

Storage pool memory adjustment is a delicate process. Take away too much memory and subsystem jobs choke, endlessly thrashing data into and out of memory in a vain attempt to finish processing. Give a storage pool too much memory and jobs in that pool perform well, but memory is wasted that could be put to better use elsewhere. Care must be taken in determining storage pool allocations, or your system performance will suffer.

The mathematics of storage pool memory allocation is a complex subject, and IBM gives many good tips on it in its Work Management Guide and other publications. However, there is one tool that attempts to tune your system for optimum performance by adjusting memory automatically. The Performance Adjustment (QPFRADJ) system value can be set to make automatic adjustments to your storage pool memory allocations. When this value is set to '2' or '3', OS/400 will examine your system as it is running and change the storage pool sizes and activity levels automatically to reflect the workload on your system.
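
For example, you could switch on automatic adjustment with a single system value change; a minimal sketch using the value '2' (either '2' or '3', as the article notes, enables ongoing automatic adjustment):

CHGSYSVAL SYSVAL(QPFRADJ) VALUE('2')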

The problem with this technique is that OS/400 is always adjusting the system to process events that occurred a few minutes ago. Between the time it analyzes your performance and the time it changes your storage pool values, the profile of jobs running in your system may have changed.

If you have a large job that temporarily thrashes, OS/400 will adjust your system to give more memory to the thrashing job by taking memory from another storage pool, such as *INTERACT. OS/400 makes this adjustment regardless of whether the job is still running; if the job ends between analysis time and change time, OS/400 still changes the storage pool memory size. The next time OS/400 makes an adjustment, it corrects for this problem, but it may make a different adjustment that is no longer valid.

QPFRADJ is a good technique to use if you have a fairly steady workload that doesn't change a lot. However, if your environment is more dynamic, where memory and activity levels may need to be changed at a moment's notice, it can be a dangerous technique to use because the automatic adjustments will always be behind the times.

Another memory management technique is to create a private storage pool for a subsystem instead of using one of the predefined shared pools. Private pools are assigned to a single subsystem, and they cannot be shared with other subsystems.

Private pools take their memory directly from the *BASE storage pool. They use this memory directly for their own processing, not sharing it with any other subsystems. There is little difference between this technique and permanently assigning memory to a shared storage pool except that private pools allocate memory only when their associated subsystem is active. When the subsystem is inactive, its memory returns to *BASE and can be used for other jobs running out of *BASE. In cases where a subsystem is inactive most of the day, a private pool makes more sense as it does not permanently lock memory away from *BASE the way a shared pool does.

Private pools can be valuable in situations where different subsystems become active and shut down at different times of the day.

Going back to our example, to create a private pool to service QPGMR instead of assigning it to a shared pool, we would use this command:

CHGSBSD SBSD(QPGMR) +
        POOLS((1 800 1))

When you create a private pool in this manner, the new pool will show up on the Work with System Status (WRKSYSSTS) screen and *BASE's storage size will decrease accordingly. Your new private pool, however, will not show up on the WRKSHRPOOL screen. This is because the WRKSHRPOOL command only displays the 14 shared pools in your system. It does not show any private pools.

Parting Is Such Sweet Sorrow

We've barely scratched the surface of storage pool management. But with these basics (dividing memory and activity levels among storage pools, assigning storage pools to subsystems, creating private pools, and using the QPFRADJ system value to adjust your pools automatically), you'll be well on your way to managing your own storage pools.

And if the result isn't really Shakespeare, at least you'll have a good memory management system.

Joe Hertvik is a freelance writer and a system administrator for a manufacturing company outside of Chicago.

REFERENCE

Work Management Guide (SC41-3306, CD-ROM QBKALG00).


Figure 1: Viewing Storage Pool Allocation (graphic not available)

Figure 2: Storage Pools After Assigning 800K to *SHRPOOL1 (graphic not available)

Figure 3: Subsystem Description Parameters (graphic not available)

Figure 4: QPGMR Set to Use *BASE (graphic not available)

Figure 5: QPGMR After Being Changed to *SHRPOOL1 (graphic not available)