Change Your Batch-processing Environment

Brief: The priority of batch work is not always uniform; some batch jobs should be run at a higher priority than others. A virtual maze of parameters is involved in setting up a high-priority batch environment. This article lets you skip over the maze and leads you to the prize of improved batch throughput.

Maybe it's just a coincidence, but I've noticed that if you want to get the attention of an information systems person, tell him the payroll job is running slowly. Typically a panicky person is going to stumble through the AS/400 configuration in a vain attempt to give that job every possible CPU cycle. ("Damn the torpedoes, run it at priority 1!")

You can take several steps to set up a high-priority batch environment so no one has to go through this panic every pay period. This article walks you through the work management values that affect batch job performance. You will see exactly what you need to do to set up a high-priority batch environment.

You can improve batch job performance in a number of different ways. Figure 1 shows some of the options that affect throughput and discusses the impact of changes. To maximize the throughput of batch job processing, it is important to match the number of concurrent jobs to the available system resources (CPU and memory).

For example, running too many concurrent jobs on a small system increases the number of CPU cycles the system uses to manage jobs. Under these circumstances, running a smaller number of jobs would achieve higher total system performance. CPU cycles are better utilized for productive work rather than for managing jobs.

Processing in a Batch Subsystem

OS/400 comes with a predefined environment or subsystem for batch-job processing called QBATCH. The subsystem job you see on the Work with Active Jobs (WRKACTJOB) panel is a monitor program that checks the sources of work (job queues) and executes jobs based upon job queue and subsystem parameters.

A subsystem has a MAXJOBS attribute that specifies how many concurrent jobs the subsystem will attempt to initiate. The default value for the MAXJOBS parameter of QBATCH is *NOMAX.

Batch jobs are submitted to a subsystem through job queues. A subsystem that runs batch jobs must have one or more job queues allocated to it. The IBM-supplied QBATCH subsystem has three job queues (QBATCH, QS36EVOKE, and QTXTSRCH).

Each job queue attached to a subsystem has a sequence number. You can use that sequence number to control the order batch jobs are run in the subsystem. For example, a subsystem will select a job from a job queue with a sequence number of 10 before it selects a job from a job queue with a sequence number of 20.

At the job queue level, a couple of controls dictate how many jobs can be active concurrently. The first parameter is called MAXACT, which limits the total number of jobs that can be active from the job queue.

You can also limit the number of active jobs using the job priority on the job queue. For each job priority (1 through 9), the parameters MAXPTY1 through MAXPTY9 control that limit. (The IBM default job priority is 5.)

The subsystem looks at the first job queue based upon the job queue's sequence number as it is defined to the subsystem (Figure 2 shows the job queues attached to a subsystem). A subsystem with no active jobs accepts the first job on the queue and begins execution. The subsystem monitor looks at the MAXACT parameter for the job queue, which specifies how many jobs may be active concurrently from this job queue. QBATCH's MAXACT parameter contains a value of 1, making QBATCH a single-thread job queue.

To grasp the single-thread concept, consider a series of batch jobs that must run sequentially, never concurrently. These jobs require a single-thread job queue. In the case of the IBM-supplied QBATCH subsystem and its three associated job queues, the subsystem monitor program initiates one job from job queue QBATCH and then checks the second job queue (QS36EVOKE) allocated to the subsystem. Any jobs in that job queue are initiated since the MAXACT parameter of QS36EVOKE is *NOMAX. After initiating all jobs on the QS36EVOKE job queue, the monitor then checks the QTXTSRCH job queue for jobs to initiate. Because the MAXACT parameter for that job queue is also *NOMAX, all jobs on the queue are initiated.

Now that you have a basic understanding of how batch jobs are initiated, let's examine some practical implementations.

Changing the Job Queue Priority

Suppose you're concerned about high-priority batch work on a small system with limited CPU cycles and memory. The simplest solution involves changing the job priority parameter on the Submit Job (SBMJOB) command. You can assign a job higher priority than other batch work by specifying a lower value in the JOBPTY parameter. For instance, a value of 3 would supersede jobs with the default batch job priority of 5. This approach works fine for nonrecurring jobs.
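
For example, a submission along these lines (library, program, and job names are illustrative) would run the payroll job ahead of work submitted at the default priority of 5:

 SBMJOB CMD(CALL PGM(PAYLIB/PAY100)) +
        JOB(PAYROLL) +
        JOBPTY(3)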

If your high-priority job occurs regularly, or if you cannot change the job queue priority without changing a packaged application's programs, you could use an alternate method. Set up another single-thread job queue in a batch subsystem (probably QBASE, in a system with limited memory and CPU cycles). Let's explore the details of this second method.

First, use the Create Job Queue (CRTJOBQ) command to set up another job queue.

 CRTJOBQ JOBQ(QUSRSYS/HIBATCH) +
         OPRCTL(*YES) AUTCHK(*OWNER) +
         AUT(*USE)

Running this command creates a new job queue called HIBATCH in the library QUSRSYS (or any other library you might be using for your system overrides and changes). The *YES attribute for OPRCTL means that a user whose user profile specifies SPCAUT(*JOBCTL) can manage jobs on this job queue. Without special authority, no one except the owner can control jobs on the job queue HIBATCH. The *PUBLIC authority to HIBATCH is *USE.

Next, you add the new job queue entry to your batch subsystem. (Throughout this discussion, I use the term job queue entry to mean a job queue attached to a subsystem.) If you have a separate batch subsystem, use the End Subsystem (ENDSBS) command to end QBATCH. However, if all the job queue entries are in your controlling subsystem (probably QBASE), you must take other steps.
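
For a separate batch subsystem, ending it before making the change might look like this (a controlled end is usually gentler than an immediate one):

 ENDSBS SBS(QBATCH) OPTION(*CNTRLD)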

First, change the QCTLSBSD system value to another controlling subsystem. You can use QSYSSBSD, an IBM-defined subsystem in library QSYS. Next, you must specify RESTART(*YES) on the Power Down System (PWRDWNSYS) command. Only after you bring up the backup subsystem can you make changes to the normal controlling subsystem.
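
As a sketch, switching to the backup controlling subsystem could look like the following; verify the exact value format for QCTLSBSD (subsystem description name followed by its library) on your system before running it.

 CHGSYSVAL SYSVAL(QCTLSBSD) VALUE('QSYSSBSD  QSYS')
 PWRDWNSYS OPTION(*CNTRLD) RESTART(*YES)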

Now that the normal batch subsystem is ended, you can add the job queue entry to it with the Add Job Queue Entry (ADDJOBQE) command.

 ADDJOBQE SBSD(QBASE) +
          JOBQ(library/HIBATCH) +
          MAXACT(1) +
          SEQNBR(5)

In this ADDJOBQE command, if you are using a different batch subsystem, replace QBASE with your subsystem name (e.g., QBATCH). The value of 1 for MAXACT is appropriate for a small system or any job queue that must execute jobs sequentially. Using a sequence number of 5 ensures that the subsystem will look at this job queue first when starting a new job.

On a larger system, you may want the subsystem to run more jobs concurrently. To do this, increase the value of the MAXACT parameter with the Change Job Queue Entry (CHGJOBQE) command; raising MAXACT by 1 or more allows additional batch jobs to be initiated concurrently. On small systems, this change could overcommit memory and CPU resources, which can actually cause the batch jobs to run longer than if MAXACT had been left at its original value. Many of these changes can be made only while the subsystem is inactive.
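
For example, assuming the HIBATCH job queue entry created above, the following change would allow up to three of its jobs to run at once:

 CHGJOBQE SBSD(QBASE) JOBQ(QUSRSYS/HIBATCH) MAXACT(3)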

To ensure that you've done everything correctly, select Job Queue Entries from the Display Subsystem Description (DSPSBSD) command. Your display should be similar to the one shown in Figure 2.
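
For example, to review the entries for the QBASE subsystem (substitute your own subsystem name), display the subsystem description and choose the job queue entries option:

 DSPSBSD SBSD(QBASE)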

The next step requires you to make the batch subsystem active again. If you have a separate batch subsystem, start the changed subsystem with STRSBS SBSD(QBATCH). If your job queue entries were in the controlling subsystem, you must change the QCTLSBSD system value back to the normal controlling subsystem (QBASE in library QSYS) and perform an IPL to implement the change.
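
A sketch of those two paths follows; as before, confirm the QCTLSBSD value format before using it.

 /* Separate batch subsystem */
 STRSBS SBSD(QBATCH)

 /* Controlling subsystem: restore QBASE, then IPL */
 CHGSYSVAL SYSVAL(QCTLSBSD) VALUE('QBASE     QSYS')
 PWRDWNSYS OPTION(*CNTRLD) RESTART(*YES)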

Using the HIBATCH Job Queue

Now that you have created this new job queue, how can you use it? Let's explore several ways of using this new facility.

The first and most obvious use involves submitting jobs to the new HIBATCH job queue. This method is the simplest way to utilize the high-priority job processing. This option is preferable when you are randomly submitting high-priority jobs interspersed between normal batch jobs.
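
For instance, assuming the job queue was created in QUSRSYS, a submission might look like this (library, program, and job names are illustrative):

 SBMJOB CMD(CALL PGM(APLIB/DAYEND)) +
        JOB(DAYEND) JOBQ(QUSRSYS/HIBATCH)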

If you have many high-priority jobs and the SBMJOB command defaults to the current job's job queue, use the Change Job (CHGJOB) command to change your interactive job's JOBQ parameter to HIBATCH. After making this change, when you submit a job that uses the current job defaults, the job uses the high-priority job queue.

If you are changing jobs that have already been submitted to a job queue, and you want to utilize the high-priority job queue, use the Work with Job Queues (WRKJOBQ) command. Jobs in other job queues can be moved to the HIBATCH job queue using option 2 (Change Job) and the parameter JOBQ(HIBATCH).
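
If you know the qualified job name, the same move can be made directly from a command line while the job is still waiting on a job queue (the job name here is illustrative):

 CHGJOB JOB(123456/QPGMR/DAYEND) JOBQ(QUSRSYS/HIBATCH)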

For users who always submit high-priority batch jobs, you should create a special job description that specifies the JOBQ as HIBATCH. Use the Create Duplicate Object (CRTDUPOBJ) command to duplicate the job description QDFTJOBD. Name the new job description HIBATCH and put it in QUSRSYS or your own system library. Then use the Change Job Description (CHGJOBD) command to specify HIBATCH as the job queue. Lastly, change the user profiles of selected users to use the job description HIBATCH. For those users, all jobs that accept the current job's parameters for submitting jobs to batch will use the HIBATCH job queue.
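
A sketch of those steps, assuming QDFTJOBD resides in QGPL on your system and using QUSRSYS as the target library (the user profile name is illustrative):

 CRTDUPOBJ OBJ(QDFTJOBD) FROMLIB(QGPL) OBJTYPE(*JOBD) +
           TOLIB(QUSRSYS) NEWOBJ(HIBATCH)
 CHGJOBD JOBD(QUSRSYS/HIBATCH) JOBQ(QUSRSYS/HIBATCH)
 CHGUSRPRF USRPRF(PAYCLERK) JOBD(QUSRSYS/HIBATCH)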

Setting Up a Separate Pool

You have just set up a high-priority batch job queue that will provide top-of-the-stack job execution for the jobs it contains, regardless of the other jobs that are already awaiting execution. If you have a system with enough memory, you should set aside a pool of memory for these high-priority batch jobs. Creating a separate memory pool gives you the ability to adjust the memory pool for the high-priority batch jobs to reduce their page faulting. The high-priority batch jobs can utilize their CPU cycles to perform application work with minimal system overhead. The net result is faster batch job execution and increased throughput.

What do you need to add to what you have already created? Here are the steps you would follow to set aside a separate pool of memory for high-priority batch jobs and to give them a higher execution priority than the normal, batch-job priority of 50. You can choose between two general paths: changing an existing subsystem or creating a new one.

If you would like to separate the management of the normal and high-priority batch work by merely starting and ending subsystems, you should create a second subsystem. With normal and high-priority batch jobs allocated to separate subsystems, the computer operations department can manage normal and high-priority batch job execution with ease. If you decide to create a new subsystem, you should change the start-up program to start the new subsystem.

First, let's address the method that requires changing an existing batch subsystem like QBATCH. While the subsystem is stopped, you need to change the subsystem description to add a memory pool. Assuming the configuration of the IBM-supplied QBATCH subsystem hasn't been changed from the default, you add one pool of memory using the Change Subsystem Description (CHGSBSD) command.

 CHGSBSD SBSD(QSYS/QBATCH) +
         POOLS((1 *BASE) (2 *SHRPOOLn)) +
         MAXJOBS(2)

In this command, n is a number for a shared pool not currently being used.

Typically, you set the MAXJOBS parameter to a value of 2, 3, 4, or 5, based upon the amount of available memory and CPU cycles. This value limits the amount of machine resources given to batch processing within a subsystem.

The default class used by the subsystem QBATCH runs jobs at a priority of 50. The next step is to create a new class so that your jobs run at a priority of 45. The other parameters we want to change from the default are PURGE, which defaults to *YES, and TIMESLICE, which defaults to 2,000 milliseconds. Using the Create Class (CRTCLS) command, you can create your own HIBATCH class in your special system library or in QUSRSYS.

 CRTCLS CLS(QUSRSYS/HIBATCH) +
        RUNPTY(45) TIMESLICE(5000) +
        PURGE(*NO) +
        DFTWAIT(120) +
        CPUTIME(*NOMAX) +
        MAXTMPSTG(*NOMAX) +
        AUT(*CHANGE) +
        TEXT('High-priority (45) batch +
        class for xxxxxxx')

In the example CRTCLS command above, you can put the class in a library other than QUSRSYS if you choose. The TIMESLICE parameter is in milliseconds. The DFTWAIT parameter indicates how long to give an individual instruction to complete. The CPUTIME parameter, also specified in milliseconds, is a way of limiting how long a job can run. For example, you can limit a job to 180 CPU seconds by using a value of 180,000.

Lastly, you must tie the class and the subsystem together by adding a routing entry in the QBATCH subsystem you just changed. Before adding this routing entry, you need to review the current routing entries. Use the DSPSBSD command for subsystem QBATCH and select option 7 (routing entries). You should see values similar to those shown in the table in Figure 3. Now, use the Add Routing Entry (ADDRTGE) command to add an entry for any unused sequence number. The sequence number that you select should be less than the sequence number for the *ANY entry, which in this case is 9999. In the example, I used sequence number 500.

 ADDRTGE SBSD(QSYS/QBATCH) +
         SEQNBR(500) CMPVAL(HIBATCH) +
         PGM(QSYS/QCMD) +
         CLS(QUSRSYS/HIBATCH) +
         MAXACT(*NOMAX) POOLID(2)

In this ADDRTGE command, the CLS parameter refers to the class created in the previous step and the POOLID refers to the second pool defined in the subsystem description. After running this command, if you display the subsystem and look at the routing entries, you will see a table like the one shown in Figure 4.

The next step is to change the HIBATCH job description you created earlier. For the RTGDTA parameter, you need to specify HIBATCH instead of QCMDB (the system default). This parameter is case-sensitive and must match the case entered in the CMPVAL parameter of the ADDRTGE statement.

 CHGJOBD JOBD(QUSRSYS/HIBATCH) +
         RTGDTA(HIBATCH)

Next, you need to initialize the shared memory pool you added as a pool entry in the QBATCH subsystem. Use the Work with Shared Storage Pools (WRKSHRPOOL) command to adjust the values that correspond to the shared memory pool used in subsystem QBATCH. Change the defined size column to something between 500KB and 1000KB, based upon available memory. This change is not critical if you are using the auto-adjust option for system value QPFRADJ (value: 3); with that option, the pool size adjusts automatically based upon need. Change the value in the max active column to 1 to match the MAXACT parameter in the job queue entry that will use this shared pool. After your change, the WRKSHRPOOL display should look like Figure 5.
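
If you prefer a command to the interactive display, the Change Shared Storage Pool (CHGSHRPOOL) command can make the same change. Here is a sketch, assuming *SHRPOOL2 was the pool chosen earlier and a 1000KB size:

 CHGSHRPOOL POOL(*SHRPOOL2) SIZE(1000) ACTLVL(1)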

For the final step, start the QBATCH subsystem and test out the new HIBATCH job-processing environment. Use any method that allows you to submit a batch job using your new HIBATCH job description. If you are using packages, the best way is to place a copy in a library above the package library in the library list.
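
As a final check, a test submission and verification might look like this (library and program names are illustrative); the job should appear in the subsystem's second pool with a run priority of 45:

 SBMJOB CMD(CALL PGM(PAYLIB/PAY100)) +
        JOB(PAYTEST) +
        JOBD(QUSRSYS/HIBATCH) JOBQ(QUSRSYS/HIBATCH)
 WRKACTJOB SBS(QBATCH)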

You have now gone through the process of creating a processing environment for high-priority batch jobs. Remember to document what you have done in changing system objects and application package objects so changes can be reapplied when a new release is installed.

Summary of Batch Job Initiation

As you review the steps for processing high-priority jobs that follow, you'll begin to see how work management objects interoperate to process batch jobs.

1. The QBATCH subsystem is started. Because of the HIBATCH job queue entry, the job queue is allocated to the QBATCH subsystem (it looks to the job queue for work).

2. A job called PAYX is submitted to the HIBATCH job queue with the HIBATCH job description and a command to CALL PAYA (request data).

3. The subsystem monitor for QBATCH looks at the job description of PAYX (HIBATCH) and gathers the parameters for the job, including the HIBATCH routing data.

4. The routing data is compared to the routing entries of the QBATCH subsystem. A match is found at sequence number 500 with compare value HIBATCH, so additional parameters are gathered for the job.

5. The HIBATCH class provides additional parameters for the job, including run priority and timeslice.

6. The pool parameter (2) of the routing entry means the job will use the second pool of memory described in the QBATCH subsystem description (*SHRPOOL2 in this example).

7. The program parameter QSYS/QCMD (the command processor) looks at PAYX's request data, CALL PAYA, and begins execution of program PAYA in shared memory pool 2, with a run priority of 45.

After walking through this job-initiation process, you should have a clearer understanding of how the various work-management objects interact and how the environment for a job is established. You can make other changes or create customized job environments. Each system's environment is different, but with the tools of work management, you can customize the job environment to best meet your ever-changing needs.

Tom Henry is an independent consultant who retired from IBM after 30 years as a systems engineer specializing in midrange system environments. He can be reached at 510-934-3201.

Reference

Work Management Guide (SC41-8078, CD-ROM QBKA9J02).


Figure 1 Throughput Options and Their Impact

 Criteria            Role in the Decision Process
 ------------------  --------------------------------------------------------
 CPU Speed           A faster CPU can perform more work within a period of
                     time (i.e., more tasks running concurrently).
 Memory              A large amount of memory allows you to separate work into
                     more independent job environments for concurrent
                     execution.
 Frequency of Job    If a job runs only once, you can use a simple method to
                     manage the job. If a job runs every day, you should set
                     up a more permanent environment that minimizes and
                     simplifies the process for system operators or the person
                     requesting the job.

Figure 2 Job Queues Attached to a Subsystem

 [Graphic not available.]

Figure 3 Default QBATCH Routing Entries

 SEQNBR  CMPVAL      CLASS            PGM         POOL ID
 15      QIGC        QGPL/QBATCH      QSYS/QCMD   1
 300     QS36EVOKE   QGPL/QBATCH      QSYS/QCMD   1
 700     QCMD38      QGPL/QBATCH      QGPL/QCL    1
 9999    *ANY        QGPL/QBATCH      QSYS/QCMD   1

Figure 4 Modified QBATCH Routing Entries

 SEQNBR  CMPVAL      CLASS             PGM         POOL ID
 15      QIGC        QGPL/QBATCH       QSYS/QCMD   1
 300     QS36EVOKE   QGPL/QBATCH       QSYS/QCMD   1
 500     HIBATCH     QUSRSYS/HIBATCH   QSYS/QCMD   2
 700     QCMD38      QGPL/QBATCH       QGPL/QCL    1
 9999    *ANY        QGPL/QBATCH       QSYS/QCMD   1

Figure 5 The WRKSHRPOOL Display

 [Graphic not available.]