Multiple jobs processing same file
06-02-2004, 10:50 AM
Are any other jobs (by other users) running against this file at the same time? Dave
06-07-2004, 11:59 AM
No, I am the only one submitting more than 20 jobs to process the file. Each job processes blocks of records based on RRN.
06-07-2004, 12:59 PM
Do the jobs have identical submit and run priorities? Is your subsystem configured to run 20 jobs simultaneously? If the answers to these questions are yes, you may simply be saturating your disk subsystem with I/O. We would never try to run 20 batch update jobs simultaneously here; our small iSeries would hit 100% utilization after about 3-4 such jobs, and further submits would be pointless. Beyond that point you are just slicing the same pie different ways.
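The "slicing the same pie" point can be sketched with some simple arithmetic (all numbers here are hypothetical, chosen only to illustrate the saturation effect, not measured from any real iSeries):

```python
# Illustrative model: once the disk subsystem is saturated, adding more jobs
# divides a fixed I/O budget rather than adding capacity.

def elapsed_minutes(total_records, saturated_rate, n_jobs, jobs_to_saturate):
    """Wall-clock minutes to process all records with n_jobs parallel jobs.

    Throughput scales with job count only until the disk saturates; beyond
    that, the effective rate is capped at saturated_rate.
    """
    effective_rate = saturated_rate * min(n_jobs, jobs_to_saturate) / jobs_to_saturate
    return total_records / effective_rate

# 400,000 records; the disk saturates at 4 jobs, 20,000 records/min total.
print(elapsed_minutes(400_000, 20_000, 4, 4))   # 20.0 minutes
print(elapsed_minutes(400_000, 20_000, 20, 4))  # still 20.0 minutes
```

Submitting 20 jobs instead of 4 changes nothing in this model except how the fixed throughput is divided among the jobs.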
06-07-2004, 02:09 PM
As suggested, the performance difference could be caused by overloading system resources such as disk (or potentially CPU). Other possible areas: the file may need to be reorganised (i.e. the jobs that process only 20 records may have RRN ranges containing a large number of deleted records); internal lock contention on the journal/journal receiver (I am presuming you have journalling enabled, hopefully with commitment control); or potential contention around records in another file. To get a better understanding, you may want to use Performance Tools to observe the behaviour (e.g. ensure the jobs open the file for sequential processing so that sequential prefetch is occurring).
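The deleted-records point is worth spelling out, since it matches the symptom exactly: RRN ranges are slots, not live records, so a block that falls in a heavily deleted region has almost nothing to process. A small sketch with hypothetical numbers:

```python
# Sketch: count live records per fixed-size RRN block when deletions cluster.
# The numbers are invented to mirror the symptom described in the thread.

def live_counts(deleted_rrns, total_rrns, block_size):
    """Return the number of live (non-deleted) records in each RRN block."""
    deleted = set(deleted_rrns)
    counts = []
    for start in range(1, total_rrns + 1, block_size):
        block = range(start, min(start + block_size, total_rrns + 1))
        counts.append(sum(1 for rrn in block if rrn not in deleted))
    return counts

# 40,000 RRN slots, blocks of 10,000; suppose RRNs 10,001-19,980 were deleted.
counts = live_counts(range(10_001, 19_981), 40_000, 10_000)
print(counts)  # [10000, 20, 10000, 10000] -- the second job sees only 20 records
```

In this scenario three jobs each chew through 10,000 records while one job finds just 20, even though every job was handed an equal-sized RRN range. A reorganize (RGZPFM) would reclaim the deleted slots and even the blocks out.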
07-02-2004, 06:47 AM
The jobs are probably all running in the same subsystem, using the same memory pool. Each memory pool supports a limited number of activations (active jobs/threads), and it could be that there are not enough activations to support all the jobs: some end up waiting for the others to finish and free up their memory. (Just one of many possible explanations...) Use WRKSHRPOOL and WRKSBS to review the memory available to the subsystem the jobs are running in.
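The activation-limit effect can be modeled very roughly (this is a simplified picture, not the actual OS/400 scheduler): if only `activity_level` jobs can be active at once and all 20 jobs are equal-length, the work completes in waves, and jobs stuck in later waves appear to make almost no progress early on.

```python
# Simplified model of a pool activity level: jobs run in "waves" of size
# activity_level. Job indices and wave numbers are purely illustrative.

import math

def completion_wave(job_index, activity_level):
    """Which wave (1-based) a job finishes in, assuming equal-length jobs."""
    return math.ceil((job_index + 1) / activity_level)

# With an activity level of 5, jobs 0-4 finish in wave 1; job 19 waits
# until wave 4 before it even gets to run to completion.
print([completion_wave(j, 5) for j in range(20)])
```

Raising the pool's activity level (or spreading the jobs across pools) shrinks the number of waves, at the cost of more concurrent memory and I/O demand.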
08-20-2004, 09:12 AM
Why not make your input workfile a multiple-member file? That is, add a unique member for each run and pass it to the processing module; at completion, delete the work member.
08-23-2004, 12:16 PM
To me, the overhead involved in having to create the member, delete the member (maybe), and perform the necessary overrides for using a multi-member file makes it less than appealing. It creates issues for virtually every method of data access you might choose: Query/400, RPG, and SQL. It increases the risk of a programmer accessing the wrong member and creating a potentially hard-to-find bug in the software. It can be useful for limiting the number of records being processed by a program; for particularly large files I might consider it, but only after doing some performance tests to verify that I can't achieve the same results with an index. Kevin
08-30-2004, 03:48 AM
Adding and removing members takes little if any overhead, and the technique is used quite frequently for processing as mentioned above. Just another approach.
08-30-2004, 03:48 AM
Hi, I submit multiple jobs (say 20). All of them read from a workfile based on a range of RRNs (e.g. the first job processes RRN 1-10000, the second RRN 10001-20000, and so on). I found that some of the jobs process 2000 records while others process hardly 20 in the same time, although all of them were submitted together to the same jobq. Can someone help me understand why this is happening and how to improve I/O? I am using RPGLE; there are no SQLs in the code. Thanks,
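For reference, the partitioning scheme described here (the actual jobs are RPGLE; this Python sketch only shows the range arithmetic, with a hypothetical `rrn_ranges` helper) splits RRNs 1..N into contiguous, near-equal blocks:

```python
# Sketch of the RRN partitioning described above: compute the
# (from_rrn, to_rrn) block handed to each of n_jobs submitted jobs.

def rrn_ranges(total_rrns, n_jobs):
    """Split RRNs 1..total_rrns into n_jobs contiguous, near-equal blocks."""
    base, extra = divmod(total_rrns, n_jobs)
    ranges, start = [], 1
    for j in range(n_jobs):
        size = base + (1 if j < extra else 0)  # spread any remainder evenly
        ranges.append((start, start + size - 1))
        start += size
    return ranges

# 200,000 RRNs across 20 jobs, matching the block size in the question.
print(rrn_ranges(200_000, 20)[:2])  # [(1, 10000), (10001, 20000)]
```

Note that equal RRN ranges only guarantee equal work if live records are spread evenly across the file; as discussed above, clustered deleted records or resource contention can make identically sized blocks finish at very different rates.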
Powered by vBulletin® Version 4.1.5 Copyright © 2013 vBulletin Solutions, Inc. All rights reserved.