Are any other jobs (by other users) running against this file at the same time? Dave
Multiple jobs processing same file
Do the jobs have identical submit and run priorities? Are your subsystems configured to run 20 jobs simultaneously? If the answer to both questions is yes, you may simply be saturating your disk subsystem with I/O. We would never try to run 20 batch update jobs here simultaneously; our small iSeries would hit 100% utilization after about 3-4 such jobs, and further submits would be pointless. You are just slicing the same pie in different ways.
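The "slicing the same pie" point can be sketched with a toy throughput model. This is purely illustrative, not a measurement: the per-job demand and disk capacity figures below are made-up numbers, chosen only so that roughly 3 jobs saturate the disk, as in the reply above.

```python
def per_job_throughput(num_jobs, job_demand_mbs, disk_capacity_mbs):
    """Once aggregate demand exceeds the disk subsystem's capacity,
    adding jobs no longer adds total throughput; each job just gets
    a thinner slice of the same pie."""
    total = min(num_jobs * job_demand_mbs, disk_capacity_mbs)
    return total / num_jobs

# Hypothetical numbers: each job could drive 50 MB/s on an idle
# system, but the disk subsystem tops out at 150 MB/s.
for n in (1, 3, 20):
    print(n, per_job_throughput(n, 50, 150))  # 50.0, 50.0, 7.5
```

With these assumed numbers, the third job already saturates the disk, and going from 3 jobs to 20 drops each job from 50 MB/s to 7.5 MB/s without moving total throughput at all.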
As suggested, the performance difference could be caused by overloading system resources such as disk (or potentially CPU). Other possible areas are: the file needs to be reorganised (i.e. the jobs processing only 20 records may be reading past a large number of deleted records); internal lock contention on the journal/journal receiver (I am presuming you have journaling enabled, hopefully with commitment control); or potential contention on records in another file. To get a better understanding you may want to use Performance Tools to observe the behaviour (e.g. ensure the jobs have the file opened sequentially so that sequential prefetch is occurring).
The jobs are probably all running in the same subsystem, using the same memory pool. Each memory pool supports a limited activity level (active jobs/threads), and it could be that there are not enough activity levels to support all the jobs: some end up waiting for the others to finish and free up their memory. (Just one of many possible explanations...) Use WRKSHRPOOL and WRKSBS to review the memory available to the subsystem the jobs are running in.
To me, the overhead involved in having to create the member, delete the member (maybe), and perform the necessary overrides for using a multi-member file makes it less than appealing. It creates issues for virtually every method of data access you might choose: Query/400, RPG, and SQL. It increases the risk of a programmer accessing the wrong member and creating a potentially hard-to-find bug in the software. It can be useful for limiting the number of records processed by a program; for particularly large files I might consider it, but only after doing some performance tests to verify that I can't achieve the same results with an index. Kevin
Hi, I submit multiple jobs (say 20). All of them read from a work file based on a range of RRN (e.g. the first job would process RRN 1-10000, the second RRN 10001-20000, and so on). I found that some of the jobs process 2000 records while others process hardly 20 in the same time, although all of them were submitted together to the same jobq. Can someone help me understand why this is happening and how to improve the I/O? I am using RPGLE; there is no SQL in the code. Thanks,
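The splitting scheme described above can be sketched as follows. This is a minimal illustration of the arithmetic only, not the poster's actual RPGLE code: the function name and the 200,000-record total are assumptions; the 20-job count and 10,000-record chunks come from the post.

```python
def rrn_ranges(total_records, num_jobs):
    """Split relative record numbers 1..total_records into
    num_jobs contiguous ranges, one per submitted job."""
    chunk = total_records // num_jobs
    ranges = []
    start = 1
    for i in range(num_jobs):
        # The last job picks up any remainder records.
        end = total_records if i == num_jobs - 1 else start + chunk - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# 20 jobs over a hypothetical 200,000-record work file:
print(rrn_ranges(200_000, 20)[0])   # (1, 10000)
print(rrn_ranges(200_000, 20)[-1])  # (190001, 200000)
```

Note that equal RRN ranges only guarantee equal work if every range costs the same to process; deleted records, record locks, or uneven disk access within a range can make one job's slice far slower than another's, which is what the replies above are probing.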