TechTalk: Dynamic Performance Tuning

System Administration

From: Eric Hill To: All

Is anyone on V2R1 using dynamic performance tuning on the AS/400? This is accomplished by changing system value QPFRADJ to '2'. If so, does it help, hurt, or stay about the same?
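For anyone who wants to experiment, the change Eric describes is a single command. This is a sketch; '2' enables adjustment both at IPL and while the system is running, and '0' returns you to manual tuning:

```
/* Turn on dynamic performance adjustment.           */
/* '2' = adjust storage pools and activity levels    */
/*       at IPL and automatically at run time.       */
CHGSYSVAL  SYSVAL(QPFRADJ)  VALUE('2')

/* To go back to manual tuning: */
CHGSYSVAL  SYSVAL(QPFRADJ)  VALUE('0')
```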

From: Tim Johnston To: Eric Hill

I used it for all of about one day. My problem is that when I send my backup procedure to batch, sometimes it won't run because of the performance adjustment. I use the job queue QPGMR, which I have attached to a separate pool *SHRPOOL2.

During the day, I also send my compiles to that same job queue. Depending on how much activity I have, the system automatically adjusts *SHRPOOL2. However, the nighttime save is a SAVCHGOBJ, and if there is not enough memory in the pool, the command will not run. That's a bummer. So I choose to monitor WRKSYSSTS every so often. Yes, I know that command takes resources, but I don't have performance tools. Unless I am missing something, that's all I have to monitor it with.

Like Siskel and Ebert, I have to give the automatic performance adjustment value a "Thumbs Down."

From: Charles McLean To: Eric Hill

We tried that with shared pools starting at V1R3 and found that manual tuning was better overall. Some of the time it seemed that letting the system tune itself worked well, but once we got over 85 percent CPU, it fell apart.

From: Pete Hall To: Eric Hill

On the contrary, we are using QPFRADJ(2) very successfully. We have two small AS/400s. One has an active QBATCH subsystem and always has a large enough pool to run save and restore functions, but I have had programs crash on the other one.

It really hasn't been a problem, though. It would be nice if IBM would provide a MINSTG() parameter on the CHGSHRPOOL command. If QBATCH is being used most of the time, there will always be enough storage in it anyway. I think it is also possible to code a MONMSG for the save command, which would allow you to run CHGSHRPOOL and retry the save operation when the error occurs. I haven't really investigated that one, though. Just a thought.
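The MONMSG-and-retry idea might look something like this in a backup CL program. This is only a sketch; the library name, device name, pool size, and the generic CPF0000 message ID are all assumptions, so check the actual escape message your save sends:

```
             PGM
/* Attempt the nightly save of changed objects.       */
             SAVCHGOBJ  OBJ(*ALL) LIB(MYLIB) DEV(TAP01)
/* If it fails (CPF0000 catches any CPF escape        */
/* message), give the shared pool more storage and    */
/* try the save once more.                            */
             MONMSG     MSGID(CPF0000) EXEC(DO)
                CHGSHRPOOL POOL(*SHRPOOL2) SIZE(8000)
                SAVCHGOBJ  OBJ(*ALL) LIB(MYLIB) DEV(TAP01)
             ENDDO
             ENDPGM
```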

Larger and more active systems (I have used this approach on two client boxes at this point) don't seem to have any problem. I set up QBATCH to use *SHRPOOL1 as the second subsystem pool, and modify all routing entries to use pool number 2. I have been impressed with the improvement in response under varying system loads. You do have to allow the performance monitor a few minutes to make adjustments when things change, however.
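The subsystem setup described above would be along these lines. This is a sketch; the sequence number is an example, and you would use DSPSBSD to see the routing entries actually defined in QBATCH:

```
/* Make *SHRPOOL1 the second pool of QBATCH.      */
CHGSBSD    SBSD(QBATCH)  POOLS((1 *BASE) (2 *SHRPOOL1))

/* Point a routing entry at subsystem pool 2.     */
/* Repeat for each routing entry in QBATCH.       */
CHGRTGE    SBSD(QBATCH)  SEQNBR(9999)  POOLID(2)
```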

From: Steve Roxberg To: Tim Johnston

I have been using the dynamic manager now for about four weeks and have been very happy. It's not great, but it's better than the occasional tuning we have time to do. If it's a problem for your backup, manually adjust the memory pool size in the CL program before you start your backup job at night. I assume that it's automatically started? I like it. Just one man's opinion. Siskel and Ebert don't always agree. Thumbs up!
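Resizing the pool inside the backup CL program, as suggested above, could be as simple as this sketch; the pool, size (in KB), activity level, library, and device are made-up values:

```
             PGM
/* Reserve enough storage for the save before       */
/* it starts, overriding the automatic adjuster.    */
             CHGSHRPOOL POOL(*SHRPOOL2) SIZE(8000) ACTLVL(5)
/* Run the nightly save of changed objects.         */
             SAVCHGOBJ  OBJ(*ALL) LIB(MYLIB) DEV(TAP01)
             ENDPGM
```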