Looking for a better way
01-01-1995, 02:00 AM
Our company has recently purchased a dual AS/400 setup. Using a vendor package, all transactions are mirrored to the second AS/400. The package uses journals to perform this update. Because of this, the journals on the current box cannot be dumped on a timely basis for our history files. The solution that was put in place to feed the history files is triggers. The trigger application is causing a major performance hit on the current AS/400. The trigger programs are written in RPG IV and use RETURN. This has helped some, but not enough. Is there a better solution available to capture changes to our files? Can the journal receivers themselves be used? Would another language be required to reduce the performance impact? Any ideas would be greatly appreciated.
02-17-2000, 08:41 AM
Darlene, this response doesn't answer your question, but in what activation group do the RPG IV triggers execute? *CALLER would be best for performance, if you can use it. If the program that caused the trigger to fire and the trigger program run in different AGs (i.e., *DFTACTGRP and QILE respectively), I think (someone help me here) the QILE AG gets destroyed each time it returns control back to the other AG, because it crosses an ILE control boundary. But definitely don't use *NEW as the activation group for the trigger programs. Chris
02-17-2000, 09:35 AM
Thanks for the tip... I checked the main trigger program and it is using *CALLER. The other programs being called from the main trigger program are using *DFTACTGRP.
02-17-2000, 09:49 AM
It sounds like you are calling other programs from the trigger program. Do they (and can they) end with LR off? Typically you want the trigger program to do as little processing as possible to accomplish what's needed. If the amount of processing in the trigger program (and those it calls) is too much, then you may be able to just have the trigger write the record info to an intermediate file, and then have a batch program that just sits there and processes the records in the intermediate file. You could even set up the batch program to run every so many minutes if you wanted. Scott Mildenberger
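The hand-off Scott describes (trigger does one cheap write to an intermediate file, a scheduled batch job drains it later) can be sketched as follows. This is only an illustration of the pattern, not AS/400 code: Python with an in-memory SQLite table standing in for the intermediate physical file, and `batch_pass` playing the role of the batch job's periodic wake-up.

```python
import sqlite3

# In-memory database stands in for the library; the "staging" table
# plays the role of the intermediate physical file.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE staging (id INTEGER PRIMARY KEY, payload TEXT, done INTEGER DEFAULT 0)"
)
conn.execute("CREATE TABLE history (payload TEXT)")

def trigger_fires(payload):
    """What the trigger program does: one cheap INSERT, then return."""
    conn.execute("INSERT INTO staging (payload) VALUES (?)", (payload,))

def batch_pass():
    """What the scheduled batch job does on each wake-up: copy the
    unprocessed rows to the history file, then flag them as done."""
    rows = conn.execute("SELECT id, payload FROM staging WHERE done = 0").fetchall()
    for row_id, payload in rows:
        conn.execute("INSERT INTO history (payload) VALUES (?)", (payload,))
        conn.execute("UPDATE staging SET done = 1 WHERE id = ?", (row_id,))
    return len(rows)

# Simulate a few transactions firing the trigger, then one batch pass.
for n in range(3):
    trigger_fires(f"txn-{n}")
processed = batch_pass()
print(processed)  # 3 rows moved to history
```

The trade-off is exactly the one discussed in this thread: the trigger itself stays cheap, but history lags by up to one scheduling interval, and the batch job must be able to drain rows faster than they arrive.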
02-17-2000, 10:23 AM
We tried setting up a batch program to process the transactions, but it could not keep up; it was running several hours behind. I agree with you that the batch processing would help the performance issue, but our users are not willing to wait 4-6 hours for historical data. There are other programs being called from the main trigger program. All they do is write records to a history file; a very simple process. The called programs use RETURN and not LR, and the main trigger program itself uses RETURN as well.
02-17-2000, 10:34 AM
It sounds like everything was working OK before you started mirroring, and that you were using the journals for the historical files at that time. But now you do not have quick enough access to the journals. If that is correct, could you not have two JRNRCVs attached, using one for the mirroring and the other just as you were doing before you started mirroring? Maybe I'm confused :-)
02-17-2000, 12:05 PM
If the program that caused the trigger to fire and the trigger program run in different AG's (IE: *DFTACTGRP and QILE respectively), I think (someone help me here) the QILE AG gets destroyed each time it returns control back to the other AG because it crosses an ILE control boundry. Chris, the behaviour you describe above (viz. automatic destruction of a program's activation when the program terminates) is true only of *NEW. The performance hit you take when calling a program in a different named activation group (e.g., *DFTACTGRP calling into QILE) is the initial one-off cost of establishing the new activation, plus the loss of any efficiencies that might have been had by sharing resources when calling a program with an AG of *CALLER. (Of course, if the program in QILE was already active, you'd get better performance than using *CALLER.)
02-17-2000, 02:59 PM
My suggestion: Your main trigger program should be doing one thing, and one thing only: in this case, writing entries to a data queue. You should create a constantly running program that monitors the data queue for new entries, then writes those records to your history files. Question: Are the history files being mirrored as well? MichaelD
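What makes Michael's data-queue approach different from a timed batch job is the blocking receive: the monitor job sleeps on the queue and wakes the instant an entry arrives, so there is no polling interval to add latency. A minimal sketch of that shape, with Python's `queue.Queue` standing in for the *DTAQ and a thread standing in for the never-ending batch job (an illustration of the pattern only, not AS/400 code):

```python
import queue
import threading

dtaq = queue.Queue()   # stands in for the data queue (*DTAQ)
history = []           # stands in for the history file
SHUTDOWN = object()    # sentinel so the demo can stop the monitor

def trigger_fires(record):
    """Trigger program: enqueue the changed record and return at once."""
    dtaq.put(record)

def monitor():
    """Never-ending monitor job: block until an entry arrives,
    write it to history, then go straight back to waiting."""
    while True:
        entry = dtaq.get()   # blocks; wakes immediately on a new entry
        if entry is SHUTDOWN:
            break
        history.append(entry)

t = threading.Thread(target=monitor)
t.start()
for n in range(5):
    trigger_fires(f"txn-{n}")
dtaq.put(SHUTDOWN)
t.join()
print(len(history))  # 5
```

On the AS/400 itself the equivalent pieces would be a *DTAQ object with the send/receive data queue APIs and a submitted job doing a blocking receive, but the shape of the solution is the same: the trigger's only cost is one enqueue.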
02-18-2000, 06:54 AM
Yes, the files that are having records added by the programs called from the trigger are being mirrored to the backup box. Having this backup box is part of the 'big plan' to keep the company going in case the other box drops, needs maintenance, etc. I am curious as to why you ask; is there something I am missing? As far as a trigger program doing one thing and one thing only, I could not agree more. But as I said earlier in another response, we tried that method and the transactions could not be processed fast enough; there was a lag time of 4 to 6 hours. That brings me back to my original problem: is there another way to capture the changes?
02-18-2000, 09:23 AM
Hi Darlene, I was going to send in approximately the same answer as Michael, using some form of IPC to hand off to a monitor batch program, when I saw he had already replied. I really believe that is the answer, but apparently there is something missing in either our understanding or yours or somebody's. ;-) How does this take 4 to 6 hours, and if so, why is there not the same lag with triggers? To put words in Michael's mouth, we're just saying: do what you have to in the file with the trigger, post to an IPC mechanism for the rest, and return. An already started batch program checks the IPC data and does its processing while your own programs continue. That's very general and we don't know everything that's going on, but normally it would be the direction to go. Best, Joe Sam Joe Sam Shirah Autumn Software Consulting/Development/Outsourcing Please Note New Email: email@example.com Visit our User Group at: http://www.jax400.com
02-21-2000, 01:57 PM
It was my understanding from your earlier post that you tried using a program that sat dormant for a period of time (a few minutes, a half-hour, ?), then woke up to process records in a file. What I'm suggesting is a never-ending program which continuously processes entries from a data queue. If there are no entries, the program sits and waits for more, in stasis, waking immediately upon the addition of a new entry. The only scenario in which I can imagine a batch job that does nothing but write data to a file taking longer than an interactive program (one that waits for user input, does some arithmetic calculations, writes to a mirrored file, and fires a trigger which writes an entry to a data queue) is that you have so much interactive activity going on that the batch job never gets a chance at the processor. If that is the case, it's time for some serious performance tuning, or maybe a new AS/400. Are you letting your users run things like reports and queries interactively? If so, I would suggest converting these to batch immediately. If none of these answers is close to solving your situation, please provide considerably more information as to why the 4-6 hour delay occurs.