Most efficient approach to process files
02-17-2004, 08:53 AM
Personally, I would not use SQL or OPNQRYF over such a large file. There is going to be a performance hit, especially with OPNQRYF. I believe, but may be wrong, that SQL will also build an access path. In these situations I've built a key over the transaction file and, depending on the information passed, used the full key or a partial key to position the file (i.e. SETLL using the begin date, or SETLL using the begin date and first item). Then get the additional files only when needed. If the program must access the customer master, you may want to include the customer id in the key of the detail file. This way you can read the customer master only when the customer changes per date/item combination.
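For illustration only, here is a minimal free-format RPG sketch of that positioning approach. The file declarations are omitted and every name (TRXBYDTL, CUSTMAST, trnDate, trnItem, trnCust, cmCat, fromDate, etc.) is invented; it assumes a logical over the detail file keyed by delivery date, item and customer, so the customer master is chained only when the customer key changes.

    // Position at the first record on/after the requested start date,
    // then read forward until the delivery date passes the end date.
    lastCust = *blanks;
    setll (fromDate) TRXBYDTL;
    read TRXBYDTL;
    dow not %eof(TRXBYDTL);
      if trnDate > toDate;                            // past the requested range - done
        leave;
      endif;
      if trnItem >= fromItem and trnItem <= toItem;   // optional item range
        if trnCust <> lastCust;                       // touch the master only on a change
          chain (trnCust) CUSTMAST;
          lastCust = trnCust;
        endif;
        if %found(CUSTMAST) and cmCat = selCat;
          // accumulate or write the report detail here
        endif;
      endif;
      read TRXBYDTL;
    enddo;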
02-17-2004, 08:53 AM
There are at least five correct answers to this question, if not more. Suffice it to say that whatever you do, the performance will be greatly improved over the query. Dave
02-17-2004, 09:06 AM
1. Actually, the first thing to try is putting some logicals over the file and seeing if you can speed up the existing SQL. Try both regular logicals and EVIs. Many times this can have dramatic effects. If a SETLL/READ would fix the problem, then a logical with that key can sometimes help the SQL do it just as efficiently. If you are familiar with EVIs, you can look at the WHERE clause and come up with the needed indexes. The nice thing is, you can leave the SQL as is. You don't need to reference the new logicals/indexes: the SQL engine will hunt them down and use them if they help its performance. And you can do it live with no danger of interfering with other processes (the exception is if a process deletes the file).

2. If none of that helps, I'd try a hybrid with an SQLRPG. Use all the same techniques above.
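To make the index suggestion concrete, the statements would look roughly like this in SQL (library, file and column names here are invented; the real columns should come from the WHERE clause of the existing query):

    -- Radix index over the main selection columns (a keyed logical would do the same job)
    CREATE INDEX MYLIB/TRXDT_IX
        ON MYLIB/TRXDETAIL (DELVDATE, ITEMCODE)

    -- Encoded vector index, best suited to a low-cardinality selection column
    CREATE ENCODED VECTOR INDEX MYLIB/CUSTCATEVI
        ON MYLIB/CUSTMAST (CUSTCAT)

As noted above, nothing in the existing query has to reference these; the optimizer will consider them on its own.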
02-17-2004, 09:37 AM
> Personally, I would not use sql or opnqryf over such a large file.

While all the answers are good, this one needs a clarification. If you don't have an access path which fits your criteria, then Query/400, SQL and OPNQRYF will all suffer about equally. Don't be too concerned about the size of the file. Many years ago on the S/38 we had OPNQRYF running over a history file about that size, and it would return results in mere seconds. But that's because the OPNQRYF was looking for records based on the primary keys; searching by key is incredibly efficient. The lesson here is that your query alone might be improvable simply by building an appropriate access path. I have not used Query/400 in a decade, but you may be able to get some information about the performance of the query if you do a STRDBG before running it. You don't need to actually debug anything; once the job is in debug mode, the query optimizer should issue some messages in the job log which should help you decide on some access paths. Certainly, OPNQRYF or SQL will want the same access paths for optimum performance. Good luck! --buck
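Concretely, the debug trick is just a command sequence along these lines (the query name is invented; the CPI43xx messages left in the job log describe the access paths the optimizer built or used):

    STRDBG UPDPROD(*NO)
    RUNQRY QRY(MYLIB/TRXQRY)
    ENDDBG
    DSPJOBLOG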
02-18-2004, 07:25 AM
Gary Shipp wrote:
> I have been tasked with converting a query that runs over 3 files
> into a proper RPG, in an attempt to improve performance.

Gary, I can't add much to what the others have said other than this: read the section of the Query/400 manual regarding improving performance. One of the key items is the order in which you declare the files in the file selection. Always define the file that will have the fewest records after the selection criteria before the larger files. Bill
02-19-2004, 12:21 AM
I'm working late tonight, so I'm going to experiment with some new logicals for the transaction file this evening and see what improvements I can make to the run. I have to change things out of hours, otherwise it causes too much disruption. I'll let you know how I get on.
03-08-2004, 09:54 PM
I have been tasked with converting a query that runs over 3 files into a proper RPG, in an attempt to improve performance. The first file is a transaction detail file that holds about 7 million records; each record is then cross-referenced to the item master and customer master files to provide other required details. The prompt presented to the user will allow them to enter 3 criteria: date range, item range and customer category. The first two pieces of information are found on each transaction detail record; the customer category must be obtained from the customer master. The delivery date is the only mandatory parameter. My question is, with program performance being the prime objective, what would be the most efficient way to process these 3 files? My initial intention was to use a logical over the detail file keyed by date and item code, chaining to the customer master for the customer category, with SETLL and SETGT used to limit the items picked up for a given day if an item range is entered. However, would I be better off using a join logical over the three files (all three exist in the same library), or even SQL or an OPNQRYF? Your comments are appreciated.
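For reference, the selection being described would look something like this as embedded SQL in an SQLRPG program (all file and column names are invented for illustration, and the item-range and category predicates would be dropped when those criteria are not entered):

    SELECT d.DELVDATE, d.ITEMCODE, d.CUSTNO, c.CUSTCAT, i.ITEMDESC, d.QTY
      FROM MYLIB/TRXDETAIL d
      JOIN MYLIB/CUSTMAST  c ON c.CUSTNO   = d.CUSTNO
      JOIN MYLIB/ITEMMAST  i ON i.ITEMCODE = d.ITEMCODE
     WHERE d.DELVDATE BETWEEN :fromDate AND :toDate
       AND d.ITEMCODE BETWEEN :fromItem AND :toItem
       AND c.CUSTCAT  = :selCat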
03-08-2004, 09:54 PM
With that many records I would use the matching record routine. It results in a single pass over all the records, but the files must be either indexed or sorted in a like order.
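A minimal free-format sketch of that single-pass idea (this is not the RPG cycle itself, just the equivalent merge logic; the names TRXBYCUS, CUSTMAST, trnCust and cmCust are invented, and both files are assumed to be keyed or sorted by customer):

    read TRXBYCUS;
    read CUSTMAST;
    dow not %eof(TRXBYCUS);
      dow not %eof(CUSTMAST) and cmCust < trnCust;
        read CUSTMAST;                 // let the master catch up to the detail
      enddo;
      if not %eof(CUSTMAST) and cmCust = trnCust;
        // matching record: detail and master fields are both available here
      endif;
      read TRXBYCUS;
    enddo;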