Optimization


  • Optimization

    If the access plan is not saved within the program or package, it will be created the first time the statement is run after an IPL. When the plan cache fills up, old plans are removed according to a set of rules; older, less frequently used plans go first. Typically the cache will hold several thousand plans before filling up. See the SQL Programming manual for more info. The cache is emptied at IPL time. Kevin

  • #2
    Optimization

    Thanks. Not the answer I was hoping for, since we IPL daily.



    • #3
      Optimization

      I am all for doing regular IPLs, but daily is probably overkill; most people recommend weekly. Is there a reason for the daily IPL?



      • #4
        Optimization

        I think it goes back to the early days of the AS/400, when IBM was saying that IPLing every day was a good thing due to some system fragmentation issues.

        I have another question. I have looked at the SQL document and have read about access plans, but have yet to figure out how to save one myself. Even if we were to IPL weekly, my problem would still occur with a Friday job that must access 13,000,000 records or more. The last time it ran, it took all the resources available and just about shut down the interactive subsystem. I am considering running it later, but I would rather save the access plan and have it run like it should when I want it to.

        I have run the SQL statement over the same file from STRQRY and it runs extremely well after the first time, and even better the third. I want to save the access plan after the third time. Would running a PREPARE statement be better? I use them, but only for dynamic statements that are created on the fly by program selection criteria. I am open to any suggestions.

        One last thing: I remember the IBM DB team member talking about running an SQL statement over a terabyte file, and the second time it ran in 5-10 minutes versus an hour or more the first time. I want some of that. Bill
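
        For reference, dynamic SQL with PREPARE looks roughly like this (a minimal embedded-SQL sketch; the statement name, cursor name, and host variable :stmt are illustrative, not from the thread):

          -- Declare a cursor over a statement name, then prepare the
          -- statement text from a host variable at run time:
          DECLARE C1 CURSOR FOR S1;
          PREPARE S1 FROM :stmt;
          OPEN C1;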



        • #5
          Optimization

          STRSQL? Embedded SQL? SQL Script GUI? RUNSQLSTM? STRQMQRY?



          • #6
            Optimization

            If you are accessing it the same way all the time, maybe you need to work on indexes to speed it up: standard indexes to satisfy the ordering (ORDER BY clause), and a mix of regular indexes and EVIs for your selections (WHERE clause). -dan
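
            For example (a minimal sketch; the table and columns are hypothetical, since the actual file is only described later in the thread):

              -- Standard (radix) index to satisfy ORDER BY STORE, SALEDATE:
              CREATE INDEX SALES_IX1 ON SALES (STORE, SALEDATE);

              -- Encoded vector index to help WHERE selections over a
              -- low-cardinality column such as REGION:
              CREATE ENCODED VECTOR INDEX SALES_EVI1 ON SALES (REGION);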



            • #7
              Optimization

              It is embedded SQL. This is the statement:

                DECLARE drexpo CURSOR FOR
                  SELECT * FROM drexpo
                  ORDER BY drloc, drdat

              We upload the file from our Unix server with audited store data for 1400+ stores. The file will grow as the month progresses. We used to have the target as a keyed file, and it took almost 1 hour to create the index after the file was loaded. The program reads the file in store/date order and calls processing programs based on the transaction type supplied. Towards the end of the month the file will grow to 14,000,000+ records. On a daily basis we get about 60,000 records that go through the same process. Daily the records get loaded to DREXPOC, weekly to DREXPOW. There is an OVRDBF that points the SQL statement to the proper file. On Friday we run 2 sets of daily data: one is a cleanup of anything missed during the week, and then we run day one (our week starts on Friday and ends on Thursday). I am not familiar with EVIs. Thanks, Bill



              • #8
                Optimization

                EVI: Encoded Vector Index. Another type of index that helps with the WHERE clause. You don't have a WHERE clause, so it won't help you. How long does it take the query to run, as opposed to the 1 hour it takes to create the index? Is that hour for the 60,000 records? That seems excessive. Maybe the hour you were referring to was the weekly upload? Was it indexed by drloc, drdat? How is the performance on your machine? How much free disk do you have? I have no definite ideas in mind, just asking random questions that might have some bearing. -dan



                • #9
                  Optimization

                  Set the SQLRPG compile option DLYPRP(*YES) to avoid redundant statement validation, and be sure to test SQLCODE after file open. Be careful with the allocated space of VARCHAR columns: for performance benefits, set the allocated space large enough to accommodate at least 90% of the data in that column. Code the SQL statement as a static statement (hard coded) rather than loading it from a variable; this allows the access plan to be saved within the program. These are all probably fairly minor enhancements.

                  As Daniel pointed out, the index is probably the greatest factor. Remember that having an index on a table will SLOW DOWN inserts into that table, while POSSIBLY speeding up selects against it. So, where is the bottleneck? If I understood you correctly, there used to be an index on the table which was slowing down the upload, so you removed it. Creating and deleting the index as needed is an option, but that will invalidate any access plan that references the index and require the plan to be recreated.

                  Did any of this help? Kevin
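
                  As a sketch of the VARCHAR advice above (the table, column, and sizes are hypothetical; DB2 for i accepts an ALLOCATE clause on VARCHAR columns):

                    -- DESCR can hold up to 200 bytes, but if 90% of rows fit
                    -- in 50 bytes, allocating 50 bytes in the fixed portion
                    -- of the row avoids overflow reads for most rows:
                    CREATE TABLE ITEMS (
                      ITEMNO INTEGER NOT NULL,
                      DESCR  VARCHAR(200) ALLOCATE(50)
                    );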



                  • #10
                    Optimization

                    I tried the compile option on another program and it did seem to speed things up significantly. I have a few options left: 1. Build a logical file and take the hit on loading it (the problem with this option is the way the file is loaded; until the indexing is finished it keeps the connection open with the source box). 2. Delay the processing job until after 5:00 pm. 3. Try processing the file in arrival sequence. I appreciate all the help. DASD utilization on our box is 61% of 316 GB. Total users at any one time are under 50.



                    • #11
                      Optimization

                      Try setting the access path maintenance to rebuild (CHGLF yourlf MAINT(*REBLD)); that should cause the index maintenance to be done when the index is read rather than every time a record is changed in the physical file.



                      • #12
                        Optimization

                        That would work in a "normal" environment, but this isn't one of them. The file gets overlaid every Friday and is never added to after that. Last Friday we loaded 18,880,402 records, and today we will load about 24,000,000; it's about an 11 GB file. It will get processed one time and then it waits until next Friday, when it happens again. I am going to eliminate the sequencing completely, since the original reason for doing that is no longer valid. That will speed things up tremendously. I want to thank each of you for your time, suggestions, and patience; I have learned some things that have helped. Bill Barnes The Pantry, Inc.



                        • #13
                          Optimization

                          So, is the file being transferred over and then processed, or is it being processed directly, a la embedded SQL with FETCH? A different approach might help: transfer the file with FTP, then use CPYFRMIMPF (if necessary) to get it into a PF, and process the PF as IP (input primary) without K (keyed access) in RPG. You'll be amazed at the speed you get when you let the system optimally cache PF access. Another thought: if you have a reference file you're chaining to for descriptions, try using SETOBJACC to force it into memory before the program starts. This is a great trick; no rewrite of the app is required to get the same benefit as loading the file into a table.



                          • #14
                            Optimization

                            OVRDBF points the SQL? Is this a multi-member file? Either way, the access plan will probably be rebuilt every time, or at least after every change.

                            DECLARE drexpo CURSOR FOR SELECT * FROM drexpo ORDER BY drloc, drdat reads the entire file unless you are filtering it some other way. Try adding a dummy WHERE clause, since the WHERE clause is what chooses the index: WHERE drloc > ' ' AND drdat > 0. The ORDER BY forces the sequencing, so you MUST have an index keyed by DRLOC and DRDAT, in that order (see the sketch below). An SQL table and SQL index (64K page size) are a better choice than a DDS file and DDS index (8K page size), both for SQL and for RPG; a DDS file also keeps deleted records as empty holes in the I/O blocks read.

                            Blocking data into your RPG program: you may also want to create a MODS in the RPG program and FETCH n records into it, where n is the number of records the formula 128K / record length gives you.

                            Run PRTSQLINF against your RPG program and take a look at the report; it may vary from system to system, and access plans are rebuilt based on a variety of issues. ALWCPYDTA should be *OPTIMIZE.

                            NOTE: Some IT people transfer Unix or Windows files to the IFS and then copy the data into a physical file. That is a waste of I/O and time; have them FTP straight into the work physical file. CPYFRMIMPF, if you can use it, will multi-thread the movement of the data.

                            Consider alternative approaches to this problem; there is probably a better solution or architecture.
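
                            A sketch of the index and dummy WHERE clause described above, using the file and columns from this thread (the index name, the block size of 100 rows, and the host structure :mods are illustrative):

                              -- SQL radix index keyed on DRLOC then DRDAT, matching the ORDER BY:
                              CREATE INDEX DREXPO_IX1 ON DREXPO (DRLOC, DRDAT);

                              -- Cursor with the dummy WHERE clause so the optimizer considers
                              -- the index for selection as well as ordering:
                              DECLARE drexpo CURSOR FOR
                                SELECT *
                                  FROM drexpo
                                 WHERE drloc > ' ' AND drdat > 0
                                 ORDER BY drloc, drdat;

                              -- Blocked FETCH into a host structure array (a MODS in RPG):
                              FETCH NEXT FROM drexpo FOR 100 ROWS INTO :mods;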



                            • #15
                              Optimization

                              I attended the SQL optimization lecture at COMMON and remember the statement that the third time an SQL statement is executed it will benefit from the optimizer. My question is: does the optimizer "forget" after the next IPL? Thanks, Bill

