When to RGZPFM?


  • When to RGZPFM?

    Personally, I just know that reorganization using the KEYFILE(*FILE) parameter is a very good thing. I don't care what my file statistics are; I just make sure that there is a periodic reorganization. Occasionally, I will eyeball certain files to see what shape they are in. Overall, I know that if I reorganize on a regular basis, I won't have to worry about it too much. Dave

  • #2
    When to RGZPFM?

    In the past I have set up the month-end processing to do RGZPFM on several of the large, high-usage files. Usually I would set it up to do the RGZPFM after the backup. Also, if you have any sort of scheduled purging, then that is also a good time to do the RGZPFMs. The percentage of deleted records may not be the best indicator of when a reorg should be run. Sure, squeezing out the deleted records is important from a disk space standpoint. But if my memory serves correctly, there are other performance benefits to reorganizing the high-usage files on a regular basis. Can anyone confirm this? Jeff Olen, Olen Business Consulting, Inc. email: jmo@olen-inc.com phone: 760.703.5149 web: www.olen-inc.com
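    If you would rather hang that reorg on the job scheduler than bury it in the month-end stream, something along these lines should do it. Treat it as a sketch: the job name, library, and program are placeholders, not anything from this thread.
    Code
        /* RGZMONTH and MYLIB are placeholder names */
        ADDJOBSCDE JOB(RGZMONTH) CMD(CALL PGM(MYLIB/RGZMONTH)) +
                   FRQ(*MONTHLY) SCDDATE(*MONTHEND)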

    • #3
      When to RGZPFM?

      RGZPFM will do much more than physically remove deleted records. Using the KEYFILE(*FILE) parameter will physically put the file in key sequence. In many cases this can shave hours off of run times. Dave
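      In CL terms that is a one-line command; the library and file names below are placeholders. KEYFILE(*FILE) rewrites the member in the order of the physical file's own keyed access path, which is where the run-time savings on sequential jobs come from.
      Code
          RGZPFM FILE(MYLIB/MYFILE) KEYFILE(*FILE)  /* placeholder names */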

      • #4
        When to RGZPFM?

        Have you considered re-using deleted records by changing the file to REUSEDLT(*YES)? That largely negates the need for regular RGZPFM. Having said that, if you have a hugely volatile file with occasional massive numbers of inserts, followed later by massive numbers of deletes, you can end up with large numbers of deleted records and a RGZPFM may still be needed. Also, if your programs rely on adds always being at the end of the file, you probably don't want to re-use deleted records. Sam
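        Switching a file over is a single command; the names below are placeholders, and the file generally needs to be free of other locks while CHGPF runs.
        Code
            CHGPF FILE(MYLIB/MYFILE) REUSEDLT(*YES)  /* placeholder names */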

        • #5
          When to RGZPFM?

          I created this program several years ago and still use it once a week. The program is on the job scheduler and runs every Sunday.
          Code
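          A minimal sketch of what a weekly reorg driver like this might look like; the file name, library, and 10% threshold are assumptions for illustration, not the original listing.
          Code
              /* Reorganize one file when deleted records pass a threshold.   */
              /* MYLIB/MYFILE and the 10% threshold are placeholders.         */
              PGM
                  DCL        VAR(&NBRCUR) TYPE(*DEC) LEN(10 0)
                  DCL        VAR(&NBRDLT) TYPE(*DEC) LEN(10 0)

                  RTVMBRD    FILE(MYLIB/MYFILE) NBRCURRCD(&NBRCUR) +
                               NBRDLTRCD(&NBRDLT)

                  /* Reorg only when deleted records exceed 10% of current */
                  IF         COND((&NBRDLT * 10) *GT &NBRCUR) THEN(DO)
                      RGZPFM     FILE(MYLIB/MYFILE) KEYFILE(*FILE)
                  ENDDO
              ENDPGM
          A program like this could be attached to the scheduler with ADDJOBSCDE using FRQ(*WEEKLY) and SCDDAY(*SUN).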

          • #6
            When to RGZPFM?

            While we haven't fully automated our reorgs, we do reorganize the files that we purge on a consistent basis. Just remember to detach your logical views before the physical file reorg and reattach them afterwards. This allows the indexes to build faster by using more QDBSRVR jobs; otherwise your indexes will build one at a time, which can take many hours on large files. We also take advantage of the REUSEDLT keyword on our physical files. Reusing deleted record space can eliminate the need for reorganizing your heavily used transaction files. You only have to remove any FIFO or LIFO record ordering that may be used in the logical access paths. I have never really come upon an application that required FIFO/LIFO ordering of my data for any duplicate key processing, so we build all our files to reuse deleted record space.
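            "Detach and reattach" here usually means removing the logical file members before the reorg and adding them back afterwards; the access paths rebuild when the members are re-added, using the database server jobs mentioned above. A sketch with placeholder library, file, and member names:
            Code
                /* placeholder names throughout */
                RMVM   FILE(MYLIB/MYLF01) MBR(MYLF01)
                RGZPFM FILE(MYLIB/MYFILE) KEYFILE(*FILE)
                ADDLFM FILE(MYLIB/MYLF01) MBR(MYLF01) DTAMBRS(*ALL)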

            • #7
              When to RGZPFM?

              Researching REUSEDLT is high on my list. I would love to implement that. The system I support is LEGACY in every sense of the word, and I'm just not sure at this point if there are any processes out there that would break if new records were not added at the end. We have a regularly scheduled (at the end of each billing cycle) reorg process. It runs for several hours, and many times it reorgs files that have only relatively few deleted records. I guess I was just trying to see if there is some threshold past which a file is in need of a reorg. For example, one of our files currently has over 65 million records, but only 75,000 deleted records. Do I really need to spend the time required to reorg this file at this time? Even if REUSEDLT is implemented, eventually a reorg may be needed. Is there a way for me to determine the optimum time? Maybe the answer to what I'm asking is "No". Thanks very much for the input.
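              For what it's worth, 75,000 deleted records out of roughly 65 million works out to about 0.1% of the member, so by the 10% rule of thumb mentioned later in this thread that file is nowhere near needing a reorg on space grounds alone.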

              • #8
                When to RGZPFM?

                We use an automated reorg process that we "try" to run once a month after month-end processing. Downtime for system maintenance is hard to come by. We reorganize files with a deleted-to-live record percentage of 10% or more. Our transaction files can get up to around 5 to 6 million records between purges (monthly), which can put us at around 20 to 30% deleted between reorgs.

                • #9
                  When to RGZPFM?

                  I think that re-using deleted records in a large file will take too much overhead and make the update process very slow. By running a RGZPFM each week, I keep my files pretty clean. Of course, I do a RGZPFM when I do a purge. Files that get deleted records during the day are reorganized each night in the Night Procedure.

                  • #10
                    When to RGZPFM?

                    Anyone have any ideas on how to estimate how long a RGZPFM will take? We have some really huge files (over 800 million records) where we plan on deleting about 30% of the records. To regain the space, we need to do a RGZPFM, but I am concerned about how long it will take because these files are widely and frequently used. I know removing the logical file members will help, but then I need to work out how long it will take to rebuild all the indexes. Of course model, memory, CPU usage, and available storage all factor in; I am just looking for a simple formula I can apply to a TEST file and use as a "guesstimate".
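                    One rough way to get that guesstimate, assuming the reorg time scales more or less linearly with the number of rows: copy a known fraction of the file into a test library, time the reorg there, and multiply up. The library and file names and the 1% sample below are placeholders.
                    Code
                        /* PRODLIB/BIGFILE and TESTLIB are placeholder names */
                        CPYF FROMFILE(PRODLIB/BIGFILE) TOFILE(TESTLIB/BIGFILE) +
                             CRTFILE(*YES) FROMRCD(1) TORCD(8000000)
                        /* Time RGZPFM on the copy, then scale by 100 for 800 million rows.  */
                        /* Repeat with one logical file member attached to gauge index cost. */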

                    • #11
                      When to RGZPFM?

                      I believe V5R4 allows a RGZPFM while the file is in use, if that helps you out. Chris
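                      If memory serves, that is the ALWCANCEL/LOCK support on RGZPFM, along the lines of the command below (placeholder names; check the release documentation for the exact requirements). Note that the non-exclusive lock options require the file to be journaled, which comes up again later in the thread.
                      Code
                          /* placeholder names */
                          RGZPFM FILE(MYLIB/MYFILE) KEYFILE(*NONE) +
                                 ALWCANCEL(*YES) LOCK(*SHRUPD)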

                      • #12
                        When to RGZPFM?

                        Yes, it does, and we are on V5R4, but when I tried to test it, I was told that the option was not valid for the file I was trying to reorg, so I could not get it to work. That's another issue/thread I could start.

                        • #13
                          When to RGZPFM?

                          Rather than worry about how long it will take to do the RGZPFM, one should take into consideration the amount of extra time it will take not to do it. Dave

                          • #14
                            When to RGZPFM?

                            Point well taken, but having these files unavailable during production time (which includes weekends) is not an option. We do REUSE deleted records, and over time those deleted records will go away, but I was hoping to get the storage back.

                            • #15
                              When to RGZPFM?

                              The file needs to be journaled if you want to RGZPFM without holding an exclusive lock on it. From what I understand, it moves active records toward the beginning of the file to fill in the deleted-record slots, essentially using an add/delete/commit loop, hence the need for commitment control to ensure data integrity. Eventually all the deleted records are at the end of the file, and it needs exclusive control for a few seconds while it truncates them. It doesn't sound particularly efficient, but you can run it for a few hours each day at low priority as a gradual process. I think it would be best to set the file to REUSEDLT(*YES) first, so you are not creating more deleted records as you go.
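                              Put together, the setup for an in-use reorg would look roughly like this; the journal, receiver, library, and file names are placeholders, and the exact journaling requirements are worth confirming for your release.
                              Code
                                  /* placeholder names throughout */
                                  CRTJRNRCV JRNRCV(MYLIB/RGZRCV001)
                                  CRTJRN    JRN(MYLIB/RGZJRN) JRNRCV(MYLIB/RGZRCV001)
                                  STRJRNPF  FILE(MYLIB/MYFILE) JRN(MYLIB/RGZJRN) IMAGES(*BOTH)
                                  RGZPFM    FILE(MYLIB/MYFILE) ALWCANCEL(*YES) LOCK(*SHRUPD)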
