
When to RGZPFM?


  • Guest replied

    Remember, if you have an n-way iSeries with SMP installed, enable it with CHGQRYA DEGREE(*MAX); this allocates all possible resources to the RGZPFM process. Please beware: this will consume multiple processors and perform significant I/O, so plan accordingly, but it will definitely speed up the RGZPFM process. Let me know how you get on! Regards, TheiSpecialist.com

  • joanzen replied

    Just wondering if anyone has an automated process for reorganizing physical files? For example, reorg when the % of deleted records is ??% of the 'live' records. I'm interested in pursuing something like this, but I'm not sure what the correct/most efficient calculation would be. Thanks.
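    The kind of threshold check joanzen is asking about can be sketched roughly as follows. This is a hypothetical illustration, not anyone's actual tool: the 10% cutoff is an assumed default (one later reply in this thread uses 10%), and on IBM i the record counts would come from DSPFD output or the QUSRMBRD API rather than hard-coded numbers.

    ```python
    # Hypothetical sketch: flag a file for reorg when deleted records
    # exceed some percentage of the live (non-deleted) records.
    # The 10% threshold is an assumption, not a rule from this thread.

    def needs_reorg(total_records: int, deleted_records: int,
                    threshold_pct: float = 10.0) -> bool:
        """True when deleted records exceed threshold_pct of live records."""
        live = total_records - deleted_records
        if live <= 0:  # file is nothing but deleted records: reorg if any exist
            return deleted_records > 0
        return (deleted_records / live) * 100.0 >= threshold_pct

    # Illustrative numbers only; real counts come from DSPFD / QUSRMBRD.
    print(needs_reorg(1_000_000, 150_000))   # 150k deleted vs 850k live -> True
    print(needs_reorg(65_000_000, 75_000))   # roughly 0.1% of live -> False
    ```

    The calculation itself is trivial; the judgment call is the threshold, which is what the rest of this thread debates.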

  • Guest replied

    Thanks, Thull, for the info. We do have TAATOOLS, and it sounds like your approach will work here.

  • thull@fmic.com replied

    Sloan also had a tool called RGZPF... The prompt is for library and % of deleted records in the file. This is a great tool, as programmers will continue to add files and forget to add them to the "hard coded" cleanup/purge programs. It also eliminates the "hard coding" of files that may not have many deleted records but may have 60+ logicals, where the RGZ will lock the file for many hours. In the case of not having enough time, I took the "Jim Sloan" DLTDEPLGL utility and modified it. Now it deletes the dependent logicals, does my RGZ, then reads the outfile again to (SBMJOB) re-create my logicals in a multi-thread JOBQ. This finishes much faster, as RGZPFM rebuilds the access paths in single-thread fashion.
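    The core idea here, dropping the dependent logicals, reorganizing, then re-creating the logicals concurrently, can be sketched in outline. This is only an illustration of the pattern: on IBM i the parallel work comes from SBMJOB into a multi-thread JOBQ, and a thread pool merely stands in for that. The file names and the rebuild stub are hypothetical, not from thull's actual utility.

    ```python
    # Sketch of the pattern: after RGZPFM on the bare physical file,
    # re-create the dependent logicals concurrently so the access paths
    # rebuild in parallel instead of one at a time. A thread pool stands
    # in for SBMJOB into a multi-thread JOBQ on IBM i.
    from concurrent.futures import ThreadPoolExecutor

    def recreate_logical(name: str) -> str:
        # placeholder for "re-create logical file <name> over the physical"
        return f"{name} rebuilt"

    logicals = ["ORDHSTL1", "ORDHSTL2", "ORDHSTL3"]  # hypothetical names

    # sequential rebuild time ~= sum of all rebuilds;
    # parallel rebuild time  ~= the slowest single rebuild
    with ThreadPoolExecutor(max_workers=len(logicals)) as pool:
        results = list(pool.map(recreate_logical, logicals))

    print(results)
    ```

    The payoff is the same either way it is scripted: total elapsed time drops from the sum of the index builds to roughly the longest single build.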

  • mkrogstad@pdmsteel.com replied

    We've incorporated a number of Jim's TAAtools into our production environment. His RGZLIB works very effectively and has a number of useful parameters. We run it once a month in a scheduled cleanup job. Old guys rule!

  • Guest replied

    The file needs to be journaled if you want to RGZPFM without holding an exclusive lock on the file. From what I understand, it moves undeleted records to the beginning of the file, filling in the deleted records, essentially using an add/delete/commit loop; hence the need for commitment control to ensure data integrity. Eventually all the deleted records are at the end of the file, and it needs exclusive control for a few seconds while it truncates them. It doesn't sound particularly efficient, but you can run it for some hours each day at low priority as a gradual process. I think it would be best to set the file to REUSEDLT(*YES) first, so you are not creating more deleted records.

  • Guest replied

    Point well taken, but having these files unavailable during production time (which includes weekends) is not an option. We do REUSE deleted records, and over time those deleted records will go away, but I was hoping to get storage back.

  • David Abramowitz replied

    Rather than worry about how long it will take to do the RGZPFM, one should take into consideration the amount of extra time it will take not to do it. Dave

  • Guest replied

    Yes, it does, and we are on V5R4, but when I tried to test that, it told me the option was not valid for the file I was trying to reorg, so I could not get it to work. That's another issue/thread I could start.

  • Guest replied

    I believe V5R4 allows a RGZPFM while the file is in use, if that helps you out. Chris

  • Guest replied

    Anyone have any ideas on how to estimate a RGZPFM? We have some really huge files (over 800 million records) where we plan on deleting about 30% of the records. To regain the space we need to do a RGZPFM, but I'm concerned about how long it will take, because these files are widely and frequently used. I know removing the logical file members will help, but then I need to calculate how long it will take to rebuild all the indexes. Of course model #, memory, CPU usage, and available storage all factor in... I am just looking for a simple formula I can apply to a TEST file and use as a "Guesstimate".
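    The simplest "guesstimate" of the kind asked for here is a linear scale-up from a timed test: reorganize a copy of a smaller TEST file, then multiply by the ratio of record counts. This is an assumed model, not a formula from IBM; index rebuilds, memory, and DASD contention make real runs non-linear, so treat the result as a ballpark. All numbers below are made up for illustration.

    ```python
    # Rough linear extrapolation: time a RGZPFM on a smaller test copy,
    # then scale by record count. Real runs are not perfectly linear
    # (index rebuilds, memory, disk arms), so this is a ballpark only.

    def estimate_rgzpfm_minutes(test_records: int, test_minutes: float,
                                target_records: int) -> float:
        """Scale the measured test time linearly to the target record count."""
        return test_minutes * (target_records / test_records)

    # e.g. a 2-million-record test copy reorganized in 5 minutes,
    # scaled to an 800-million-record production file:
    est = estimate_rgzpfm_minutes(2_000_000, 5.0, 800_000_000)
    print(f"estimated: {est:.0f} minutes (~{est/60:.1f} hours)")  # 2000 min, ~33.3 h
    ```

    Timing the test copy with the logicals removed, then timing one index rebuild separately, gives the two numbers to scale independently.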

  • Guest replied

    I think that reusing deleted records in a large file will take too much overhead and make the update process very slow. By running a RGZPFM each week, I keep my files pretty clean. Of course, I do a RGZPFM when I do a purge. Files that accumulate deleted records during the day are RGZPFM'd each night in the Night Procedure.

  • Guest replied

    We use an automated reorg process that we "try" to run once a month after month-end processing. Downtime for system maintenance is hard to come by. We reorganize files with a deleted-to-live-records percentage of 10% or more. Our transaction files can get up to around 5 to 6 million records between purges (monthly), which can put us around 20 to 30% deleted between reorgs.

  • joanzen replied

    Researching REUSEDLT is high on my list; I would love to implement it. The system I support is LEGACY in every sense of the word, and I'm just not sure at this point whether any processes out there would break if new records were not added at the end. We have a regularly scheduled (at the end of each billing cycle) reorg process. It runs for several hours, and many times it reorgs files that have relatively few deleted records. I guess I was just trying to see if there is some threshold past which a file is in need of a reorg. For example, one of our files currently has over 65 million records but only 75000 deleted records. Do I really need to spend the time required to reorg this file right now? Even if REUSEDLT is implemented, eventually a reorg may be needed. Is there a way for me to determine the optimum time? Maybe the answer to what I'm asking is "No". Thanks very much for the input.

  • wbois replied

    While we haven't fully automated our reorgs, we do reorganize the files we purge on a consistent basis. Just remember to detach your logical views before the physical file reorg and reattach them afterwards. This allows the indexes to build faster by using more QDBSRVR jobs; otherwise your indexes will build one at a time, which can take many hours over large files. We also take advantage of the REUSEDLT keyword on our physical files. Reusing deleted record space can eliminate the need to reorganize your heavily used transaction files; you only have to remove any FIFO or LIFO record ordering that may be used in the logical access paths. I have never really come upon an application that required FIFO/LIFO ordering of data for duplicate key processing, so we build all our files to reuse deleted record space.
