Reuse Deleted Records

Collapse
X
 
  • Filter
  • Time
  • Show
Clear All
new posts

  • #1
    Reuse Deleted Records

    On Thursday, July 16, 1998, 10:56 PM, William Najarro wrote: Can someone help? I have a file that has the parameter to REUSE DELETED RECORDS turned on, but the file is still growing. The only way that I can reduce the file size is by ending the program that is accessing the file and reorganizing the file member. One program writes a record to a member, and another program reads and deletes the record. Am I misunderstanding the function of this parameter, or is there something I'm missing?
    Almost sounds like a good opportunity for a DATA QUEUE....
    ...... Bob Hamilton TEXAS BUSINESS SYSTEMS 736 Pinehurst Richardson, Texas 75080
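Bob's data-queue suggestion sidesteps the deleted-record problem entirely, because queue entries are consumed as they are received rather than deleted from a file member. A minimal CL sketch, with a hypothetical queue name, library, and entry length:

```
/* Create a data queue to carry the transactions (the names  */
/* MYLIB/TRNQ and the MAXLEN value are hypothetical)         */
CRTDTAQ    DTAQ(MYLIB/TRNQ) MAXLEN(256) +
             TEXT('Transaction queue')

/* The writer program calls the QSNDDTAQ API to put an entry */
/* on the queue; the reader calls QRCVDTAQ, which removes    */
/* the entry as it is read. No deleted-record space is left  */
/* behind, so no reorganize is ever needed.                  */
```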

  • #2
    Reuse Deleted Records

    You're not misunderstanding. The "reuse deleted records" parameter was designed for times when you must tie up a file for long periods of time. IBM says not to use it on files that can be reorganized regularly, e.g., once a day. So what you're doing is proper.

    If I were you, I'd spend my time trying to figure out why the file is growing. That can only mean that the first program is adding records to the file faster than the second one is pulling them out, or that the first program is adding records that the second program never reads. In other words, it sounds like an application design problem.

    (In reply to William Najarro's question of Thursday, July 16, 1998, 10:56 PM, quoted above.)
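Where a daily maintenance window does exist, the regular reorganize Ted describes is a one-liner; a sketch, with hypothetical file and member names:

```
/* Compress out deleted records and recover the space.       */
/* RGZPFM needs an exclusive lock on the member, so the      */
/* writer/reader jobs must be ended first.                   */
RGZPFM     FILE(MYLIB/TRNFILE) MBR(TRNMBR)
```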



    • #3
      Reuse Deleted Records

      On Thursday, July 23, 1998, 06:09 AM, Ted Holt wrote: That can only mean that the first program is adding records to the file faster than the second one is pulling them out, or that the first program is adding records that the second program is never reading. In other words, it sounds like an application design problem.

      Ted/Sol: We have the same problem as William. The file is used as a "transaction file": when a transaction is sent to a remote system, a record is added to the file; when the response is received, the record is deleted. Following is an extract from DSPFD... I'll take any suggestions. The jobs accessing this file need to run 24x7, so RGZPFM is not an attractive option.
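The DSPFD extract itself did not survive in the archive, but the member-level figures being discussed (current records vs. deleted records) come from a command like this, with a hypothetical file name:

```
/* Show member-level detail, including the current record    */
/* count and the number of deleted records in the member     */
DSPFD      FILE(MYLIB/TRNFILE) TYPE(*MBR)
```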



      • #4
        Reuse Deleted Records

        On Thursday, July 23, 1998, 09:26 PM, Sol Solomon wrote: Please take note that REUSEDLT is not a one-for-one reuse ratio. There is another parameter for the percentage of deleted records allowed on the file.

        I didn't think the DLTPCT parameter had any effect on REUSEDLT; I thought they were unconnected. DLTPCT just makes sure that you don't get too many deleted records in the file. It's a way of getting a warning when it's time to reorganise the file. Anyone using REUSEDLT should be aware that the manual quite clearly says: "One hundred percent reuse of deleted record space is not guaranteed. A file full condition may be reached or the file may be extended even though deleted record space still exists in the file." (Taken from the DB2/400 Database Programming manual.)
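The two parameters under discussion sit side by side on CRTPF/CHGPF; a sketch, with hypothetical file name and threshold:

```
/* REUSEDLT(*YES): new writes may be placed into deleted-    */
/* record slots instead of always extending the member.      */
/* DLTPCT(50): if the member's deleted records exceed 50%    */
/* when the file is closed, a warning message is logged;     */
/* it does not trigger reuse or a reorganize by itself.      */
CHGPF      FILE(MYLIB/TRNFILE) REUSEDLT(*YES) DLTPCT(50)
```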



        • #5
          Reuse Deleted Records

          On Friday, July 24, 1998, 05:11 PM, Sol Solomon wrote: To minimize downtime, one procedure does the following: a duplicate object is created, records are copied over, the objects are renamed, and the process is restarted over the copied file. Total downtime is the amount of time for the rename. In our transaction file the only records not deleted are the ones in error. I don't know if this is the same for you.

          Sol, yes, this is the same for us: the "transaction" file records are only those transactions that did not complete successfully. I appreciate the suggestion on the rename; we do a similar thing (except it's via library redirection) at the end of the business day, so our downtime is less than 30 seconds. However, with your suggestion our (read: my) feeling is that the downtime is longer than a rename, because for full integrity I need to prevent other processes from updating the file while the copy is being performed and until the rename is completed. The PTF is scheduled to be applied in 2 weeks. Will let you know what the results are. David
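Sol's copy-and-rename procedure, sketched in CL (all object names are hypothetical; as David points out, updating jobs must be held off between the copy and the renames to preserve integrity):

```
/* 1. Duplicate the file object (structure only, no data)    */
CRTDUPOBJ  OBJ(TRNFILE) FROMLIB(MYLIB) OBJTYPE(*FILE) +
             NEWOBJ(TRNFILE2)
/* 2. Copy the surviving (error) records across              */
CPYF       FROMFILE(MYLIB/TRNFILE) TOFILE(MYLIB/TRNFILE2) +
             MBROPT(*REPLACE)
/* 3. Swap the objects; downtime is only these two renames   */
RNMOBJ     OBJ(MYLIB/TRNFILE) OBJTYPE(*FILE) NEWOBJ(TRNOLD)
RNMOBJ     OBJ(MYLIB/TRNFILE2) OBJTYPE(*FILE) NEWOBJ(TRNFILE)
```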



          • #6
            On a slight tangent from the above: be careful using the CHGPF command to change the REUSEDLT keyword to *YES during production time. Unlike most other CHGPF options, which take effect immediately once the file is free, this change can allocate the file for some time whilst the file is "adjusted" to use the algorithm IBM applies to this change. A file we had with 30-odd million records, 6 million of them deleted, took around 3 minutes to effect the change. The file was locked exclusively for the duration.
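Before issuing that CHGPF in production, one way to avoid stalling behind other jobs is to probe for the exclusive lock first; a sketch with a hypothetical file name:

```
/* Try for an exclusive lock without waiting; if another job */
/* holds the file, this fails immediately rather than        */
/* leaving the change hanging behind the allocation          */
ALCOBJ     OBJ((MYLIB/TRNFILE *FILE *EXCL)) WAIT(0)
DLCOBJ     OBJ((MYLIB/TRNFILE *FILE *EXCL))

/* Only then make the change (expect the lengthy adjustment  */
/* described above on a large member)                        */
CHGPF      FILE(MYLIB/TRNFILE) REUSEDLT(*YES)
```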
