
TechTip: Preventing Record Lock, Part 2


  • TechTip: Preventing Record Lock, Part 2

    ** This thread discusses the article: TechTip: Preventing Record Lock, Part 2 **

  • #2
    TechTip: Preventing Record Lock, Part 2

    Kevin, I have used the techniques discussed in Part I several times and find they work well, so I was really interested to see Part II. I understand the basics of what the example program is doing, but I don't understand how it's doing it. How does the "before update" DS R1Mast get loaded with the values from the chain, and how does SsMast get loaded with the screen values to compare the before and after? Thanks, Barnes



    • #3
      TechTip: Preventing Record Lock, Part 2

      The articles on preventing record locks are great and we can use them in new development. Does anyone have any ideas on automating a record lock trap to prevent operator intervention? For example, a user starts an Interactive program that locks a particular record. A Batch job then attempts to use the same record - a record lock error occurs requiring intervention. We would like to add a trap in the Batch process to check for a record lock, identify the offending user, send a break message asking the user to exit their function, wait 30 seconds, and check for the record lock again. If the user fails to release the record after a specified number of retries, we want to end the offending user's interactive job. Any help would be appreciated!
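      To be concrete, the kind of loop we are picturing on the batch side looks roughly like the RPG IV sketch below. The file, key, and job/user names are made up, and the SNDBRKMSG/DLYJOB/ENDJOB commands are simply run through QCMDEXC; identifying the lock holder (e.g., from the file/program feedback areas or DSPRCDLCK) is the part we have not solved.

        dcl-f OrdMast usage(*update) keyed;        // file the batch job must update (format ORDREC)

        dcl-pr qcmdexc extpgm('QCMDEXC');          // run a CL command from RPG
          cmd    char(3000) const;
          cmdLen packed(15:5) const;
        end-pr;

        dcl-c MAX_TRIES 5;
        dcl-s ordNo   packed(7:0);                 // key of the record this job needs
        dcl-s attempt int(10);

        for attempt = 1 to MAX_TRIES;
          chain(e) ordNo ORDREC;                   // try to read the record with a lock
          if not %error;
            leave;                                 // got the lock; carry on with the update
          endif;
          if %status(OrdMast) = 1218;              // status 01218 = record already locked
            // identify the lock holder here, ask them to get out,
            // wait 30 seconds, and try again
            qcmdexc('SNDBRKMSG MSG(''Please exit order maintenance'') ' +
                    'TOMSGQ(LOCKUSER)' : 3000);    // LOCKUSER = lock holder's workstation msgq
            qcmdexc('DLYJOB DLY(30)' : 3000);
          endif;
        endfor;

        if %error;                                 // still locked after all the retries
          qcmdexc('ENDJOB JOB(123456/LOCKUSER/QPADEV0001) OPTION(*IMMED)' : 3000);
        endif;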



      • #4
        TechTip: Preventing Record Lock, Part 2

        Great article! Many authors discussing programming techniques would gladly give up lots of things if the outcome is an easier-to-maintain program (for the NEXT person who has to figure it out). Isn't field checking too risky for long-term pgm maintenance? If OHB and DESCRIPTION are the only fields tested, isn't it too easy for the next pgmr to add a SIZE field to the maintenance screen and not know to add it to the comparison data structure?

        Looking at the bigger picture, and I understand that your examples were hypothetical, but record formats should NOT be hundreds of fields long! They should be specific and concise, with few fields. So checking the entire record ideally would be checking only a few fields. In the example, an entity which rarely changes (DESC) should not be in the same record format as a field which changes hourly (OHB). But alas, we are stuck with what was done before us, so many record formats are gigantic.

        When presenting an article like field checking for record locks, maybe a list of caveats would be in order, for example:
        1) If checking only certain fields for changes, be aware that you may miss some fields that should be checked.
        2) Down the road, additional fields will most likely be added to the maintenance screen. Maybe the next pgmr will not know that the new field MUST be added to the DS.
        3) If the pgmr forgets to add the fields to the DS, it would be a mind-bogglingly difficult debugging exercise, and users' data would intermittently be lost.
        4) Study the record structure: if some fields do not belong in this record for various reasons, consider moving them to a different record.
        5) When designing a new file, take into account the types of fields, how often they will be changed, where they belong, etc. Consider record locking implications in the record design.
        6) Some authors have promoted a "single fact record" concept. This would consist of a key and a single data field. Although computers are probably not there yet, it does have some interesting concepts for record locking. Some of those ideas may be relevant even for the larger record formats that we deal with.
        Thanks for the ideas!
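        P.S. To make the whole-record idea concrete, here is a small sketch (file, format, and field names are all invented) of comparing complete before/after images with LIKEREC data structures, so there is no hand-picked field list for the next pgmr to forget to extend:

          dcl-f ItemMast usage(*update) keyed;       // master file, record format ITMREC
          dcl-s itemNo char(10);                     // key of the record being maintained

          dcl-ds before  likerec(ITMREC : *input);   // image captured when first read (no lock)
          dcl-ds current likerec(ITMREC : *input);   // image re-read with a lock at update time

          chain(n) itemNo ITMREC before;             // read without a lock into "before"
          // ... EXFMT the maintenance screen here ...
          chain itemNo ITMREC current;               // re-read WITH a lock into "current"

          if current <> before;                      // ANY field was changed by someone else
            unlock ItemMast;                         // give up the lock, warn the user,
                                                     //   and redisplay the fresh values
          else;
            // move the screen values into "current", then:  update ITMREC current;
          endif;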



        • #5
          TechTip: Preventing Record Lock, Part 2

          One application I support has a different way of handling these situations. This system has a continuously running "Updater" process that handles all transactions. No transaction is finalized until it goes through this Updater. The effect is that interactive components can read data and submit changes. This avoids exclusive record locks on the master tables and only creates exclusive locks on temporary transaction tables, which are never in contention (each transaction in the temporary tables is, by definition, by and for one client/session at that moment in time). The Updater forces a serialization of transactions, so it never processes two at the same time. The Updater also looks for basic problems with the transaction and can reject a transaction because of these issues. The system design is such that a client may think they have performed an update, but the system has not accepted that update yet and may not accept it at all. In reality, though, because it's a global rule on this system, clients learn pretty quickly that it takes a few seconds for their changes to "stick."
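          In outline, the serializing loop looks something like the sketch below (greatly simplified, and every name is invented; the real Updater does far more validation). The point is that a single never-ending job drains a queue of submitted transactions one entry at a time, so the master file is only ever updated from one job.

            dcl-f ItemMast usage(*update) keyed;       // master file; only this job updates it

            dcl-pr qrcvdtaq extpgm('QRCVDTAQ');        // receive one entry from a data queue
              dtaqName char(10) const;
              dtaqLib  char(10) const;
              dataLen  packed(5:0);
              data     char(100);
              waitTime packed(5:0) const;
            end-pr;

            dcl-ds trn qualified;                      // layout of one submitted transaction
              itemNo char(10);
              newOhb packed(9:0);
            end-ds;

            dcl-s entry  char(100);
            dcl-s entLen packed(5:0);

            dow 1 = 1;                                 // the Updater runs forever
              qrcvdtaq('ITEMTRNQ' : 'APPLIB' : entLen : entry : -1);   // wait for the next trn
              trn = %subst(entry : 1 : %size(trn));
              chain trn.itemNo ITMREC;                 // no contention: nobody else locks ITMREC
              if %found(ItemMast);
                ohb = trn.newOhb;                      // apply the change (after validation)
                update ITMREC;
              else;
                // reject the transaction and notify the submitting client
              endif;
            enddo;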



          • #6
            TechTip: Preventing Record Lock, Part 2

            Mark, you can trap for record locks using a File Information Data Structure (INFDS) and check for the lock status code (01218). Assign an INFSR (File Exception/Error Subroutine) to the file to be watched. This must be coded to trap for the specific error. Be sure a catch-all is at the end for errors that you have not coded; this last function can execute DUMP, or any other process you want, and terminate.

            Within the INFSR you can process the information you get from the INFDS, which needs the device feedback information (the device-specific extension to the base feedback area). I believe the USERID can be found in there. If not, programs of this type need to use a PSDS (Program Status Data Structure), and the exception information should be in the error section. It's always in the same place for a given error. Retrieve the user locking the record from there. Now you can process the relative job info.

            Obviously, this involves some serious coding and testing, but once you have it, you can create a standard procedure to process the information. I did this years ago using RPG III and commands, so I know it can be done. It's just a matter of how badly you want to do it. My situation resolved a frequent problem for two interactive users: I just reported who was locking the file to the second interactive user on a display screen. They would pick up the phone and ask the other person to release the record.

            Many times operational procedures can solve these problems (scheduled batch updates which lock out interactive users). The situation dictates the solution. I really like the UPDATER concept mentioned in another reply, but for vendor-supplied code, scheduling may be your only answer short of modifying the vendor's code. You might consider wrapping the batch process with a driver that handles notification and/or job-level locking of the file to be updated.
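            For anyone who wants to try this in RPG IV, the declarations involved look roughly like the sketch below. The file name is made up and the subfield positions are from memory, so check the ILE RPG reference before relying on them.

              dcl-f ItemMast usage(*update) keyed
                             infds(fileFb)             // file information data structure
                             infsr(*PSSR);             // route file errors to *PSSR

              dcl-ds fileFb;                           // file feedback (INFDS)
                fbStatus zoned(5:0) pos(11);           // 01218 = record locked by another job
                fbOpcode char(6)    pos(16);           // opcode in use when the error hit
                fbMsgId  char(7)    pos(46);           // e.g. CPF5027 for a lock wait timeout
              end-ds;

              dcl-ds pgmFb psds;                       // program status data structure (PSDS)
                psMsgData char(80) pos(91);            // replacement data of the exception msg;
              end-ds;                                  //   the locking job/user/number sits at a
                                                       //   fixed offset in here for a given msgid

              begsr *PSSR;                             // catch-all file/program error routine
                if fbStatus = 1218;
                  // pull the lock holder's job/user out of psMsgData, report it,
                  // and decide whether to retry or give up
                else;
                  dump(a);                             // anything unplanned: dump and shut down
                  *inlr = *on;
                  return;
                endif;
              endsr;                                   // (a real INFSR would ENDSR to a return
                                                       //  point such as '*GETIN' or '*DETC')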



            • #7
              TechTip: Preventing Record Lock, Part 2

              I agree with all your points, Joel. Individual field testing does create a maintenance situation. Department discipline and code review prior to implementation are the only ways to handle this if field checking is to be done in the programs. With the recent releases of OS/400, SQL can be employed to put triggers and other processes directly into the database files at the field level. By defining the intelligence in the database, integrity can be ensured. This prevents programs from doing things that are not allowed, or the rules can be implemented in an update program which performs maintenance validation. This is relatively new to iSeries people but a real solution. IBM, in its push for SQL, does not make this available for DDS-defined files. Unfortunately, without rewrites of databases, most of us are stuck with making do with legacy tools.



              • #8
                TechTip: Preventing Record Lock, Part 2

                Brian, if the application is performing dirty reads, then what benefit does serialization offer? In the scenarios that come to my mind it would not be beneficial.

                Scenario: High volumes of isolated/independent transactions.
                Result: Serialization causes queuing and lengthens response times, possibly leading to time-outs and retry overheads.

                Scenario: High volumes of dependent transactions.
                Result: High error rate, forcing user retries.

                Dave



                • #9
                  TechTip: Preventing Record Lock, Part 2

                  I use a totally different approach that I sort of picked up in my DataFlex days, where the standard ENTRY macro worked this way. One drawback: it eats indicators, and in a large program you may just run out... On all modifiable fields, I use the CHANGE and MDT display attributes with the same indicator; thus 20 fields will cost you up to 20 indicators, depending on the number of key fields and whether or not you want to allow changes to the key.

                  Here are the steps:
                  1. MOVEA '0000000000' to *IN(xx) to clear the indicators (you may need another MOVEA, or a named constant, if you've got more than 10 modifiable fields).
                  2. CHAIN(N) to read the record and fill the display fields.
                  3. EXFMT to allow the user to modify data.
                  4. CHAIN to re-read and lock the record.
                  5. For each field, MOVE(L) the display field to the DB field, conditioned on its indicator xx.
                  6. UPDATE to update the record.

                  This way, only fields that were modified will be written back to the database; fields that were changed by another user while this user was editing the data remain unchanged (as long as this user didn't touch those same fields). There are no record locks while the user gets some coffee, nor is there a need for the user to redo the update in case another user made some modifications as well.

                  Hope to have explained this clearly enough...

                  Regards, René Hartman
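                  P.S. For anyone on RPG IV, the same flow in (roughly) free-format terms looks like the sketch below. The file, format, and field names are invented, and named indicators via INDDS stand in for the fixed-format MOVEA/MOVE(L) steps; the idea is identical.

                    dcl-f ItemMast usage(*update) keyed;        // master file, record format ITMREC
                    dcl-f ItemDspf workstn indds(dspInd);       // display file; CHANGE/MDT attributes
                                                                //   set indicators 21 and 22
                    dcl-ds dspInd len(99);                      // display-file indicators by name
                      descChanged ind pos(21);                  // user touched the description field
                      ohbChanged  ind pos(22);                  // user touched the on-hand field
                    end-ds;

                    dcl-s itemNo char(10);

                    descChanged = *off;                         // the MOVEA '00...' step
                    ohbChanged  = *off;

                    chain(n) itemNo ITMREC;                     // read WITHOUT a lock
                    S_DESC = desc;                              // fill the display fields
                    S_OHB  = ohb;

                    exfmt ITMSCR;                               // user edits; only fields actually
                                                                //   modified turn their indicator on
                    chain itemNo ITMREC;                        // re-read WITH a lock: fresh values

                    if descChanged;
                      desc = S_DESC;                            // copy back only what this user touched
                    endif;
                    if ohbChanged;
                      ohb = S_OHB;
                    endif;

                    update ITMREC;                              // other users' concurrent changes survive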

