Looking for Data Queue advice.


  • #1

    Kevin, IBM probably used their approach so that they could maintain a session-level connection. If you use the all-users model, you may want to consider swapping user IDs so you know who is doing what. Also, with a single server servicing multiple jobs, you have to be careful that the response for one user does not go to another. This can work pretty well, but you really have to be careful.

    I created a server frame that provides a foundation for building servers. I did this at work, so I can't share the code, but I could share pointers; who knows, maybe some day I will create an open-source version on my own time. Some design points I included were a variable key definition and multiple-thread capability. That way we could bump up the number of servers assigned to a particular task in response to performance issues. If you are interested, I could post some of the prototypes, which might help you collect your thoughts.

    David Morris
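The multiple-server idea above can be sketched off-platform. This is a minimal Python simulation, not the actual server frame (which I can't share either) and not the real IBM i APIs; a real server would read a data queue with QRCVDTAQ rather than a `queue.Queue`, and all names here are hypothetical. The point it illustrates is the caution in the post: replies are keyed by user ID so one user's response cannot go to another, and the worker count can be bumped up in response to load.

```python
import queue
import threading

def worker(requests: "queue.Queue", replies: dict, lock: threading.Lock) -> None:
    """One server job: process requests until its *STOP sentinel arrives."""
    while True:
        user, payload = requests.get()
        if user == "*STOP":                  # one sentinel stops one worker
            break
        with lock:
            replies[user] = payload.upper()  # reply keyed by user ID (stand-in work)

def run_pool(n_workers: int, items) -> dict:
    """Start n_workers server jobs against one shared request queue."""
    requests: "queue.Queue" = queue.Queue()
    replies: dict = {}
    lock = threading.Lock()
    threads = [threading.Thread(target=worker, args=(requests, replies, lock))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for item in items:
        requests.put(item)
    for _ in threads:                        # one *STOP per worker
        requests.put(("*STOP", None))
    for t in threads:
        t.join()
    return replies
```

Raising `n_workers` is the "assign more servers to a task" knob; because replies are keyed, adding workers never mixes up whose answer is whose.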

  • #2

    I would go with the never-ending job. The main reason is that it gives you the option to isolate everything that can go wrong into one procedure. The question to ask is ‘What happens when something goes wrong?’ The user must not be aware of any problems or delays. The never-ending job would wait on a data queue; once a request is posted by a user, it can then perform whatever tasks are required for that transaction. If an error occurs, it can send an inquiry message and then re-post the entry to the data queue. The only logic in the user programs is posting to the data queue with the proper data for the batch procedure.

    We have such a procedure in place here. We have a purchase order system on one AS400 and three others with an Inventory system, which communicate back and forth with each other. Each system has one never-ending batch job. Submitting a job per request would probably be OK for a few requests, but will hundreds or thousands of requests cause a problem? And when something does go wrong, you will have many jobs backed up on the queue. Others will have valid arguments, but I would always argue for the never-ending process using a data queue. I believe IBM has examples of this also; they may call it a ‘batch machine’ or something like that.
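The never-ending-job loop described above can be sketched as a few lines of Python. This is an illustrative simulation only (the real job would use QRCVDTAQ to wait on an actual data queue, and the error path would send an inquiry message and wait for the operator's reply); the `*STOP` sentinel and `handler` are assumptions for the sketch. Note the error handling is all in one place, which is the post's main argument:

```python
import queue

def never_ending_job(dtaq: "queue.Queue", handler, log: list) -> None:
    """Wait on the queue; on an error, report it and re-post the entry.

    A real batch machine would send an inquiry message and keep running;
    this sketch stops after re-posting so the example terminates."""
    while True:
        entry = dtaq.get()
        if entry == "*STOP":              # sentinel for a normal end
            break
        try:
            log.append(handler(entry))
        except Exception as exc:
            log.append(f"error: {exc}")   # stand-in for the inquiry message
            dtaq.put(entry)               # re-post the failed entry for retry
            break
```

The user-side program stays trivial: it only does the equivalent of `dtaq.put(request)` and returns control to the user immediately.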



    • #3

      Anybody know how to delete data queue entries one by one? QCLRDTAQ is helpful, unless you have 10,000 queue entries ahead of the one that needs to go out ASAP.



      • #4

        > Anybody know how to delete data queue entries one by one? The QCLRDTAQ is helpful, unless you have 10,000 queue entries before the one that needs to go out ASAP.

        Uros, I believe the default data queue behavior is to automatically delete any entry that is read.

        Bill
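That destructive-read behavior is easy to demonstrate with a stand-in. This Python sketch simulates a no-wait receive against a plain FIFO (the real call is QRCVDTAQ; a keyed data queue would additionally let you receive a specific entry by key instead of draining the 10,000 ahead of it, but that is not modeled here):

```python
import queue

def receive(dtaq: "queue.Queue"):
    """Simulate a no-wait receive: success removes the entry from the queue."""
    try:
        return dtaq.get_nowait()
    except queue.Empty:
        return None   # nothing left on the queue
```

Each successful `receive` deletes what it returns, so reading entries one by one empties the queue without ever calling the equivalent of QCLRDTAQ.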



        • #5

          We have been tinkering with using data queues as the mechanism to send data to our "thinner" client programs. We are working on what will become a "model" program (one used as a baseline pattern for new programs), so we want to be sure to get it right. The example we found from IBM appears to submit a job to create and load the data queue for each request. What we've seen discussed on various forums is a "never-ending program" that services all users. It looks like we're seeing opposite extremes in these examples, and we believe the right answer for us lies somewhere in between. What I'm looking for is a recommendation of how to implement this process, and why it would be better than the other choices.

          Thanks for helping,
          Kevin Silbernagel
          Linn-Benton-Lincoln Education Service District
          Albany, Oregon USA



          • #6

            I have used data queues to develop a utility for a project I am working on and have come across a similar problem. What I ended up doing was to create a "keyed" data queue that held the transactions to process, and a controlling data queue that fed keys to the program. By doing this, the program processes all transactions for a given user, then goes on to the next one. The setup, in short, is as follows:

            1. A user logs into the process, acquires the next key from a data area, and bumps the key value by 1.
            2. The user's transactions are placed in the "keyed" data queue.
            3. When the user's entry is complete, an entry is added to the controlling data queue with the user's unique key (a 7-digit number in my case).
            4. The "never-ending program" reads the controlling data queue for the next key to process. When one is found, all transactions for that key are processed.

            This seems to work well for us. In our environment, the process is started and ended from a remote system using a DDM data queue setup. The value "*STOP" is sent to the data queue for the program to come to a normal end. If the user information is required, set the "Include sender ID" field to *YES when the data queue is created.

            Hope this is of some use, and good luck!
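The control-queue-plus-keyed-queue scheme above can be sketched as a small Python simulation. This is not the poster's actual utility: the keyed data queue is stood in for by a dict of per-key transaction lists, the control queue by a `queue.Queue`, and the processing step is a placeholder. Only the flow (read a key, drain all of that user's transactions, repeat until `*STOP`) is taken from the post:

```python
import queue

def batch_machine(control: "queue.Queue", keyed: dict, out: list) -> None:
    """Read user keys from the control queue and drain each key's entries."""
    while True:
        key = control.get()
        if key == "*STOP":                  # remote system sends *STOP to end
            break
        for txn in keyed.pop(key, []):      # all transactions for this user
            out.append((key, txn.upper()))  # stand-in for real processing
```

Because a key is only posted to the control queue once the user's batch is complete, the never-ending program always sees a user's transactions as one finished unit rather than interleaved with other users' work.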
