Practical RPG: Queuing, Part II: Keyed Data Queues


While using a queue can be as simple as write and read, sometimes you need a little extra, and that's where keyed data queues come in.

 

On the IBM i, it's easy to create a physical file that has no keys and write to and read from that file (it's a little harder in the non-IBM i SQL world, but it can be done). A simple data queue is like an unkeyed physical file: you add records to the file and then read them off in sequential order. You can jump around by relative record number, but we really don't use that technique a lot these days, at least not in production programs. Instead, we key our files and use those keys to access the data.

 

The Key to the Data

You can either key the physical file itself or add a logical view, which provides access to the physical file in the order defined in the logical. Even if you've keyed the physical, you may want to provide alternate access paths to the data by creating more views. For a DDS-defined logical view over a physical file, you create a key by specifying the fields that make up the key. In SQL, you do much the same thing by creating an INDEX over your table (remember, on the IBM i, tables and physical files are nearly identical) and specifying the key columns. You can do some additional magic using operations on the keys as well, such as taking substrings, changing the case, or converting to numeric.

 

With data queues, you don't have nearly so much flexibility. A non-keyed data queue has a single data element whose length you define when creating the queue. A keyed data queue is almost identical except it has a second element, the key. Like the data element, your only real option when defining the key is to define the length. You then specify the key as an alphanumeric value when you add an entry to the queue.

Creating a Keyed Data Queue

In my previous article, I talked about creating and deleting data queues. Deleting a keyed data queue is exactly the same as deleting a non-keyed data queue, so I won't repeat that part here. Creating a keyed data queue requires only a couple of minor changes.

 

Data queue . . . . . . . . . . .                Name
  Library  . . . . . . . . . . .   *CURLIB      Name, *CURLIB
Type . . . . . . . . . . . . . .   *STD         *STD, *DDM
Maximum entry length . . . . . .                1-64512
Force to auxiliary storage . . .   *NO          *NO, *YES
Sequence . . . . . . . . . . . .   *KEYED       *FIFO, *LIFO, *KEYED
Key length . . . . . . . . . . .                1-256
Include sender ID  . . . . . . .   *NO          *NO, *YES
Queue size:
  Maximum number of entries  . .   *MAX16MB     Number, *MAX16MB, *MAX2GB
  Initial number of entries  . .   16           Number
Automatic reclaim  . . . . . . .   *NO          *NO, *YES
Text 'description' . . . . . . .   *BLANK


Figure 1: After specifying *KEYED as the sequence, the CRTDTAQ command presents a few more parameters.

 

The name is the qualified name of the data queue. Leave *STD as the type, although I hope to delve into the entire concept of DDM data queues another day. In fact, as with the last article, I'm not going to go through all of the keywords in an effort to make you a data queue expert. Instead, I just want to cover the key parameters (no pun intended) that you'll need to use a keyed data queue, and they're pretty minimal. Make sure to specify *KEYED as the sequence, and then the maximum entry length is the length of the data element while the key length is the length of the key element.
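
To make this concrete, here's roughly what the create commands might look like for the two queues used later in this article, APPDATAQ (the keyed data queue) and APPREQQ (the request queue). The 256-byte entry length and 20-byte key length are placeholder values I've chosen, not requirements from the article:

CRTDTAQ DTAQ(APPLIB/APPDATAQ) MAXLEN(256) SEQ(*KEYED) KEYLEN(20) +
        TEXT('Keyed data queue holding transaction detail')

CRTDTAQ DTAQ(APPLIB/APPREQQ) MAXLEN(20) SEQ(*FIFO) +
        TEXT('Request queue carrying transaction IDs')

Either queue can be removed later with DLTDTAQ, exactly as described in the previous article.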

Programming for a Keyed Data Queue

Now that your keyed data queue is created, you can program for it. You'll have to use a slightly extended version of the APIs in order to take advantage of the keys. I've included the new prototypes here.

 

     D SendData        PR                  ExtPgm('QSNDDTAQ')
     D   Dtaqnam                     10a   const
     D   Dtaqlib                     10a   const
     D   Dtaqlen                      5p 0 const
     D   Data                              const like(myMessage)
     D   Keylen                       3p 0 options(*nopass) const
     D   Key                               options(*nopass) const like(myKey)

     D ReceiveData     PR                  ExtPgm('QRCVDTAQ')
     D   Dtaqnam                     10a   const
     D   Dtaqlib                     10a   const
     D   Dtaqlen                      5p 0
     D   Data                              like(myMessage) options(*varsize)
     D   WaitTime                     5p 0 const
     D   Keyorder                     2a   options(*nopass) const
     D   Keylen                       3p 0 options(*nopass) const
     D   Key                               options(*nopass) const like(myKey)
     D   Senderlen                    3p 0 options(*nopass) const
     D   Sender                       1a   options(*nopass) const

 

As in the last article, SendData and ReceiveData are the prototypes for QSNDDTAQ and QRCVDTAQ, the two APIs most commonly used with data queues. They're the same prototypes as in that article, just with a few additional parameters: SendData has two additional fields, Keylen and Key, while ReceiveData has five additional parameters, which I'll explain in a moment. But before I do that, let me give you a real-world scenario in which I would use a keyed data queue.

 

Data queues are all about sending and receiving data asynchronously—that is, the sending program can send a message even if the receiving program isn't ready. In fact, if the receiver gets busy, lots and lots of senders can keep adding messages, and they will simply be queued up for processing. The receiver then pops the messages off the queue one at a time and processes them as resources become available. This concept works perfectly as long as the data fits within a single message. The maximum entry length is about 64KB (64,512 bytes), so most transactions will fit. But unfortunately, not all of them will, especially when we're dealing with more complex transactions such as orders. Take a big order with a few hundred lines, each line a few hundred bytes, and you're well over the limit. So what do you do? Well, you could spread the transaction across multiple messages, but then you run into the issue of trying to tie all those messages together. And that's where the keyed data queue comes in!

 

Think about this problem: when two or more users are pumping single-message requests into a data queue, sequence and timing don't matter. The processing program pops the next request and processes it. But if the data spans multiple messages, then you can run into problems of interleaving; the queue may have a few records from one request, followed by one from a second, followed by another from the first again. Add more requests, and the situation gets worse.

 

You avoid this by using two queues, one keyed and one not keyed. First, you design a unique transaction ID, which could be as simple as the next number from a data area. The requester writes all the transaction data to the keyed data queue using the transaction ID as the key. Then and only then does it send the transaction ID in a message to the non-keyed data queue. The processor gets that message, parses out the key, and then uses that key to read the data.

 

Here's the code:

 

       // Send each piece of the transaction data to the keyed data queue
       Dow getNextMessage(myMessage);
         SendData('APPDATAQ': 'APPLIB': %size(myMessage): myMessage:
                  %size(myKey): myKey);
       Enddo;

       // Then, and only then, send the transaction ID to the request queue
       SendData('APPREQQ': 'APPLIB': %size(myKey): myKey);

 

 

This snippet assumes that you've already called the routine that fills myKey with the next transaction ID. The routine getNextMessage populates myMessage with the next piece of the transaction data and returns false when no more data exists. As long as there is data, each piece is sent to the keyed data queue (APPDATAQ) and getNextMessage is called again. Once all the data has been loaded onto the queue, you send the transaction ID to the unkeyed request queue (APPREQQ).
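
For completeness, here's a minimal sketch of the pieces that snippet takes for granted: the myMessage and myKey fields referenced by the prototypes, and a routine that produces the next transaction ID from a data area. The field lengths and the NEXTID data area are placeholders of my own, not values from the article:

     D myMessage       S            256a
     D myKey           S             20a
       // Hypothetical next-number data area (created as *DEC, length 9 0)
     D lastId          S              9p 0 dtaara('NEXTID')

       // Bump the counter under lock and use the zero-padded result
       // as the transaction ID, which becomes the data queue key
       in *lock lastId;
       lastId += 1;
       out lastId;
       myKey = %editc(lastId: 'X');

On the other side, the receiving program waits on the request queue and then drains the keyed data queue: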

 

       // Wait up to 60 seconds for a request (a transaction ID) to arrive
       ReceiveData('APPREQQ': 'APPLIB': lenReceived: yourKey: 60);
       If (lenReceived > 0);
         // Read every entry on the keyed queue for this transaction ID
         Dou lenReceived <= 0;
           ReceiveData('APPDATAQ': 'APPLIB': lenReceived: yourMessage: 0:
                       'EQ': %size(yourKey): yourKey: 0: ' ');
           If (lenReceived > 0);
             // Process data record
           Endif;
         Enddo;
       Endif;

 

The receiver sits on the request queue. When a request arrives, its content is assumed to be the key to the keyed data queue. The processor then reads all the records for that key from the data queue and processes them. No timeout is needed on those reads because the data was loaded ahead of time. The processing routine could handle the records individually as they are read, or it could save them all in an array or even a temporary data file. The point is that the processor can reliably get all the data for a transaction and process it together.
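
In a production environment this logic usually lives in a never-ending server job, so the receive sits inside an outer loop on the request queue. Here's a rough sketch of that shape; the Dow *on loop and the 60-second wait are simply one reasonable arrangement, not something prescribed by the article:

       // Hypothetical outer loop for a never-ending processor job
       Dow *on;
         // Wait up to 60 seconds for the next transaction ID to arrive
         ReceiveData('APPREQQ': 'APPLIB': lenReceived: yourKey: 60);
         If (lenReceived > 0);
           // Drain every entry queued under this transaction ID
           Dou lenReceived <= 0;
             ReceiveData('APPDATAQ': 'APPLIB': lenReceived: yourMessage: 0:
                         'EQ': %size(yourKey): yourKey: 0: ' ');
             If (lenReceived > 0);
               // Process data record
             Endif;
           Enddo;
         Endif;
       Enddo;

On a timeout the loop simply goes back and waits again, so the job keeps running until the operator ends it.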

 

And that's a good look at my favorite use of keyed data queues!

 

Joe Pluta

Joe Pluta is the founder and chief architect of Pluta Brothers Design, Inc. He has been extending the IBM midrange since the days of the IBM System/3. Joe uses WebSphere extensively, especially as the base for PSC/400, the only product that can move your legacy systems to the Web using simple green-screen commands. He has written several books, including Developing Web 2.0 Applications with EGL for IBM i, E-Deployment: The Fastest Path to the Web, Eclipse: Step by Step, and WDSC: Step by Step. Joe performs onsite mentoring and speaks at user groups around the country.


