Practical DB2: DB2 Services for OUTQ Management

DB2 continues to provide more services that allow us to more easily manage our IBM i infrastructure.

I've been doing a lot more infrastructure-related tasks lately. These are not naturally my forte, and I can often use a little help from my friends. For example, just recently I reached out to the mailing list for some insight on performance tuning. If you haven't heard of it, stop right now and click on the link above. Don't worry, I'll be here when you get back. And if you have heard of it, read on as I try to do my best to pay it forward with what I've learned about a couple of DB2 services: OUTPUT_QUEUE_INFO and OUTPUT_QUEUE_ENTRIES.

Cleanup on Aisle O!

Aisle O is the metaphorical location where all your output queues reside. Most likely, you've got output queues in several real locations (also known as libraries), but the beauty of the DB2 spool services is that they allow you to access them however you need to—by library or across your entire system.

If you're like me (I heard that—"old and cantankerous" indeed!), you've probably used a variety of techniques to manage the inevitable accumulation of spooled files on your system. For example, there are public domain utilities out there like DLTOLDSPLF that have been around forever. The problem is that some of them have indeed been around forever, and they have quirks, such as a four-digit spooled file number, which causes grief on a modern system when a job reaches 10,000 spooled files. Guess how I know…

Another one of my favorites is the DLTEXPSPLF command. If you're unfamiliar with the command, you may want to do a little Google searching. Who knows, maybe I'll write an updated article on it one of these days! But just so you know, the concept of this particular command is that you can assign "expiration dates" to spooled files. Typically, you would assign that expiration date at time of creation by assigning expiration parameters to the printer file, but you can also change the expiration date after the fact. Once you have an expiration date established, management is as easy as running a nightly job that runs DLTEXPSPLF to delete any expired spooled files.
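As a rough sketch of that two-step flow, driven from SQL via QSYS2.QCMDEXC: the example assumes a release whose printer files support the EXPDATE and DAYS parameters (IBM i 7.2 or later), and QSYSPRT is just a placeholder for your own printer file.

```sql
-- Sketch only: assumes IBM i 7.2+ printer file expiration support.
-- QSYSPRT is a placeholder printer file name.

-- At creation time: spooled files from this printer file expire after 30 days.
CALL QSYS2.QCMDEXC('OVRPRTF FILE(QSYSPRT) EXPDATE(*DAYS) DAYS(30) OVRSCOPE(*JOB)');

-- In the nightly cleanup job: delete anything past its expiration date.
CALL QSYS2.QCMDEXC('DLTEXPSPLF ASPDEV(*CURASP)');
```

The override approach keeps the expiration policy with the application; you could instead bake EXPDATE and DAYS into the printer file itself with CHGPRTF.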

For a lot more information on managing spooled files and job resources in general, you may want to review the IBM Redpaper on the subject, available here.

Enter the DB2 Services

The DB2 services allow me to manage my spooled files in a way that's pretty intuitive to me. As an example, I can run a query to review all my output queues, and based on some sort of threshold, I can then further inquire into the actual spooled files for that output queue. Or, alternatively, I might want to list all the spooled files for a group of users (like those crazy folks over in accounting and their desire to keep trial balance information from the previous decade). WRKSPLF is good for a single user, but the spooled file services let me slice and dice that query for any number of users. It's pretty amazing. Such power, though, does not come without a price. Specifically, if I run a query using a broad scope, I may need to wait a while for my results. The services need to step through a lot of data in order to provide me with those more ambitious queries.
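To make that "group of users" scenario concrete, here's a hedged sketch of such a query; the user names and the one-year cutoff are placeholders for illustration, not anything from a real system:

```sql
-- Hypothetical example: old spooled files owned by a group of users.
-- ACCTUSER1/ACCTUSER2 and the one-year cutoff are illustrative placeholders.
SELECT USER_NAME, OUTPUT_QUEUE_LIBRARY_NAME, OUTPUT_QUEUE_NAME,
       SPOOLED_FILE_NAME, CREATE_TIMESTAMP
  FROM QSYS2.OUTPUT_QUEUE_ENTRIES
  WHERE USER_NAME IN ('ACCTUSER1', 'ACCTUSER2')
    AND CREATE_TIMESTAMP < CURRENT TIMESTAMP - 1 YEAR
  ORDER BY CREATE_TIMESTAMP
```

Note the scope warning above: with no output queue in the WHERE clause, this walks every queue on the system, so expect it to take a while.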

Let's look at the specific services I'm talking about. As I mentioned at the beginning, the DB2 spool services have two basic components: OUTPUT_QUEUE_INFO and OUTPUT_QUEUE_ENTRIES. The OUTPUT_QUEUE_INFO service provides aggregate information about output queues, and I use it most often to select output queues that have a lot of entries. Another possible use is to identify output queues that should normally have no entries (or at most a few) because they are supposed to print immediately (think label printer). A list of output queues with unexpected entries could provide advance warning of a printer problem. The service is well-documented here and is pretty easy to use. It provides a wealth of information, including things I never even knew existed, such as the LDAP publishing status, a field that identifies whether the output queue is published via LDAP. Most often, though, I use it to find output queues with enough entries to make them a problem. Here's a simple one to find large queues:

SELECT OUTPUT_QUEUE_LIBRARY_NAME, OUTPUT_QUEUE_NAME, NUMBER_OF_FILES
  FROM QSYS2.OUTPUT_QUEUE_INFO
  WHERE NUMBER_OF_FILES > 1000

This query creates a list of all the output queues with more than 1000 spooled files. Add the order clause ORDER BY NUMBER_OF_FILES DESC, and the list is sorted by number of files, with the most populated queues first. Using this, I can identify problem output queues and then drill down using the OUTPUT_QUEUE_ENTRIES service.
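The drill-down step itself can be as simple as pointing the entries service at one of the queues that surfaced; QUSRSYS/BIGOUTQ below is a placeholder, not a real queue name:

```sql
-- Hypothetical drill-down: QUSRSYS/BIGOUTQ stands in for a queue found above.
SELECT SPOOLED_FILE_NAME, USER_NAME, CREATE_TIMESTAMP, STATUS
  FROM QSYS2.OUTPUT_QUEUE_ENTRIES
  WHERE OUTPUT_QUEUE_LIBRARY_NAME = 'QUSRSYS'
    AND OUTPUT_QUEUE_NAME = 'BIGOUTQ'
  ORDER BY CREATE_TIMESTAMP
```

Limiting the query to a single queue like this keeps the runtime reasonable even on a system with a large spooled file population.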

The OUTPUT_QUEUE_ENTRIES service is simple enough to use (see the documentation here), but you should be warned that it could run a while, especially if your query is exceedingly expansive. But as long as you're willing to live with that particular idiosyncrasy, the service can be very powerful. This, for example, is one of my favorite queries:

SELECT USER_NAME, COUNT(*) AS SPOOLED_FILE_COUNT,
       MAX(CREATE_TIMESTAMP) AS LAST_CREATED
  FROM QSYS2.OUTPUT_QUEUE_ENTRIES
  GROUP BY USER_NAME
  ORDER BY SPOOLED_FILE_COUNT DESC


This query gives you a list of users sorted by the number of spooled files each one owns and includes the timestamp when they last created one. When run on one of my systems, I found users who hadn't created a spooled file in over a year who had thousands of files on the system. Subsequent research showed that these were "historical" reports that they kept around "just in case." So we had not only all those extra spooled files, but also users on the system who had long since left the building but whose user profiles were still hanging around solely to keep those spooled files.
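That finding can be turned into a repeatable check. Here's a sketch that lists only the users whose newest spooled file is over a year old; the one-year cutoff is my own arbitrary threshold, so adjust it to taste:

```sql
-- Users who still own spooled files but haven't created one in over a year.
-- The one-year cutoff is an arbitrary example threshold.
SELECT USER_NAME, COUNT(*) AS SPOOLED_FILE_COUNT,
       MAX(CREATE_TIMESTAMP) AS LAST_CREATED
  FROM QSYS2.OUTPUT_QUEUE_ENTRIES
  GROUP BY USER_NAME
  HAVING MAX(CREATE_TIMESTAMP) < CURRENT TIMESTAMP - 1 YEAR
  ORDER BY SPOOLED_FILE_COUNT DESC
```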

Needless to say, these sorts of queries can be extremely useful when managing your spooled file population. A lot more issues need to be addressed, however, when it comes to spooled file management. For example, a big issue is the overall configuration and control of job logs. But hopefully this article introduces a couple of effective tools that can help you in this endeavor.