When I was first exposed to CPF (OS/400's predecessor on the System/38), I was struck by the elegance of its command shell. It was certainly an improvement over that available within my System/34's SSP environment and orders of magnitude better than the old RSTS/E interface on my DEC system.
It has been said many times that OS/400 is a programmer's dream, and I would have to agree with that. Having a command-prompting system that works both at the command line and from within an editor is "sweet." But I have to confess that, once I started working with the Bourne Again Shell (BASH), the default command processor supplied with most of the Linux distributions I've used, I started getting shell envy. There are some things that are a royal pain to do in OS/400 that are trivial exercises in BASH. For example, think about the steps you would take to perform some function over an indeterminate list of objects. You'd have to build the list, using either an OS/400 API or the OUTPUT(*OUTFILE) function of an OS/400 command, then iterate over that list to perform your function on each item. Within BASH, you can easily perform such a task with a "one-liner." We'll look at some examples that demonstrate this function later.
Fortunately, IBM has seen fit to bring some of the functionality of BASH to OS/400 via its QSHELL interface. And if you start to find yourself limited by QSH, you can always install PASE (available as a PRPQ on OS/400 V4R5 but included with V5R1 and later). Spending time in QSHELL (STRQSH or QSH) will allow you to be more comfortable working within a BASH shell and vice versa.
Throughout this article, I'll be referring to BASH, but much of what I say is applicable to QSHELL. Furthermore, wherever you see the word "Linux," you can substitute any of the Unix-like OSes.
A Little Design Philosophy
The biggest culture shock you'll receive when moving from OS/400 to Linux is how files are handled. In OS/400, everything tends toward fixed-length records. Even the green-screen interface is record-oriented--the system sends a record containing the screen format and application data to the terminal; it's displayed to the user for her perusal and possible update; then, when she hits Enter or a command key, a record is sent back to the application program for further processing.
On the other hand, Linux and all the other Unix-like OSes see files as streams of data. Besides the obvious objects that we would recognize as files, such as Java source programs or class files, Linux treats virtually all objects as files. This includes devices, such as the floppy drive or audio device. Such a design requires a little time to absorb, since it's so different from the rigid (not necessarily a bad thing) structure that OS/400 applies to its objects. For example, in Linux you can treat the floppy drive as a file-structured device with a File Allocation Table (FAT) or Virtual File Allocation Table (VFAT) format and use familiar commands to copy files to it. Or you can simply treat it as a piece of magnetic media that holds approximately 1.44 MB and issue a command such as cp myfile /dev/fd0, which would copy the file myfile to the diskette. Subsequent directory commands applied to that particular diskette would result in error messages, since the FAT structure would have been destroyed. But the command cp /dev/fd0 myfile would produce the expected results.
Linux will automatically create the following implicit files for each process within your job stream: stdin, stdout and stderr. Most standard Linux commands will, by default, obtain input from stdin, write their output to stdout, and dump error messages to stderr.
These facts would be undeniably underwhelming if it weren't for another design feature called "piping." Piping allows you to direct output (stdout) from one command into the input (stdin) of another. Thus, you can chain commands together to create your desired results. Piping isn't a facility that's unique to Linux. It's available in any of the Windows flavors, and if you care to do some archaeology within your technical library, you'll see that even DOS has the facility (although cruder in implementation).
More Design Philosophy
In a popular episode of M*A*S*H, Charles Emerson Winchester III was admonished to "hurry up" with his surgery. His reply: "I do one thing at a time. I do it very well. And then I move on." The GNU tools that are included with Linux were designed to do one thing at a time and do it very well. Each of these tools has a very specific purpose. Yet, when chained together via the pipe facility, they make an extremely flexible and powerful tool.
Let's look at a simple example using the file that contains user information on a Linux box. The file /etc/passwd contains, among other information, user names that we'd like to extract. Figure 1 shows the results of issuing the cat command on the file.
[klinebl@laptop1 current]$ cat /etc/passwd
vcsa:x:69:69:virtual console memory owner:/dev:/sbin/nologin
rpc:x:32:32:Portmapper RPC user:/:/sbin/nologin
xfs:x:43:43:X Font Server:/etc/X11/fs:/bin/false
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
Figure 1: The Linux user password file looks like this in its raw form.
Note that the fields are separated by the colon character (:). The cat command's purpose is to copy one or more files whose names have been passed as a parameter to stdout and nothing more. The results of this command are so far unremarkable because we're interested only in the user name, which is in the first field. So let's use the filter cut to extract the user name. Figure 2 shows the first results we obtained from cat, but this time we piped the output of cat to the input of the filter cut, the purpose of which is to "remove sections from each line of files" (from the manual page for cut). The parameter -f1 means field one, and the -d: parameter means that fields are separated by colons. The vertical bar character (|) is used to denote piping from one filter to another.
[klinebl@laptop1 current]$ cat /etc/passwd | cut -f1 -d:
vcsa
rpc
xfs
rpcuser
nfsnobody
Figure 2: Once filtered through cut, the output of our command string contains only user names but is not yet in sorted order.
What if you want the results in sorted order? No problem. We'll just pipe the output from cut to the input of another filter, sort. This filter's name is more intuitive; the manual page states: "sort lines of text files." Figure 3 shows the sorted results. You can see how to chain together simple programs, which do very simple and specific tasks, to create very powerful programs.
[klinebl@laptop1 current]$ cat /etc/passwd | cut -f1 -d: | sort
nfsnobody
rpc
rpcuser
vcsa
xfs
Figure 3: That's better! We now have a list of all of the users defined on our system, and in sorted order, too.
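Extending the pipeline is just as easy. As a quick sketch of my own (not one of the figures above), piping the sorted list through one more filter, wc with its -l flag, counts the users instead of listing them:

```shell
# Count the users defined on the system: wc -l reports the
# number of lines that reach it through the pipe.
cut -f1 -d: /etc/passwd | sort | wc -l
```

Note that here cut reads /etc/passwd directly rather than receiving it from cat; both forms work.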
Earlier, I mentioned that I would give an example of performing a task on an indeterminate list of items. Let's use our sorted list of user names and send each of them a greeting.
Figure 4 shows an example using the BASH for control structure, the basic form of which looks like this:
for name [ in word ] ; do list ; done
Here, name represents a variable, and [ in word ] represents a list of items. In Figure 4, the commands within the dollar sign/parentheses ($()) are interpreted first. Their output then replaces the commands within the statement, and the body of the for statement is executed once for each member of the list.
[klinebl@laptop1 current]$ for u in $(cat /etc/passwd | cut -f1 -d: | sort);do echo Hello, $u!;done
Hello, nfsnobody!
Hello, rpc!
Hello, rpcuser!
Hello, vcsa!
Hello, xfs!
Figure 4: This shows one method to make use of the list we created in Figure 3.
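As a variation on Figure 4 (my own sketch, not from the figures), the same list can be fed straight into BASH's while/read construct, which pulls one user name at a time from stdin:

```shell
# Pipe the sorted user list into a while loop; read takes
# one line from stdin per pass and stores it in the variable u.
cut -f1 -d: /etc/passwd | sort | while read u
do
    echo "Hello, $u!"
done
```

One caveat: because each stage of a pipeline runs in a subshell, any variables set inside this loop are gone once the loop ends.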
So far, I've only been discussing piping. Another feature, called redirection, allows you to change the definitions of stdin, stdout and stderr. A quick example can be shown by rewriting the command used to produce the results in Figure 3. By typing the code below, we eliminate the need to call cat and, instead, redirect stdin so that the contents of /etc/passwd are sent directly to stdin of the cut command.
cut -f1 -d: < /etc/passwd | sort
The operators for redirecting stdin and stdout are the less-than symbol (<) and the greater-than symbol (>), respectively. Purists will tell you that the latter example is more efficient than the former (and they would be right). But the choice is mostly a matter of taste, since current machines are so blazingly fast that the additional overhead of the first form will be imperceptible.
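Stderr can be redirected independently as well: the digit 2 names its file descriptor, so 2> sends error messages to a file of your choosing. Here's a minimal sketch (the file names good.txt and errors.txt are merely placeholders):

```shell
# List one file that exists and one that doesn't. The normal
# listing lands in good.txt; the complaint about the missing
# file lands in errors.txt instead of cluttering the screen.
ls /etc/passwd /no/such/file > good.txt 2> errors.txt
```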
Back at the QSHELL Prompt
At this point, you may be wondering how QSHELL would be useful in OS/400. I can explain that in an acronym: IFS (Integrated File System). As much as I love the OS/400 command line QCMD, it seems somewhat anachronistic when dealing with objects in the IFS. I'm so used to working at a BASH command line that most of its commands have been committed to muscle memory. Trying to maneuver around the IFS while in OS/400's shell, using commands like WRKLNK, is downright painful. By issuing the command QSH, I can be back in more familiar territory.
Once the QSH prompt is displayed, you can use the pwd command (print working directory) to see where you are. By default, you should find yourself in the root directory, denoted by the single slash character (/). But you can use the HOMEDIR('/new/home/directory') parameter of OS/400's CHGUSRPRF command to change the default directory for any user.
If you issue the command ls, you'll see the contents of the current directory. You can use the cd command (change directory) to move to other subdirectories, just as you would in DOS or Windows, except the forward slash (/) is used instead of the backslash character (\).
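A short session tying those three commands together might look like this (the home directory shown is hypothetical):

```shell
pwd        # show where you are, e.g. /home/barry
ls         # list the contents of the current directory
cd /tmp    # change to /tmp; note the forward slashes
pwd        # confirms the move: prints /tmp
```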
I've just rattled off three of the commands available to you in QSH. Unfortunately, BASH doesn't have the menu structure of OS/400 (like GO CMDWRK), so you won't be able to use menus to learn all of the commands. QSHELL also lacks BASH's apropos command, which searches the manual pages by keyword and provides some of the same discoverability for BASH commands that OS/400's menus give their users. Fortunately, IBM does have documentation for QSHELL (available here) that you can use to learn more about QSHELL and its commands and control structures.
Perhaps you still are skeptical about QSHELL because all of your development is in ILE/RPG and you don't find yourself dealing with the IFS very often. That may be true, but you can access your source members via the IFS. Just use this path:

/QSYS.LIB/library.LIB/sourcefile.FILE/member.MBR
(NOTE: The above should be all one line when used.)
You'll be able to use any of the QSH commands just as though the member were a stream file in the IFS. Thus, the member MYRPG in source file QRPGLESRC in library BARRY would be accessed this way:

/QSYS.LIB/BARRY.LIB/QRPGLESRC.FILE/MYRPG.MBR
You can test drive this capability by entering the command ls /your/path/to/member/here.
If you still aren't convinced that any of this is interesting, keep in mind that IBM now makes it possible to store RPG source code in the IFS so that you may avail yourself of the newest PC-based editors. Sooner or later, you will find yourself straying onto the IFS's turf--it's just a matter of time. So you may as well get a head start.
Jump into the Stream--The Water Is Fine!
I have given you only the briefest of introductions to BASH and haven't even begun to scratch the surface of the richness of its commands and filters. I do hope that it's enough to pique your interest. You can read more in the documentation for BASH, which is available at the Free Software Foundation's Web Site.
Those of you who are running Linux can immediately begin to experiment with BASH simply by opening a command shell. If you are unfortunate enough to be stuck using a different PC operating system, then all is not lost. There is a BASH environment available free that will load right up on your Windows box. If you point your browser to Red Hat's site, you'll be able to download a product titled Cygwin, which includes not only a port of the BASH shell to Windows but also most of the GNU tools. This will allow you to learn BASH, as well as all of those other tools and features, in preparation for your eventual (perhaps even inevitable) migration from Windows to Linux--or from the QCMD environment to the QSH environment.
And finally, I'd like to wish everyone a happy and safe holiday season! See you next year.
Barry L. Kline is a consultant and has been developing software on various DEC and IBM midrange platforms for over 20 years. Barry discovered Linux back in the days when it was necessary to download diskette images and source code from the Internet. Since then, he has installed Linux on hundreds of machines, where it functions as servers and workstations in iSeries and Windows networks. He recently co-authored the book Understanding Linux Web Hosting with Don Denoncourt. Barry can be reached at firstname.lastname@example.org.