The Truth About RPG Performance Coding Techniques

When it comes to performance tuning, make sure you concentrate your efforts where they'll make the most difference.

Brief: Most programmers have a portfolio of techniques they believe will make their code perform better. The tests presented in this article show that while most of the standard tricks have an effect, in many cases it is so small that it becomes unimportant. By studying the results presented in this article, you'll be able to make the correct trade-offs between performance and readability of your RPG/400 code. You'll also be able to concentrate your efforts where they'll have the most impact.

There are many alternatives for improving performance on the AS/400 and many are surrounded by rumor and conjecture. In this and upcoming articles in Midrange Computing, I'll explore alternatives that work and those that don't. I won't discuss any of the performance tools; they are after-the-fact solutions. Instead, we'll talk about what you should do when you are designing and coding.

This article concentrates on a group of questions about performance that I hear frequently. These questions all take the general format: Is it better to code this way or that way in RPG? In most cases, there is a very easy answer: it doesn't matter! Almost all the tricks we've learned for writing faster RPG code do make the code perform better. But the difference on the current generation of AS/400s is usually so small that you'd have to process millions of records before you'd really notice the effect.

When I show you some performance tests that I've run, you'll see that, in most of these cases, the coding techniques matter only to the programmer; no user will ever be able to measure the impact on performance. I ran the tests on my Model D02 AS/400. (Refer to the "Testing Methodology" sidebar for details about how the tests were run.) The D02 is not the slowest of models, but it's close. If you have an F95, your CPU is about 50 times faster than the machine that produced the numbers I'm going to show you.

Determining how many seconds your system is actually going to take is not the purpose of these tests. What I want to show you is the relative performance difference of one method versus another. Unless the tests show a significant difference, the choice between one method and another isn't worth worrying about. The raw test data is presented at the end of this article; the numbers correspond to the numbered information within the text. In most cases, I have used 50,000 iterations for each test, but I've made some exceptions, particularly in the array processing tests. The number of iterations is included for each test.

Some easily implemented techniques do have a significant impact on performance; I'll point them out so you can concentrate your resources where they'll make the most difference. I've grouped the results of my tests into three sections: No significant performance impact; Significant impact under certain circumstances; and Significant performance impact.

No Significant Performance Impact

1. Moving a small number of characters versus a large number.

If you read 50,000 records and perform a MOVE operation for every record, the differential between moving one character and 10 characters on my D02 is .3 seconds. On an F95, you would have to read 10 million records (not 50,000) and perform the move on each record before you generated a one-second differential. Moving a little bit of data doesn't cost very much; but as you can see from the test results, the difference between moving one character and 5,000 characters is worth considering.

Conclusion: No significant impact, even with millions of records.

2. Testing one character versus eight characters.

This test addresses the question: what does it cost to test for a one-character condition...

SWITCH IFEQ *ON

...versus an eight-byte switch?

FIELD1 IFEQ 'ABCDEFGH'

Testing the one-byte switch 50,000 times increased the CPU time of the base case by less than one-tenth of a second. Although the test for eight bytes can be measured (it adds 1.5 seconds over 50,000 records), the difference is so small that it almost doesn't matter. In general, the system is very efficient when testing and moving one-byte fields. But even movement of larger fields is relatively efficient. If you are only going to execute the operation 100 times a day, find something else to worry about.

Conclusion: Some impact, only when massive numbers of transactions are involved.

3. Rearranging date fields using MOVE exclusively versus using a data structure.

Rearranging a date format from MMDDYY into YYMMDD is typical of many applications. I compared two ways to do this: rearranging the components with a series of four MOVE and MOVEL instructions (Figure 1) and rearranging the fields using two MOVE instructions with data structure subfields (Figure 2). Another solution that some people use involves multiply or divide operations. This is generally a poor performer (see test 11).

For 50,000 iterations, the difference between the two methods was only .3 seconds with the data structure method holding the edge. I'd say "so what?" unless you intend to execute this function millions of times per day. Choose whichever method you find clearer. Data structures help reduce the total number of operations you need to perform; so if you are assembling multiple fields, they begin to pay off.

Conclusion: Data structures are slightly better, but choose the solution you find clearest.

4. MOVEs versus arithmetic operations.

One of the most pervasive performance beliefs is that there is a significant difference in using arithmetic versus moving data. In truth, there is some difference, but very little to shout about. On 50,000 transactions, the difference between MOVE and Z-ADD for a five-position field is only .3 seconds on my D02. RPG operates on packed-decimal data to do arithmetic, and simple add or subtract operations are very efficient on the AS/400. The answer is not the same for multiply or divide (see test 11).
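For reference, here is a minimal sketch of the two statements being compared (the field names are mine, not from the actual test programs):

  C                     MOVE NUM1      NUM2    50       Move method
  C                     Z-ADDNUM1      NUM2             Arithmetic

Either line places the five-digit value of NUM1 into NUM2; the timings show there is little to choose between them.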

Conclusion: Both methods are efficient.

Significant Under Certain Circumstances

The next group of tests shows some clear performance benefits to choosing one method over the other. Just be sure to weigh the benefits against the cost. CPU time is an inexpensive commodity compared to the cost of maintaining code that is not clear. So bear in mind the evidence I'm presenting, but don't forget to consider other factors in your program design.

5. In-line code versus subroutines.

Although RPG subroutines (EXSR) are a very good coding technique, my tests show that subroutines do have a price. Executing an empty subroutine (no calculations in the subroutine) 50,000 times requires about .7 seconds above the base-case processing time of 3.1 seconds. Executing a second empty subroutine takes an additional .7 seconds. The difference was more than I expected, but not enough to make me want to give up on subroutines. If you have something very simple to do and you need to do it many times, consider coding it in-line. I still like subroutines from a programming design viewpoint.
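To give you a feel for what was measured, the null-subroutine case amounts to something like this sketch (names are mine):

  C                     DO   50000                      Test loop
  C                     EXSR NULLSR                     Cost of EXSR
  C                     END
  C*
  C           NULLSR    BEGSR                           Empty subr
  C                     ENDSR

The .7 seconds is pure EXSR overhead, since the subroutine body is empty.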

Conclusion: Subroutines are not free.

6. Odd-length numeric fields versus even-length fields.

I'm sure you've all heard that odd-length decimal fields are better than even-length fields for packed-decimal arithmetic. The reason that even-length fields are less efficient is that the compiler generates instructions to zero out the high-order digit (the field is packed so there is an extra digit you don't need). In my tests, I found a difference of .7 seconds between the odd- and even-length fields over 50,000 iterations. In general, I would stick with odd-length fields when odd or even doesn't make any difference. If you need an even-length field for aesthetic reasons, it isn't a bad performer unless you have a lot of calculations.
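In coding terms, the difference is simply the length of the result field (a sketch; field names are mine):

  C                     ADD  1         CTR7    70       7 digits
  C                     ADD  1         CTR6    60       6 digits

The second statement pays for the extra instructions that clear the unused high-order digit.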

Conclusion: Use odd-length fields unless circumstances specifically call for an even-length field.

7. Packed versus zoned numeric data for arithmetic operations.

Since RPG performs arithmetic only on packed decimal fields, the compiler generates code to convert a zoned field to packed and then convert back to zoned after performing the operation. Compared to our previous tests, the cost is higher: 1.4 seconds to add a constant to a seven-digit field 50,000 times. I would try to stay away from zoned and binary fields when designing your database, but the performance impact would probably not be noticeable unless you have a lot of calculations.

Conclusion: Use packed decimal fields whenever possible.

8. Using a binary field versus a packed decimal field for an array index.

When you use a field name as an index to move from an array, RPG has to determine where you are moving from and ensure that you are not outside of the array limits. The system converts the index to a binary value and then performs the multiplication (length times index equals the start point); binary is often more efficient for internal functions. Because the internal array handling is done in binary, some people think they should keep the array index in binary form. In fact, the opposite is true. As with all other arithmetic operations, it is more efficient to define an array index as packed decimal. For 99,999 iterations of a loop which increments the index and moves an array element, a binary index consumed 12.1 additional CPU seconds. Because RPG deals externally in packed decimal, it converts every use of a binary field to decimal and then converts back. It doesn't have the smarts to handle the index without causing extra overhead.

Conclusion: Follow the standard packed data conclusion (test 7) for best results.

Significant Performance Impact

The next group of tests points out some coding techniques that can have a significant impact on performance. These results can help you to concentrate resources where they'll make the most difference. Several of the examples in this section deal with array processing. This is because an application which processes an array can easily execute millions of instructions. Bear in mind that we are considering performance benefits measured in seconds over thousands of records. If you're processing a few hundred records, you can probably safely ignore the implications of these tests. Likewise, you should not allow these considerations to override maintainability considerations.

9. Binary search versus LOKUP for large arrays.

When you have an array used for lookup, the normal function is to search through the array looking for a match. Unless the most frequently found items are first in the array, you look through half the array on the average. If the search argument does not exist in the array, you look through all the elements of the array. For arrays above 100 elements or applications with a high percentage of unsuccessful searches, this overhead can be reduced by using a binary search algorithm.

RPG has no built-in operation for the kind of binary search that would be useful in these circumstances. For my tests, I compared a binary search using the QUSRTOOL Binary Search (BINSEARCH) tool versus an RPG LOKUP operation. BINSEARCH is implemented with standard RPG/400 code that you copy into a program, thus avoiding the need to pass data to and from a separate search program.

For each test, there were 5,000 search operations against an ordered array (the binary search technique requires that the array be in sequence). No invalid search arguments were submitted, so the search was successful in each case. For 100 three-byte elements, the binary search produced only a .6-second performance improvement; but for 500 elements, the improvement increased to 17.7 seconds.

The binary search method performs better than LOKUP as soon as the number of elements reaches about 100. It would also make sense on a smaller number of elements if the element length was longer or you expected a high percentage of unequal lookups.
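If you'd rather see the idea than the tool, here is a minimal sketch of a binary search over a sorted 500-element array. This is my own illustration, not the QUSRTOOL BINSEARCH code, and the field names are mine:

  E* ARR must be loaded in ascending sequence
  E                    ARR       500  3
  C* Binary search of ARR for the 3-byte value in SRCH.
  C*   When the loop ends, FND = '1' means ARR,MID matched SRCH.
  C                     Z-ADD1         LO      40       Low bound
  C                     Z-ADD500       HI      40       High bound
  C                     MOVE '0'       FND     1        Not found
  C           FND       DOWEQ'0'                        Until found
  C           LO        ANDLEHI                          or exhausted
  C           LO        ADD  HI        MID     40       LO plus HI
  C                     DIV  2         MID              Midpoint
  C           SRCH      IFEQ ARR,MID                    Hit
  C                     MOVE '1'       FND              Found at MID
  C                     ELSE
  C           SRCH      IFLT ARR,MID                    Below midpoint
  C           MID       SUB  1         HI               Search lower half
  C                     ELSE
  C           MID       ADD  1         LO               Search upper half
  C                     END
  C                     END
  C                     END

Each pass eliminates half of the remaining elements, which is why the advantage over LOKUP grows so quickly with array size.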

Conclusion: Worth the effort for large arrays or if the number of unsuccessful searches is high.

10. Using data structures for array output versus output by array name.

When an array name (the entire array) is specified on the output specs, RPG loops to output each element rather than moving the entire array. Place the array in a data structure and output the data structure name for a significant performance improvement. This is particularly true for arrays that contain many elements.

The time for the data structure method doesn't change as long as the total array length is the same. In other words, a 10-element, 10-byte array would produce the same results as a 100-element, one-byte array.

A test that outputs a 10-element, 10-byte array 50,000 times shows a 3.5 second difference between the two methods. Change the array to a 100-element, one-byte array and the difference increases to 34.4 seconds (the data structure method shows no increase)! The cost to output a 100-element array using the output specs is very significant-use data structures if you are going to output arrays.
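The change is small: overlay the array with a data structure and name the data structure on the output specs instead of the array. A sketch (the names are mine):

  E                    ARR       100  1
  I* Overlay the array with a data structure
  IARRDS       DS
  I                                        1 100 ARR
  O* On the output specs, name ARRDS rather than ARR

Because ARRDS is a single 100-byte field, RPG outputs it with one move instead of looping through 100 elements.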

Conclusion: Use data structures to output arrays of any significant size.

11. Using multiply and divide operations.

For this test, 10,000 records were processed using three different sets of calculations with one instruction that varied: a Z-ADD, a MULT or a DIV operation. The Z-ADD test required 9.5 seconds; the DIV test required 29.2 seconds (a difference of 19.7 seconds); and the MULT test took 48.1 seconds (a difference of 38.6 seconds over Z-ADD).

Compared to other arithmetic operations, both multiply and divide are expensive instructions. This is partly due to the fact that multiply and divide are data-dependent in terms of how long they take to execute. The actual data, the length of the fields and the number of decimal positions all have an effect on how these operations perform. Unfortunately, most business applications cannot be designed without using multiplication or division; but keep these results in mind and stay away from multiply and divide if you can or use them infrequently.

Conclusion: Multiply and divide are very expensive instructions-stay away from them whenever you can.

12. Using MOVEA versus moving data one byte at a time.

This technique applies only in specialized circumstances; but when the conditions are right, it can have a significant effect on performance. My test compared using MOVEA to copy a 100-element array (one byte per element) to another array, versus moving data one byte at a time as shown in Figure 3. The difference on 5,000 iterations was 65 seconds. Array handling in RPG is very slow when you start moving one element at a time and you have lots of volume.
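The fast version collapses the inner loop of Figure 3 into a single operation:

  C                     MOVEAARY1      ARY2             Copy 100 bytes

One MOVEA moves all 100 elements at once, which is where the 65 seconds goes.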

Conclusion: Use MOVEA for multiple characters rather than single-element indexed MOVEs.

13. Using a character field versus an indexed array element.

This test measures the overhead required to access an indexed array element. The code moved either an array element indexed by a field (ARR,X) or a stand-alone alphanumeric field (FIELDA) into a result field. The difference was only 4.5 seconds for 99,999 iterations of this test, but it can add up rapidly when you deal with arrays.
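So if your code references the same element several times, pay the indexing cost once (a sketch):

  C* Reference the indexed element once...
  C                     MOVE ARR,X     FIELDA           Index once
  C* ...then use FIELDA in all the tests and moves that follow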

Conclusion: Consider moving an array element to a field before multiple calculations.

14. Using RPG's built-in scan functions versus a search using an array.

RPG/400 supplies the SCAN and CHEKR op codes to search character fields. These operations replace code you may have previously written using an array of one-byte elements to search for a specific character. The big difference between these op codes and a manual process is that SCAN provides consistent performance regardless of where the search argument is found.

For example, on 5,000 iterations, the SCAN test took 2.1 seconds to find the search argument in position 3; 2.3 seconds to find it in position 25; and 2.6 seconds to find it in position 75. By comparison, the array method actually performed better when the search was satisfied in position 3 (1.9 seconds), but deteriorated quickly (9.8 seconds for position 25 and 27.8 seconds for position 75).

Scanning more than a handful of bytes definitely favors the SCAN op code. SCAN is very good when you only want to scan for a single-byte value (e.g., an *). For an efficient scan that has more capabilities than the built-in function, see the "Better Array Processing" sidebar.

CHEKR produces results consistent with the SCAN model. I ran a simple test that searched for the last nonblank position in a 100-byte field; position 80 was the last nonblank in all cases. On 5,000 iterations, CHEKR was 7.6 seconds faster than scanning from the right by moving one byte at a time from an array and checking for a nonblank. CHEKR is a winner for determining the length of the data that exists in a field.
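For reference, the basic form of the two operations looks like this (a sketch; the field names are mine):

  C* Find the leftmost * in FIELD; POS = 0 if none is found
  C           '*'       SCAN FIELD     POS     30
  C* Find the position of the rightmost nonblank in FIELD
  C           ' '       CHEKRFIELD     LEN     30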

Conclusion: The SCAN and CHEKR operations are generally very good.

15. Comparison of five methods to load arrays.

In some applications, the program creates an array entry for each unique value found in the data. A corresponding array is used for some form of accumulation (for example, accumulate the month-to-date sales for each salesman number). To test this, I had a program read a database file that contained 10,000 records. A field in the record contained one of 175 unique values in random order. An array of 200 elements was created in the program and loaded with the values. A corresponding array counted how many times each value occurred. At the end of the program, the unique value array and the corresponding array were printed. I compared five different methods to load the two arrays:

o In the most straightforward approach, a LOKUP operation searches the entire array. If the target value is not found, the unique value is entered at the next available empty location. This loads the array from front to back.

o Some people like to load the array from the back end because it theoretically reduces the number of elements that have to be compared. I made this my second test case.

o I tried a third case where a random number was generated from the input field and used to directly access the array. The "hashing" algorithm I used performed a calculation with the input data which generated the theoretical array element number where data about this value is stored. If more than one value randomized to the same number, extra code was executed to test the next element until either the correct data or an available array element was found.

o A fourth version was also tried where the search argument itself was used as the random number (see the sketch following this list). This approach only works if your search arguments are all digits and do not have a wide range.

o A fifth version was tried using the code from the QUSRTOOL RPG Alternate Lookup (RPGALTLKP) tool. This allows any data (character or decimal) to be used as a random number. The tool provides a "hashing" algorithm that does not use multiply or divide. It allows a maximum of 2,000 unique values to be processed with multiple totals for each unique value. RPGALTLKP is a new tool which is in the August 25, 1993 update to QUSRTOOL. (To receive this update, ask your SE to send a note to QUSRTOOL at RCHASA04.)
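To illustrate the fourth case: if the field being summarized is, say, a three-digit salesman number, the number itself can be the element number (a sketch with my own names, not the actual test code):

  E* One counting element for each possible salesman number
  E                    CNT       999  7 0
  C* The 3-digit salesman number is itself the array index
  C                     Z-ADDSLSNO     IX      30       Direct index
  C                     ADD  1         CNT,IX           Count this hit

There is no search at all, which is why this method won; the price is an array as large as the range of possible values.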

The fastest results were attained by using the search value as the array index; but this technique does not work for all applications. The next fastest technique was the one that used the RPGALTLKP tool. The results of this technique will vary depending on your data and number of unique values.

The straightforward approach of loading from the front of the array was next best and was better than the back-to-front technique. Once the array is loaded to the point where most of the values exist, the amount of search time is roughly the same whether you are loading front-to-back or back-to-front. Therefore, the back-to-front method is more efficient only for the case where there are mostly unique values.

The solution that used a random number was fairly efficient. The only part of it that was slow was the generation of the random number which required a divide operation. If you have a different technique to generate a random number, this could be very effective as shown by the RPGALTLKP test.

In my test case, the data was mostly random. There was an average of 57 hits per value (10,000 / 175 = 57+). In most cases, data is not truly random (it tends to follow the 80-20 rule). If your popular items are in the first few elements of the array, they are found quickly. The inverse of this tends to happen if you load back-to-front. The program spends a lot of time comparing for seldom-used values. It would take an unusual situation for the back-to-front method to have a payoff.

Conclusion: The best results occur if you can use a decimal field in your data as the index (the fourth case). If you don't have such a field, consider the RPGALTLKP tool.

Blueprint for Performance

In these tests, we have looked at some simple RPG functions and the use of arrays. When you are dealing with simple RPG functions, you'll usually realize very little difference in performance by programming things one way or another. Unless you execute the same line of code millions of times per day, you would probably have to look with a microscope to determine the difference. In general, it is good practice to:

o Use odd-length decimal fields.

o Avoid zoned and binary decimal fields in RPG.

When you have a highly used function:

o Try to operate (move and compare) on one-byte fields.

o If it's trivial, put the code in-line and not in a subroutine.

o Be careful with operations that access indexed array elements.

The performance differences for simple RPG functions are all very small in comparison with the time taken for input/output. There is usually very little to be gained by trying to tighten up the RPG code. If you have a program that isn't performing well, you'd probably be wasting your time to try and make any changes for the simple RPG functions we've discussed here. Unless you execute the same code millions of times per day, you won't see any difference.

As always, there are exceptions. A major exception is multiply and divide. They don't perform very well. Stay away from them if you can or use them infrequently.

The other major exception is array processing. When you deal with arrays, you can easily execute millions of instructions by the use of lookup, searches, or moving data to and from arrays. You have to be careful when using arrays if you have a lot of volume.

For most functions, you should try to write for clarity, maintainability and productivity. If you want to improve performance, the places to concentrate your efforts are:

o When you cause the system to perform disk access.

o When you do a CALL or cause a system function (e.g., writing a subfile).

These have a lot more payoff than worrying about squeezing blood out of a few RPG instructions. In future articles, I'll discuss how you can really make a difference by controlling disk access and system functions. Look for the next installment in this series in the November issue of Midrange Computing.

Jim Sloan is president of Jim Sloan, Inc., a consulting company. Now a retired IBMer, Sloan was a software planner on the S/38 when it began as a piece of paper. He also worked on the planning and early releases of the AS/400. In addition to his software planning job, Jim wrote the TAA Tools that exist in QUSRTOOL. He has been a speaker at COMMON and IBM AS/400 Technical Conferences for many years.

 
 
Performance test results 
 
 
#1 
50,000 Iterations        CPU Seconds 
Base case (no move)          3.1 
Move 1 character             3.1+ 
Move 10 characters           3.4 
Move 100 characters          4.3 
Move 500 characters          8.8 
Move 5000 characters        46.0 
 
 
#2 
50,000 Iterations            CPU Seconds 
Base case                        3.1 
Testing a 1-byte switch          3.1+ 
Testing an 8-byte switch         4.6 
 
 
#3 
50,000 Iterations             CPU Seconds 
MOVE/MOVEL                        3.7 
Move to a data structure          3.4 
 
 
#4 
50,000 Iterations                  CPU Seconds 
Move 5 characters                      3.2 
Z-ADD zero to a 5-digit field          3.5 
 
 
#5 
50,000 Iterations          CPU Seconds 
No subroutine                  3.1 
One null subroutine            3.8 
Two null subroutines           4.5 
 
 
#6 
50,000 Iterations             CPU Seconds 
Add 1 to a 7-digit field          3.6 
Add 1 to a 6-digit field          4.3 
 
 
#7 
50,000 Iterations                    CPU Seconds 
Add 1 to a 7-digit packed field          3.6 
Add 1 to a 7-digit zoned field           5.0 
 
 
 
 
#8 
99,999 Iterations                    CPU Seconds 
Move using packed decimal index          12.7 
Move using binary index                  24.8 
 
 
#9 
5,000 Iterations                     CPU Seconds 
100 elements using normal LOKUP          10.2 
100 elements using binary search          9.6 
 
 
200 elements using normal LOKUP          11.8 
200 elements using binary search         10.5 
 
 
500 elements using normal LOKUP          29.3 
500 elements using binary search         11.6 
 
 
#10 
50,000 Iterations                         CPU Seconds 
10-element array using an array name          10.8 
10-element array using a data structure        7.3 
 
 
100-element array using an array name         41.7 
100-element array using a data structure       7.3 
 
 
#11 
10,000 Iterations           CPU Seconds 
Z-ADD                            9.5 
Divide                          29.2 
Multiply                        48.1 
 
 
#12 
5,000 Iterations          CPU Seconds 
MOVEA                          1.4 
Move one element at a time    66.4 
 
 
#13 
99,999 Iterations                           CPU Seconds 
MOVE                                             8.2 
Move using a field for the array index          12.7 
 
 
#14 
5,000 Iterations                     CPU Seconds 
SCAN with * in position 3                 2.1 
Search via an index into an array         1.9 
 
 
SCAN with * in position 25                2.3 
Search via an index into an array         9.8 
 
 
SCAN with * in position 75                2.6 
Search via an index into an array        27.8 
 
 
CHEKR                                     1.2 
Decrement and compare for nonblank        8.8 
 
 
#15 
10,000 Iterations                               CPU Seconds 
Load the array front to back                        28.2 
Load the array back to front                        37.5 
Load the array by generating a random number        31.9 
Load the array using the search value as the index  11.2 
Load the array using RPGALTLKP                      16.8 
 

SIDEBAR 1

Testing Methodology

I used a separate batch job for each test I ran on a Model D02 AS/400. No other system work was active as the tests were performed. I accessed the CPU time for each job with system job accounting. The QUSRTOOL Print Job Accounting (PRTJOBACG) tool generated a printout of the job accounting results.

Neither IBM nor Midrange Computing has verified or approved these numbers.

The basic test technique uses a series of RPG programs which perform a DO loop a fixed number of times. The base case is the DO loop without any instructions and requires 3.1 seconds for 50,000 iterations on my D02. This represents the overhead to crank up a program and perform the DO loop (it doesn't get any better than this). Each test is a variation of this process.
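In outline, each test program is nothing more than this (a sketch of the idea):

  C                     DO   50000                      Test loop
  C* (the one or two instructions being measured go here)
  C                     END
  C                     SETON                     LR    End of job

The difference between a test's CPU time and the 3.1-second base case is the cost of the instructions inside the loop.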

When testing the differences between odd- and even-length packed fields, between MOVE and Z-ADD, and among packed-decimal, binary and zoned-decimal formats, I used consistent field lengths and zero decimal positions.

The subroutine test performed a DO loop without any instructions (the base test), a DO loop with one EXSR instruction and a DO loop with two EXSR instructions. Each subroutine contained no instructions.

Except where specifically noted, each test was performed 50,000 times. This repetition makes it possible to see the effect of changing the instructions within the loop. Time was measured in elapsed CPU seconds to the nearest tenth of a second.

SIDEBAR 2

Better Array Processing

Overall, my test cases show that many of the traditional ideas about writing efficient RPG actually have very little real impact. Because of the number of operations required to process an array with many elements, greater caution must be used in coding routines that use arrays. A relatively simple process can quickly generate thousands of operations per transaction. Because any code that involves an array has the potential to impact performance significantly, I've provided some guidelines to help you make your array processing routines more efficient.

Find It Fast

In some cases, the use of SCAN or CHEKR doesn't make sense or would be too inefficient. You can use the following techniques (illustrated in Figure B1) to code a faster search:

o Extract multiple characters each time you go through the loop. Moving characters into or out of an array based on a field index is relatively slow. If you move multiple characters at one time to a data structure, you can have in-line code that tests for the search argument.

o Make the array at least one character larger than necessary and place the search argument in the extra position. This guarantees that every search succeeds, so the loop needs no separate test for running off the end of the array. If the search is successful, you must check to determine whether it is a real hit or the dummy value added at the end of the array.

o If you are going to compare the extracted value for more than a single character (e.g., both * and /), move the extracted value to a normal field. Use the field for your comparisons (this reduces the number of references to an indexed array). In my tests, I found that this rule was important but that increasing the target field beyond 20 characters did not significantly increase the impact.

The results of the tests I ran to verify the efficiency of these techniques are shown below.

Test results for the tight search loop.

 
5,000 Iterations                          CPU Seconds 
Normal array search found in position 3        1.9 
Fast array search technique                    1.6 
 
 
Normal array search found in position 25       9.8 
Fast array search technique                    5.0 
 
 
Normal array search found in position 75      27.8 
Fast array search technique                   13.1 
 
 
Using a data structure to improve scan performance. 
 
 
10,000 Iterations                  CPU Seconds 
Move 1 character from the array        83.2 
Move 5 characters from the array       48.7 
Move 10 characters from the array      31.6 
Move 20 characters from the array      23.4 
Move 30 characters from the array      21.5 


Figure 1 Rearranging a Date with MOVE/MOVEL

 
 
  ... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+...
  C                     MOVELDATE      MMDD    4
  C                     MOVE DATE      YY      2
  C                     MOVELYY        YYMMDD  6
  C                     MOVE MMDD      YYMMDD
  ... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+...


Figure 2 Rearranging a Date with a Data Structure

 
 
  ... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+...
  IYYMMDD      DS
  I                                        1   2 YY
  I                                        3   6 MMDD
  C                     MOVELDATE      MMDD
  C                     MOVE DATE      YY
  ... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+...


Figure 3 Moving Data One Byte at a Time

 
 
  ... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+...
  E                    ARY1      100  1
  E                    ARY2      100  1
  C                     DO   5000
  C                     DO   100       X       30
  C                     MOVE ARY1,X    ARY2,X
  C                     END
  C                     END
  ... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+...


Figure B1 Tight Search Loop

 
 
  ... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+...
  E* In this example, the array data is 100 bytes and 5 bytes
  E*      are moved from the array (Move length = 5)
  E* The array should be larger than the data by the -Move length-
  E                    AR1       105  1
  I* Make a data structure with one character subfields
  IDS          DS
  I                                        1   1 CH1
  I                                        2   2 CH2
  I                                        3   3 CH3
  I                                        4   4 CH4
  I                                        5   5 CH5
  C* Move the character you are looking for into the first position
  C*   after the actual scan area in the array
  C                     Z-ADD101       IX      30       Array + 1
  C                     MOVEA'*'       AR1,IX           Move an *
  .
  .
  C* Your point where you access the next set of data
  C           FILL      TAG                             Get nxt data
  .
  .
  C* Fill the array from your data
  C                     MOVEAXXXX      AR1,1            Move to arr
  C* Initialize the array index to allow add of -Move length-
  C                     Z-ADD-4        IX      30       Inlz index
  C           LOOP      TAG                             Loop point
  C* Add the -Move length- to the array index
  C                     ADD  5         IX               Bump index
  C* Move from the array to the data structure (5 bytes)
  C                     MOVEAAR1,IX    DS               Move frm arr
  C* Select for the character you want (* in this case)
  C*  If you were searching for more than one character,
  C*    you need more WHEQ statements.
  C                     SELEC                           Select
  C           CH1       WHEQ '*'                        1st byte
  C                     Z-ADDIX        FX      30       Found index
  C* If you want to move CH1 to a common field, do so here
  C           CH2       WHEQ '*'                        2nd byte
  C           1         ADD  IX        FX               Found index
  C           CH3       WHEQ '*'                        3rd byte
  C           2         ADD  IX        FX               Found index
  C           CH4       WHEQ '*'                        4th byte
  C           3         ADD  IX        FX               Found index
  C           CH5       WHEQ '*'                        5th byte
  C           4         ADD  IX        FX               Found index
  C                     OTHER                           Other char
  C* The character does not exist in the bytes moved
  C                     GOTO LOOP                       Loop back
  C                     END                             Select
  C* If the -Found index- has found the character you placed in
  C*   the array, the array is complete. Loop back.
  C           FX        CABEQ101       FILL             If spcl char
  C*
  C* The * was found at the position of the FX index
  C*
  C*                  Your processing
  C*
  C* Bump past the position where the character was found.
  C*   Reset the array index to allow for add of move length
  C*   Loop back
  C                     ADD  1         FOUND   30       Bump index
  C           FX        SUB  4         IX               Decrement
  C                     GOTO LOOP                       Loop back
  ... 1 ...+... 2 ...+... 3 ...+... 4 ...+... 5 ...+... 6 ...+... 7 ...+...