
Copying Data Faster


** This thread discusses the article: Copying Data Faster **

#2

I'm just curious: why the need to have two underscores when prototyping memcpy? I prototyped it without the underscores and it works fine. Also, the documentation for the C runtime library functions doesn't mention the underscores.

Guust

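For illustration, a minimal sketch of what the two prototypes might look like in RPG IV (parameter names are invented; the article's exact code may differ). Both forms work; the double-underscore version binds to the MI builtin rather than the C runtime procedure, as explained in #6 below:

    D* C runtime procedure (no underscores); memcpy returns a pointer
    D memcpy          PR              *   ExtProc('memcpy')
    D  pTarget                        *   Value
    D  pSource                        *   Value
    D  nLength                      10U 0 Value

    D* MI builtin (two underscores); same parameter list
    D MemCpy          PR              *   ExtProc('__memcpy')
    D  pTarget                        *   Value
    D  pSource                        *   Value
    D  nLength                      10U 0 Value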


#3

I'd also like to see the IRP behind RPGLE. I haven't found a way in debug to see, or better yet to set, the values for %EOF, %FOUND, and the like. This would be invaluable for testing; for example, you could force an EOF condition to get out of a loop. The alternative is to go back to coding the indicator on the READxx or CHAIN and have the loop test the indicator instead of the built-in function. That seems like a step backwards. I suppose you could write it both ways and use conditional compilation, but that seems like a lot of extra work for something that should be simple to do.



#4

I've also found some anomalies in Eval versus traditional opcodes. My method was crude: just build some loops and capture some timestamps. But I figure if you do enough iterations, the loop overhead becomes less significant. In most cases the differences are small, but if you do enough operations, it will make a difference in the long run, right? My results (times in microseconds per calculation):

  • X Add Y Z is faster than Z = X + Y (.24 vs. .29)
  • Add Y Z is marginally faster than Z += Y (.22 vs. .24)
  • X Mult Y Z is marginally faster than Z = X * Y (1.25 vs. 1.30)
  • Mult Y Z is the same as Z *= Y (1.33)
  • Sqrt Y Z is far slower than Z = Y ** 0.5 (290 vs. 25)

I'm thinking that maybe some of the difference is in how Eval handles overflow (i.e., it doesn't), but without access to the IRP, I can't be sure. In most cases the differences won't be noticeable, unless your job is doing a few million passes through the operation. The only one where it will really make a difference is the square root; Eval is clearly using a different, far more efficient algorithm.

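A hedged sketch of that kind of timing loop in RPG IV (the iteration count and field names are invented, and a real measurement would also need to account for clock resolution and other activity on the box):

    D iters           C                   10000000
    D i               S             10I 0
    D x               S             15P 5 inz(3)
    D y               S             15P 5 inz(4)
    D z               S             15P 5
    D t0              S               Z
    D t1              S               Z
    D usecPerOp       S             15P 5

     /free
       t0 = %timestamp();
       for i = 1 to iters;
         z = x + y;               // the operation under test
       endfor;
       t1 = %timestamp();
       // in RPG duration codes, *MSECONDS means microseconds
       usecPerOp = %diff(t1: t0: *MSECONDS) / iters;
     /end-free

Running the same harness with an empty loop body would give the loop overhead to subtract out.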


#5

> In most cases, the differences won't be noticeable,
> unless your job is doing a few million passes
> through the operation.

As someone who does write code that processes millions of records at a pop, I can tell you that I have never had to change operation codes to optimise the run time (telephone billing). CPU wasn't a bottleneck even on the S/38. I/O is the big one, by orders of magnitude. Benchmarks like this are fun and informative, but why go to the effort of fooling with individual operation codes to shave 1 second off the run time when a change to the access path can result in shaving off hours?

--buck



#6

memcpy is a C runtime function, which in turn uses the __memcpy MI builtin. Using __memcpy directly avoids the procedure call to memcpy. This can be somewhat deduced by looking at QSYSINC/H.STRING, where memcpy() is replaced by the __memcpy() builtin in some situations.

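Using the prototypes sketched under #2, a call might look like this (field names invented); the call syntax is identical either way, and only the prototype determines whether the binder can use the builtin:

    D source          S            100A   inz('some data')
    D target          S            100A

     /free
       // copy 100 bytes; the pointer return value is simply ignored
       MemCpy(%addr(target): %addr(source): %size(source));
     /end-free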


#7

You're right: when you're not processing lots of data, a few milliseconds here and there don't matter. I worked with a book distributor recently, and when you have to process hundreds of millions of records, every little bit counts. The fact that some compiler developers have said "don't worry about the EVAL or %SUBST or similar functions, your bottleneck is in the I/O" is just too disappointing a response. By changing around the structure and switching to CPYBYTES for one application, I reduced it from several hours to about 40 minutes. And that was on an iSeries 820 with 2 CPUs.



#8

I created a test RPG program to test this. I prototyped the procedure as outlined in the article and created the module. What is the service program that I need to bind to create the program? Sorry if this question sounds silly. Thanks for your help.

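For what it's worth, a hedged sketch of the usual approach (program and library names are invented): C runtime exports such as memcpy are typically resolved through the QC2LE binding directory rather than by naming a service program yourself:

     * Pull the C runtime exports in via the QC2LE binding directory
    H BNDDIR('QC2LE')

With that H-spec in the source, CRTBNDRPG PGM(MYLIB/MYPGM) should resolve memcpy without an explicit BNDSRVPGM entry. (The __memcpy builtin form is, I believe, handled by the translator/binder itself.)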


#9

Bob: You wrote: "By changing around the structure and switching to CPYBYTES for one application I reduced it from several hours to about 40 minutes." I think you're trying to pull the wool over our eyes. Riddle me this: how much of that improvement was due to CPYBYTES, and how much was due to the restructuring? No, Bob. Shaving a few microseconds off an assignment statement is not what earns the bucks these days. What matters is shaving orders of magnitude off the software development process. If I can develop and deploy my applications more quickly, that's more important than the runtime speed. The reality for many companies is that every day late in app deployment means money lost.

w



#10

> If I can develop and deploy my applications more quickly,
> that's more important than the runtime speed.

Business of course wants both, but it does them no good if the results aren't available when they need them. One of the main problems at my last job was processes running too slowly to finish in time to meet schedules for delivering results in any number of ways, and ultimately for the business to be able to sign on in the morning and work. Most of these processes were on new platforms, not the AS/400, and were SQL, not RPG. The AS/400 and RPG just worked, and worked fast. But IT managers spent full time trying to switch to new processes such as data warehouses on other platforms. Their metric was whether the processes finished on schedule. Delays shut down the business. Quick and dirty is quick and dirty development, and slow and dirty execution.

rd



#11

Many excellent points there, cdr. And of course you are right. My rule of thumb is that if you are in a debugger for over an hour, you don't understand the problem, and sitting there walking through opcodes isn't going to explain it to you. Understanding the code is everything.

We had a critical process that was taking longer than the window allowed to meet other schedules. I took a look at it and saw a call to a program in a loop to get a configuration option flag. That was bad enough, being in a loop instead of in the init routine, but then I took a look at the program being called: a full-blown green-screen configuration maintenance program with a parm for running in batch and retrieving the specified configuration value. In this case it was called in a loop hundreds of thousands of times, but it had been provided by the vendor as the standard for working with configuration options, so the programmer was trying to use vendor standards. I could have replaced it with a chain to the configuration file using the same parms passed to the program where it did the chain, but I called a smaller specialized configuration retrieval program to maintain the insulation of logic. And moved it to init. That shaved 15 minutes off a two-and-a-half-hour process and let us meet the schedule.

rd

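A hedged sketch of that kind of fix (the program, file, and field names here are all hypothetical): hoist the invariant configuration call out of the per-record loop into initialization, so the loop no longer pays a program-call per record:

    FBILLING   IF   E             DISK

    D* Hypothetical specialized retrieval program: key in, value out
    D GetCfgVal       PR                  ExtPgm('CFGRTV')
    D  cfgKey                       10A   Const
    D  cfgValue                      1A

    D cfgFlag         S              1A

     /free
       // init: one program call, instead of one per record
       GetCfgVal('RERATEFLG': cfgFlag);

       read BILLING;
       dow not %eof(BILLING);
         // ... use cfgFlag in the record logic ...
         read BILLING;
       enddo;

       *inlr = *on;
     /end-free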


#12

> My rule of thumb is that if you are in a debugger for over an hour,
> you don't understand the problem and sitting there walking through
> opcodes isn't going to explain it to you.

This is a fair statement, Ralph. However, in my experience the best programmers are the best debuggers. All programming is an act of translation: translating the concepts in your head (or on a design specification) into language the computer understands. No matter how good a programmer you are, there will be mistakes in the translation, and that is where debugging comes in. And you simply can't debug the code if you don't know how it works.

This is why generated code is so scary. It works fine if it is used EXACTLY as intended, but since the guy who wrote the original generator can't know ahead of time what the code will be used for, it's likely that at some point it will be used in a way other than it was intended. At that point the code will fail, and unless you have excellent debugging skills, you will find yourself in a very difficult situation. Tools help good programmers become more productive; they cannot turn non-programmers into programmers.

Joe



#13

I agree about the good programmers being good debuggers, Joe, or quite frankly not needing to debug that much at all. But the generated-code thing confuses me, at least if a CASE tool is generating the code. Isn't there usually a debugger at the generation level, that is, at the CASE tool code level, so the programmers using it aren't *supposed* to have to see the generated code?

However, some of my fondest memories are of going through AS/SET-generated RPG and trying to figure out how to manipulate AS/SET to create RPG that didn't have to be modified afterwards. I succeeded, but it was a black art at best. I still didn't go through a debugger for that either; I just looked at the inscrutable generated source. But I saw rookies who couldn't catch on to subfiles generate subfile programs with AS/SET, and so it did enable lesser-skilled programmers to productively create programs. They were limited to debugging in the CASE tool code they wrote, however. They were lost if they had to drop into the generated code, as you point out.

rd



#14

> quite frankly not needing to debug that much at all.

No, you miss my point. ANYBODY can write code if there are no problems. Granted, it takes one heck of a stud programmer to hand-write programs with no bugs, but my point is that it's not the code that works that gives you problems, it's the code that doesn't work. The reasons why an excellent programmer has bugs are manifold: the code may not work because you don't understand a new opcode, it may not work because the opcode works differently now (this is the Open Source bugaboo), or it may not work because you simply switched two letters. There are dozens of reasons, but the best debuggers know how to quickly isolate and identify the problem.

That's why code generators are so scary. I'm not sure what code generators you're using these days, Ralph, but I'm pretty sure none of the WDSC wizards have "source level" debugging, nor do any of the open source tools that generate things like SOAP wrappers from WSDL files. Check through the list of code generation tools in your favorite iSeries mag and see how many of them even have "source". Many of them are drag-n-drop paradigms with no debugging capabilities whatsoever.

> But I saw rookies who couldn't catch on to subfiles generate
> subfile programs with AS/SET, and so it did enable lesser skilled
> programmers to productively create programs.

What it did was allow unqualified individuals to hang on in a position and a career where they didn't belong. It simply prolonged the inevitable, wasting both their time and the company's, and when the axe fell, it always fell on them first. The six months they spent generating mediocre AS/SET programs could have been better spent exploring more suitable career goals. The last thing the industry needs is to enable semi-skilled button pushers to generate mediocre code. Such code often needs to be rewritten, sometimes at a higher cost than writing it from scratch correctly the first time. This is especially true for consulting firms; that's the point where the customer (quite rightly) demands to know what kind of crap they paid for in the first place.

Joe

