Speed and resources.

#1
Speed and resources.

We are still on a 730. Embedded SQL in interactive programs hits our machine performance hard. By the time you get done putting limitations on what it can do, it takes away any advantage of using it. The only time we use it is when we have dynamic data that the users need a wide variety of accesses to and LFs won't work. There was a message that was passed around our shop, I think attributed to Joe Pluta, that compared the two and made a pretty serious case that SQL gives worse performance. Maybe if he sees these posts he can respond.

#2
Speed and resources.

A couple of notes:
1. Print SQL Information (PRTSQLINF) can provide some info.
2. Check the system value QQRYTIMLMT, 'Query processing time limit'.
3. SELECT DISTINCT FLD1, FLD2, FLD3 FROM XFILE runs much slower than SELECT FLD1, FLD2, FLD3 FROM XFILE GROUP BY FLD1, FLD2, FLD3 when the file is big (millions of records), with or without indexes.
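Spelled out, the two statements in point 3 return the same result set (XFILE and the field names are just the example from above); one plausible reason for the difference is hedged in the comments:

    -- Slower on big files in my tests: DISTINCT may be implemented
    -- as a sort over the entire result set.
    SELECT DISTINCT FLD1, FLD2, FLD3
      FROM XFILE;

    -- Same rows, often much faster: GROUP BY with no aggregates can let
    -- the optimizer use a grouping method (e.g. hashing) instead of a
    -- full sort.
    SELECT FLD1, FLD2, FLD3
      FROM XFILE
     GROUP BY FLD1, FLD2, FLD3;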



#3
Speed and resources.

It's really not an RPG question, but rather a data access question. All AS/400 / iSeries queries call the QQQUERY API. That being said, there is a great deal more equality among the various methods than commonly thought. The notion of a performance lag may be based on the fact that a different compiler command is used to create programs with embedded SQL, and that may indeed be the case. One would have to compare apples with kumquats to find out if there is a concern.

I have only used embedded SQL for those solutions where I felt it was the best fit. I still use OPNQRYF extensively, and find that the command is powerful enough for nearly everything. The problem with comparing the performance of embedded SQL vs. OPNQRYF is that the applications behind them vary. It may be inappropriate to compare the two, because you would have to create an application environment that may favor one over the other.

Dave



#4
Speed and resources.

I think David has the right approach. It's all about understanding what the access paths are, how they are created, and what the pros and cons are. The biggest factor in using embedded SQL (or OPNQRYF) is having the right indexes built.

For ad-hoc query stuff, I build a separate logical over the file for each field they can sort by, and a separate EVI over those fields, plus any fields that they can select by. As of V3.something, the way indexes are maintained was totally revamped, and the hit for huge numbers of indexes has been all but eliminated, so I don't worry about how many. The performance seems to bear that out.

The logicals mean that the sorting, at least for the primary field, will be much more efficient. Sometimes I have ideas about the most common sorts for a particular application, and I build even more specific indexes.

The EVIs MAY help on the selection criteria IF the right conditions exist (I won't go into that here...), but they definitely won't hurt. The EVIs are not actually used in the sorting, only in the WHERWE clause. However, they MAY help on the sorting, because they contain more detailed statistics about the contents of the sort field than a logical. The optimizer will look at these statistics, and even though it can't use the EVI for sorting, the statistics may help it make a better decision about what access plan it puts together.

-dan
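A sketch of that index strategy in SQL DDL (the file and field names here are made up; the per-sort-field logicals could just as well be DDS-described):

    -- One keyed access path per field users can sort by
    CREATE INDEX ORDHST_IX1 ON ORDHST (CUSTNO);
    CREATE INDEX ORDHST_IX2 ON ORDHST (ORDDATE);

    -- EVIs over the sortable and selectable fields: usable for WHERE
    -- selection when the right conditions exist, and their statistics
    -- can still inform the optimizer's access plan for ordering
    CREATE ENCODED VECTOR INDEX ORDHST_EV1 ON ORDHST (CUSTNO);
    CREATE ENCODED VECTOR INDEX ORDHST_EV2 ON ORDHST (REGION);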



#5
Speed and resources.

Correction: EVIs don't help at all in the WHERWE clause, only in the WHERE clause. For a full discussion of the WHERWE clause, see the Intercal Programmers Guide.

-dan



#6
Speed and resources.

> There are a couple of things that have been brought to my attention about SQL, but I've been able to validate none of them. I've been told that, compared to RPG, embedded SQL is a huge resource hog and yields slower performance.

The only correct answer to a performance question is 'It depends.'

Joe Pluta is working on a set of open source benchmarks that are intended to help you determine what works best on your own system. It's a work in progress, but it will probably give you some good ideas: http://forums.plutabrothers.com/IAAI/ If you create benchmarks of your own, seriously consider sending them to the IAAI to help others with this same question.

In addition to Joe's work, consider using whatever performance tools are available for your version of OS/400. Look up PEX (Performance Explorer) in the Infocenter: http://www.iseries.ibm.com/infocenter

For specific tips on writing good-performing SQL, see the DB2 Universal Database for iSeries - Database Performance and Query Optimization guide, also in the Infocenter.

--buck



#7
Speed and resources.

I appreciate all this input, and I do understand that there are times when you would use SQL and times when you would not. As I said, specific record access, for example, would always be best done in RPG (or whatever language you are using). The questions I was looking for an answer to were:
1) A way to show the cpu% used by an interactive job (not wrkactjob; that's a little tricky to be sure about).
2) A way to limit the allowed cpu% that an interactive program can use.
3) Any other test that will help validate my scenario.
But these have barely been addressed. Someone must know, and I intend to find out who!



#8
Speed and resources.

> 1) A way to show the cpu% used by an interactive job (not wrkactjob; that's a little tricky to be sure about).

PEX will show this to you, although I must admit that I don't know what knowing the CPU% will tell you, since that can vary wildly according to the other workload on the box. If you want something more concrete, you want CPU time, DASD time (including latency) and key/think time. PEX will show this to you as well.

You can also write your own performance measurements using various APIs. Check out the flight recorder APIs in particular. The job log will show you CPU time. CPU time will not vary much even if the box is loaded down. The manual you'll want to read in some detail is the Work Management Guide.

> 2) A way to limit the allowed cpu% that an interactive program can use.

The class description has a maximum CPU time, but to my knowledge there is no limit on CPU%. If you wanted to write one, you could either walk the WCBT and run APIs to gather job numbers and kill jobs reaching your criteria, or use the work management APIs to emulate WRKACTJOB and do the same thing.

> 3) Any other test that will help validate my scenario.

The IAAI web site is specifically intended to address this need.

--buck



#9
Speed and resources.

I will look into your suggestions; I appreciate your time. P.S. I'm not looking for total cpu% used; I'm looking for cpu% used by a specific, interactive job. Not the pool, and not by using wrkactjob. I'll take a look at the tools you've supplied me with.



#10
Speed and resources.

The JOBI1000 format of the QUSRJOBI API will return various CPU times and elapsed times for any job. You get CPU% by dividing CPU time by elapsed time; nothing reports % directly.

You could wrap anything you want to test with two calls to this API. The first one could have the "reset" parameter set to "yes" - probably a '1', actually - see the docs. The one afterwards could be set to "no". In any case, save the results of the first one and subtract them from those of the second one.

The trouble with CPU measurements is that other stuff is going on at the same time. You just will not get the CPU while other, higher-priority jobs are running. There are also all kinds of wait conditions, including queueing of the DASD. So try to run your tests in a very dedicated environment - no one else on the machine, no external communications, etc.

HTH
Vern
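To illustrate the arithmetic: 3 seconds of CPU consumed over a 60-second elapsed interval is 5% CPU. Much later releases (IBM i 7.2 and up) expose the same reset-and-measure pattern through the QSYS2.ACTIVE_JOB_INFO SQL service; a sketch, where MYJOB is a placeholder job name:

    -- First call: reset the job's performance statistics
    -- (this plays the same role as the QUSRJOBI "reset" parameter)
    SELECT JOB_NAME
      FROM TABLE(QSYS2.ACTIVE_JOB_INFO(
             RESET_STATISTICS => 'YES',
             JOB_NAME_FILTER  => 'MYJOB')) X;  -- MYJOB is a placeholder

    -- ... run the work you want to measure ...

    -- Second call: CPU% = elapsed CPU time / elapsed wall time;
    -- ELAPSED_CPU_PERCENTAGE reports that division directly
    SELECT JOB_NAME, ELAPSED_CPU_TIME, ELAPSED_TIME,
           ELAPSED_CPU_PERCENTAGE
      FROM TABLE(QSYS2.ACTIVE_JOB_INFO(
             RESET_STATISTICS => 'NO',
             JOB_NAME_FILTER  => 'MYJOB')) X;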



#11
Speed and resources.

There are a couple of things that have been brought to my attention about SQL, but I've been able to validate none of them. I've been told that, compared to RPG, embedded SQL is a huge resource hog and yields slower performance. However, I believe that may be true for specific single-record access; for a broad base of records with a filter over top, I think SQL should be faster than RPG, because it will use built-in functions to do its sorting and selection (like WHERE), which are compiled in a lower-level language than RPG. Still, I don't know how to prove my claim. So I'm wondering if anyone knows anything about the following:
1) A way to show the cpu% used by an interactive job (not wrkactjob; that's a little tricky to be sure about).
2) A way to limit the allowed cpu% that an interactive program can use.
3) Any other test that will help validate my scenario.
Thanks for your help - Mark



#12
Speed and resources.

Look into the CHGQRYA command to help control interactive jobs. This is from the IBM manual (not sure which one, as I searched on CHGQRYA rather than going to a specific manual):

"Setting the time limit for jobs other than the current job: You can set the time limit for a job other than the current job. You do this by using the JOB parameter on the CHGQRYA command to specify either a query options file library to search (QRYOPTLIB) or a specific QRYTIMLMT for that job.

Using the time limit to balance system resources: After the source job runs the CHGQRYA command, the effects of the governor on the target job are not dependent upon the source job. The query time limit remains in effect for the duration of the job or user session, or until the time limit is changed by a CHGQRYA command. Under program control, a user could be given different query time limits depending on the application function being performed, the time of day, or the amount of system resources available. This provides a significant amount of flexibility when trying to balance system resources with temporary query requirements."

Several years ago I attended a course on queries given by Skip Marchesani, and he talked about doing something like this: at busy times of the day users have less time for queries, and during off hours they are given more.

When testing queries in interactive SQL, I change the time limit of my job to 0, forcing the query to fail. At another session I look at the job log of the first session to see what the query was trying to do (using existing access paths or creating new ones, suggestions for new access paths, etc.). This helps me define the queries that I will later embed in RPG programs; it also lets me create any indexes that will help the query.

We are using much more embedded SQL, and with very few exceptions my gut feeling (as I have not tried to measure anything) is that the performance is at least as good as, if not better than, F-spec access. Note that this applies to reports or other processes that require accessing several to many records, generally from more than one file. For single record access (e.g. maintenance programs) I stick with F-specs. Again, my gut feeling is that SQL cannot match plain RPG F-specs for this type of access.
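The time-limit-of-0 trick is just CHGQRYA QRYTIMLMT(0) in the test session. Much later releases also let you drive the same query options file (QAQQINI, the file behind QRYOPTLIB) from SQL itself; a sketch assuming IBM i 7.2 or later and the QSYS2.OVERRIDE_QAQQINI procedure:

    -- Create a QTEMP override of the query options file for this job
    CALL QSYS2.OVERRIDE_QAQQINI(1, '', '');

    -- Time limit 0: any non-trivial query fails before it runs, and the
    -- job log shows what the optimizer was planning to do
    CALL QSYS2.OVERRIDE_QAQQINI(2, 'QUERY_TIME_LIMIT', '0');

    -- ... run the query you want to analyze, then check the job log ...

    -- Discard the override when done
    CALL QSYS2.OVERRIDE_QAQQINI(3, '', '');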

