Stumped by SQL


  • #16
    Originally posted by mwbrock@hotmail.com View Post
    If anyone could help with this, I would certainly appreciate it. I have a CL program that needs to copy data into a temporary table, run a couple of RPG programs to update the table, and then produce summary output using SQL (I am replacing an old RUNQRY with RUNSQLSTM). The problem is that when the job reaches this final step, it hangs in a TIMW status, where it just sits and consumes CPU until I end it.

    If I submit the RUNSQLSTM as a separate job after creating and updating the input table, it runs just fine and produces the summary output as expected. I had a similar problem on a CL I did last week which began to work correctly after I specified ALWCPYDTA(*NO) on the RUNSQLSTM. However, that doesn't work with this one for some reason.

    I have used just about every method one could imagine to create the temporary table, including DYNSQL (from the SEQUEL product, specifying SERVER(*LOCAL)), another RUNSQLSTM to drop, create, and insert into the table, and good old CPYF FROMRCD(1). All of this works just as expected, but I get the same results from the final step of creating the summary each time.

    I have commented out the RPG applications that update the data to see if that was the issue...it isn't.

    The CL looks like this:



    RUNSQLSTM SRCFILE(mbsrc) SRCMBR(RESENGS) +
    COMMIT(*NONE) ERRLVL(20)

        Drop table mbddl/tempfile;
        Create table mbddl/tempfile like mbrock/prodfile;
        Insert into mbddl/tempfile
          select * from mbrock/prodfile;

    (have also used CPYF, etc., instead of the above)

    ...rpg applications run here to update tempfile...

    RUNSQLSTM SRCFILE(mbsrc) SRCMBR(crtsumtbl) +
    COMMIT(*NONE) ERRLVL(20) ALWCPYDTA(*NO)

        drop table mbrock/sumtable;

        create table mbrock/sumtable as (
          select '1' as breaklvl, ' ' as overflow,
                 t.fld01, ((t.year*100)+t.month) as proddt,
                 sum(t.fld02) as fld0201, sum(t.fld03) as fld0301,
                 ...etc. for about 40 more fields...
                 sum(t.fld43) as fld4301, sum(t.fld44) as fld4401
          from mbddl/tempfile t
          where t.fld01 = 100
          and t.fldxx = '2'
          group by t.fld01, ((t.year*100)+t.month)
          order by t.fld01, proddt )
          with data;


    The SQL is long and ugly because I am feeding a data warehouse and need to use the same field names that RUNQRY has been generating. Also, I had to substitute the field and table names in this post for client confidentiality.

    HELP?!?!?

    I didn't get the date this was posted, so the solution may have already been found or submitted, but if what you're trying to do is to get a SQL SELECT to return a temporary table, your best bet might be a User Defined Table Function (UDTF) since that's what it exists to do.
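
    A rough sketch of what such a UDTF could look like (the function name, parameter, and column list here are only placeholders for illustration, assuming system naming like the rest of the thread; they would need to match the real table):

        CREATE FUNCTION mbrock/sumbyprod (p_fld01 INTEGER)
          RETURNS TABLE (fld01 INTEGER, proddt INTEGER, fld0201 DECIMAL(15, 2))
          LANGUAGE SQL
          READS SQL DATA
          RETURN
            SELECT t.fld01,
                   CAST((t.year*100) + t.month AS INTEGER),
                   CAST(SUM(t.fld02) AS DECIMAL(15, 2))
            FROM mbddl/tempfile t
            WHERE t.fld01 = p_fld01
            GROUP BY t.fld01, (t.year*100) + t.month;

        -- the caller then treats the function like a table:
        SELECT * FROM TABLE(mbrock/sumbyprod(100)) AS s;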

    Hope this helps,
    -
    Steve Shaffar



    • #17
      Originally posted by shaffar View Post
      I didn't get the date this was posted, so the solution may have already been found or submitted, but if what you're trying to do is to get a SQL SELECT to return a temporary table, your best bet might be a User Defined Table Function (UDTF) since that's what it exists to do.

      Hope this helps,
      -
      Steve Shaffar
      Thanks Steve. I'm creating a temp file as a copy of a production file and then updating some of the records in the temp file. Then, I'm using that temp file as the input to create a summary file for download by our data warehouse.

      A UDTF would probably work, but we are currently still at V5R4, moving to 6.1 later this year. So until then a UDTF would force the dispatcher to use the CQE, which is not what I want.

      Regards,
      ...Michael



      • #18
        Ditch the CL and write an embedded SQL program. Create your temp table, insert data, call your RPG programs, summarize data. I have several programs in production similar to the simplified example below.

        F**********************************************************
        F* INITIALIZATION
        D**********************************************************
        C* -- Version: V5R4M0 060210
        C* -- Generated on: 04/30/09 10:18:25
        C* -- Relational Database: XXXXXXXXXX
        C* -- Standards Option: DB2 UDB iSeries
        C**********************************************************
        C/EXEC SQL
        C+ Set option commit = *none
        C/END-EXEC
        C**********************************************************
        C/EXEC SQL
        C+ DROP TABLE QTEMP/FTABLE
        C/END-EXEC
        C**********************************************************
        C/EXEC SQL
        C+ CREATE TABLE QTEMP/FTABLE (
        C+ Field1 CHAR(1),
        C+ Field2 NUMERIC(3, 0))
        C/END-EXEC
        C**********************************************************
        C/EXEC SQL
        C+ DROP TABLE QTEMP/FSUMMARY
        C/END-EXEC
        C**********************************************************
        C/EXEC SQL
        C+ CREATE TABLE QTEMP/FSUMMARY (
        C+ Sfield1 CHAR(1),
        C+ Sfield2 NUMERIC(5, 0))
        C/END-EXEC
        C**********************************************************
        C* Get records and insert into temp file
        C**********************************************************
        C/EXEC SQL
        C+ INSERT INTO QTEMP/FTABLE (Field1, Field2)
        C+ select FDATA1, FDATA2 FROM plibr/FREALDATA
        C+ where Fdata3 = :P1 and Fdata4 = :P2
        C/END-EXEC
        C**********************************************************
        C* Call RPG Program
        C**********************************************************
        C Call 'RPGPGM2'
        C PARM FPARM1
        C PARM FPARM2
        C**********************************************************
        C* Summary Data
        C**********************************************************
        C/EXEC SQL
        C+ INSERT INTO QTEMP/FSUMMARY (Sfield1, Sfield2)
        C+ select Field1, sum(Field2) from QTEMP/FTABLE
        C+ group by Field1
        C/END-EXEC
        C**********************************************************
        C* END PROGRAM
        C**********************************************************
        C Eval *INLR = *ON
        C**********************************************************
        C* Initialization Subroutine
        C**********************************************************
        C *INZSR BEGSR
        C *ENTRY PLIST
        C PARM P1 1
        C PARM P2 8
        C ENDSR
        C**********************************************************



        • #19
          There ya go "The weave"...No fixed source members to execute RUNSQLSTM.
          Query Management queries are great for simple stuff where you may need to pass a "bind" variable or two...But RPG handles all of that much better, and most people are more familiar with it.

          My only preference would be to use the DECLARE GLOBAL TEMPORARY TABLE statement instead of CREATE TABLE QTEMP/xxxxxx, because of the flexibility it gives you: the table in QTEMP is created from whatever columns are in your "select list", and when used with "AS" it can be populated by the SELECT statement itself:
          DECLARE GLOBAL TEMPORARY TABLE xxxxxx AS (SELECT.....FROM....WHERE...)
          WITH DATA WITH REPLACE

          The select statement can be as simple or complicated as you make it...The end result is a populated table in QTEMP which is not cataloged in the database (not found in SYSTABLES after you create it), and it goes away, like other QTEMP stuff obviously does, when the job that created it ends.
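
          A minimal sketch, reusing a few of the column names from earlier in the thread (the real statement would list all of the summary columns):

            DECLARE GLOBAL TEMPORARY TABLE sumwork AS
              (SELECT t.fld01,
                      ((t.year*100) + t.month) AS proddt,
                      SUM(t.fld02) AS fld0201
               FROM mbddl/tempfile t
               WHERE t.fld01 = 100
               GROUP BY t.fld01, ((t.year*100) + t.month))
              WITH DATA WITH REPLACE;

            -- the declared table lives in QTEMP for this job only:
            SELECT * FROM qtemp/sumwork ORDER BY proddt;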



          • #20
            Originally posted by efnkay View Post
            There ya go "The weave"...No fixed source members to execute RUNSQLSTM.
            Query Management queries are great for simple stuff where you may need to pass a "bind" variable or two...But RPG handles all of that much better, and most people are more familiar with it.

            My only preference would be to use the DECLARE GLOBAL TEMPORARY TABLE statement instead of CREATE TABLE QTEMP/xxxxxx, because of the flexibility it gives you: the table in QTEMP is created from whatever columns are in your "select list", and when used with "AS" it can be populated by the SELECT statement itself:
            DECLARE GLOBAL TEMPORARY TABLE xxxxxx AS (SELECT.....FROM....WHERE...)
            WITH DATA WITH REPLACE

            The select statement can be as simple or complicated as you make it...The end result is a populated table in QTEMP which is not cataloged in the database (not found in SYSTABLES after you create it), and it goes away, like other QTEMP stuff obviously does, when the job that created it ends.
            Thanks for the heads up on DECLARE GLOBAL TEMPORARY TABLE.



            • #21
              My pleasure, weave...Also let me just add something even more obvious for everyone...

              DECLARE GLOBAL TEMPORARY TABLE xxxxxx AS
              (SELECT.....FROM....(joins too) WHERE...GROUP BY...ORDER BY...UNION (another SELECT)) WITH DATA WITH REPLACE

              You can use the DECLARE GLOBAL TEMPORARY TABLE statement for gathering data, summarizing, sorting, prepping for display or reporting, export, import, you name it...

              I have not, I repeat HAVE NOT, had the occasion or the need to use Query/400, QM queries, RUNSQLSTM, the CRTLF command, or the CREATE INDEX SQL statement in almost 5 (five) years...Remember before you were touched by the SQL angel, when a user wanted some inquiry or report in some god-forsaken sort order that you didn't have a keyed logical for...That's all gone now. Has been for some time.

              The job that creates the "DECLARED GLOBAL TEMPORARY TABLE" can run SQL statements against the QTEMP table...Not being in SYSTABLES just means it's all yours to do with whatever you need. Don't worry about needing a DROP TABLE statement at some point either.

              Say good-bye to CRTLF and those epic access-path rebuilds. (Of course you may need to CREATE INDEX if your query is building a temporary index...depends. How temporary is it? And how long does it take to build at runtime?)
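
              If you do need one, the index can be built by the same job right over the declared table in QTEMP, something along these lines (names carried over from the sketch above, purely illustrative):

                CREATE INDEX qtemp/sumwork_ix
                  ON qtemp/sumwork (fld01, proddt);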



              • #22
                Originally posted by the_weave View Post
                Ditch the CL and write an embedded SQL program. Create your temp table, insert data, call your RPG programs, summarize data. I have several programs in production similar to the simplified example below.
                Thanks for the example, Weave. Oddly enough, it seems nothing is working. Every suggestion (QMQRY, a CL program in a new activation group, embedded SQL, SQL procedures, RCLRSC in the CL) is just a different way of doing the same thing...

                I did give it a try, though (as well as the DECLARE GLOBAL TEMPORARY TABLE), with the exact same results.

                The problem continues to be that when the temp file is created in step one and then used as input to the summary file in step two, the job hangs in TIMW status. System status shows that it begins page faulting at the same rate it is paging no matter how much memory I allow the system to throw at it.


                Sys    Pool      Reserved    Max      ----DB-----
                Pool   Size M    Size M      Act      Fault  Pages
                  1    2716.64    352.58     +++++      .0     .0
                  2    1612.44      8.01       157      .0     .0
                  3    7521.28       .00        15     1477   1477
                  4    2310.09       .08       152      .0     .0
                  5     143.03       .00        10      .0     .0


                However...if I build the temp file (any which way I want), and then submit step two as a separate job (reading the temp file and building the summary output), it works just fine and completes in a few minutes without consuming excessive CPU or memory.

                Well, we're supposed to upgrade our dev/test environments to 6.1 soon...that may take care of the problem.



                • #23
                  I wonder if the TIMW is happening because of an object lock or something similar?



                  • #24
                    Originally posted by the_weave View Post
                    I wonder if the TIMW is happening because of an object lock or something similar?
                    Nope. I've found that the job will eventually finish. Takes about 30 minutes. Submitted as separate jobs, step 1 takes about 2.5 minutes and step 2 takes about 4.



                    • #25
                      Originally posted by mwbrock@hotmail.com View Post
                      Nope. I've found that the job will eventually finish. Takes about 30 minutes. Submitted as separate jobs, step 1 takes about 2.5 minutes and step 2 takes about 4.
                      Weird! Have you tried STRDBG UPDPROD(*YES) and checked the joblog to see if there is any hint? Or perhaps you can use iSeries Navigator to do some SQL monitoring.



                      • #26
                        Originally posted by the_weave View Post
                        Weird! Have you tried STRDBG UPDPROD(*YES) and checked the joblog to see if there is any hint? Or perhaps you can use iSeries Navigator to do some SQL monitoring.
                        Sure did. Nothing abnormal at all, just a longer runtime. SQLSTATE & SQLCODE are both 0, and there is a hard close on the cursor between step 1 and step 2. I do drop and re-create the summary file each time...so I'm going to try clearing it and repopulating instead and see if that does any better.
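
                        In other words, something along these lines in place of the DROP/CREATE pair (column names borrowed from the original post, trimmed way down from the roughly 40 fields in the real statement):

                          DELETE FROM mbrock/sumtable;

                          INSERT INTO mbrock/sumtable (breaklvl, fld01, proddt, fld0201)
                            SELECT '1', t.fld01, ((t.year*100) + t.month), SUM(t.fld02)
                            FROM mbddl/tempfile t
                            WHERE t.fld01 = 100
                            GROUP BY t.fld01, ((t.year*100) + t.month);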



                        • #27
                          That's what I was going to suggest but you're a step ahead of me.

                          Given the additional info you found, it sounds like it's contorting on a table that is created and populated in the process and then re-accessed in the same process (at least with that complex break logic), incredibly with all those technologies.

                          After doing what you're doing to check the drop/build part of it, I would change the second SQL to a simple INSERT of SELECT * FROM the QTEMP table, to determine whether it's just basic re-access or the complexity of the re-access. As I recall you said there wasn't a huge amount of data, but it seems like it's having a heck of a time ordering it for the break logic, etc., in a re-access.
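
                          That diagnostic could be as crude as building a throwaway copy with no grouping or ordering at all, something like this (target name invented):

                            CREATE TABLE qtemp/chkcopy AS
                              (SELECT * FROM mbddl/tempfile) WITH DATA;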

                          rd



                          • #28
                            Originally posted by R.Daugherty View Post
                            Given the additional info you found, it sounds like it's contorting on a table that is created and populated in the process and then re-accessed in the same process (at least with that complex break logic), incredibly with all those technologies.
                            Yup, that's exactly what it's doing. Danged if I know why. Anyway, clearing and populating the table yields the same result. I'm going to go back to running the RPG program for step 1 (which takes 8 minutes), and then run step 2 as a query (replacing a RUNQRY).



                            • #29
                              Wow...This thread has legs...!!! Definitely a contention issue with the "id", or the "self", as in a program's access to files it owns and created itself. I'd love to see Visual Explain explain this... :-) This tends to make me suspect the "missing PTF" theory, if that wasn't covered in this thread earlier.

                              Knowing a submitted job gets its own QTEMP, separate from the submitter's, without all the nicks and cuts of the existing job's access "locks", no matter how benevolent they may be...it's gotta be either a contention issue or maybe related to a mystery I ran into some time ago.

                              For whatever reason, a particular SQL statement I was debugging (because all of a sudden it started to take a REAL long time to run and complete) behaved differently depending on where it ran. I ran the exact statement from STRSQL that runs embedded in RPG in a batch process. BANG! It runs so fast I can't see anything. This was a DECLARE GLOBAL TEMP-yada-yada TABLE statement.

                              But when it ran in batch the job indicated yada-yada-IDX...!!! A temporary index was being built when running in batch. What the...run again, again, again. It won't build the IDX running in an SQL session. Hmmm...So I changed one non-join qualification in the WHERE clause on a date field, like (field = date) to (field >= date)...Big deal, right? And it's not part of a GROUP BY or ORDER BY clause. BOOM! Now it builds a temp index running interactively. What the...???

                              Didn't resolve it, though. I made it go away by writing a better (more efficient) SQL statement to arrive at the same result, optimizing out a file joined in the statement (i.e., I eliminated one of the files that was in the join simply to get to a third file, and contributed no fields to the select list of the resulting table). I used an EXISTS predicate with a sub-select on the file removed from the join, basically "joining" through an EXISTS check instead.
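
                              The general shape of that kind of rewrite, with completely made-up table and column names (the real statement involved a third file, but the idea is the same):

                                -- before: ordhdr is joined only to filter,
                                -- none of its columns appear in the select list
                                SELECT l.ord_no, l.item_no, l.qty
                                FROM ordline l
                                JOIN ordhdr h ON h.ord_no = l.ord_no
                                WHERE h.ord_date >= '2009-01-01';

                                -- after: "join" through an EXISTS check instead
                                SELECT l.ord_no, l.item_no, l.qty
                                FROM ordline l
                                WHERE EXISTS (SELECT 1
                                              FROM ordhdr h
                                              WHERE h.ord_no = l.ord_no
                                                AND h.ord_date >= '2009-01-01');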

                              Ran the "better" SQL statement in batch and interactively a gazillion times and neither way ever built a temporary index again. Sometimes the beauty is in the eye of the CQE or SQE optimizer...and how it thinks is the best way to execute your SQL statement. Great stuff guys...er, I mean contributing persons!!! :-)



                              • #30
                                Yes, it's all in the eye of the optimizer. For whatever reason it's decided that brute force (that is, no index) is the best access method given this particular situation.

                                One way to give it a clue is to create the index after the first create/insert, then specify that index for the insert-from in the second create. If that doesn't work, then there's something about the file's indexing that is not re-accessible until it's finalized when the job ends. Maybe some way to close and reopen the file would finalize it internally mid-job.

                                But what I would do is have a permanent physical file keyed by the desired index for this operation: delete * from it, copy the first temporary file into it after it's created, and then access the permanent physical file with the second insert-from.
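
                                A sketch of that, assuming a permanent work file mbrock/worksum that mirrors the temp file's layout and is keyed the way the summary step reads it (names invented):

                                  DELETE FROM mbrock/worksum;

                                  INSERT INTO mbrock/worksum
                                    SELECT * FROM mbddl/tempfile;

                                Step two then summarizes from mbrock/worksum, whose permanent access path is already there for the optimizer to use.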

                                If necessary you could even lock down the SQL access logic in a package against the permanent file.

                                We just have to give that optimizer a little bit of help.

                                rd

