Stumped by SQL


  • #31
    Originally posted by R.Daugherty View Post
    One way to give it a clue is to create the index after the create/insert statement, then specify the index for the insert-from in the second create. If that doesn't work, then there's something about the file indexing that is not re-accessible until it is finalized when the job ends. Maybe some way to close and reopen the file would finalize it internally mid-job.
    Tried creating the index over the temp file before populating it as part of step 1. Same result. I've inserted a RCLRSC between the steps, so that should've closed the file. SQL monitor shows a hard close of the cursor as well.
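    For the record, the shape of what I tried in step 1 looks roughly like this (file and column names are simplified placeholders, not the actual statements):

    -- Step 1: build the work file in QTEMP, index it, then populate it
    CREATE TABLE QTEMP.WORKSUM
       (CUSTNO  CHAR(10)   NOT NULL,
        SALESYR DEC(4, 0)  NOT NULL,
        AMT     DEC(11, 2) NOT NULL);

    CREATE INDEX QTEMP.WORKSUM_IX ON QTEMP.WORKSUM (CUSTNO, SALESYR);

    INSERT INTO QTEMP.WORKSUM (CUSTNO, SALESYR, AMT)
       SELECT CUSTNO, 2009, SUM(AMT)
         FROM MYLIB.SALESDTL
        WHERE SALESYR = 2009
        GROUP BY CUSTNO;

    -- Step 2, same job: the re-access of the just-built file is what stalls
    INSERT INTO MYLIB.SUMMARY (CUSTNO, SALESYR, AMT)
       SELECT CUSTNO, SALESYR, AMT
         FROM QTEMP.WORKSUM;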


    Originally posted by R.Daugherty View Post
    But what I would do is have a permanent physical file keyed by desired index for this operation and delete * from, then copy the first temporary file into it after it's created, then access the permanent physical file with the second insert from.
    Trying something similar...CRTDUPOBJ of the original keyed physical file into QTEMP and using that as my temp file. Perhaps that will work...I'd hate to copy the temp file into a shell a second time; that's doubling the time to accomplish the same thing. We're talking about 11 million records.
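    The idea, as a sketch (same placeholder names as above):

    -- the keyed shell comes from CL, something like
    --   CRTDUPOBJ OBJ(WORKSUM) FROMLIB(MYLIB) OBJTYPE(*FILE) TOLIB(QTEMP) DATA(*NO)
    -- and step 1 then populates the duplicate instead of a freshly created table
    INSERT INTO QTEMP.WORKSUM (CUSTNO, SALESYR, AMT)
       SELECT CUSTNO, 2009, SUM(AMT)
         FROM MYLIB.SALESDTL
        WHERE SALESYR = 2009
        GROUP BY CUSTNO;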



    Originally posted by R.Daugherty View Post
    We just have to give that optimizer a little bit of help.

    rd

    Not for what it costs. :-)



    • #32
      Since you are really stuck, even guesses might help...

      1) I run "strings" of SQL statements (fairly complex) from a PC source file. Once in a while a statement will hang because the previous statement seems to "lock" a file. Always an Update statement. In my case a timeout occurs and I get a message. Possibly your RPG program is not releasing resources quickly enough before getting to the problem statement. Sorry I'm not that technical - I just know it happens. I just restart at the stuck statement & the job moves on.

      2) Your "Group By" clause...
      a. Make sure those fields cannot be null (even accidentally).
      b. Try replacing the inline multiplication with a static "2009".

      Again, I'm not that deep technically, but the above two gotchas in GROUP BY clauses have bitten me more than once, and always with very strange symptoms.
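      A made-up example with both fixes applied (columns are just for illustration):

      -- (a) make the grouping column NULL-proof, (b) use a plain literal year
      SELECT COALESCE(CUSTNO, ' ') AS CUSTNO,
             2009                  AS SALESYR,
             SUM(AMT)              AS AMT
        FROM MYLIB.SALESDTL
       WHERE SALESYR = 2009
       GROUP BY COALESCE(CUSTNO, ' ');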

      Hope this helps - Gary



      • #33
        He had commented out the RPG calls. They were not an issue.

        The SQL statement works just fine if the statement is submitted to a new job, so nothing about the syntax, data, etc. is an issue.

        It has been narrowed down to subsequent access, within the same job, to a file created earlier in that job with this particular configuration. I didn't see much in the way of efforts at less complex re-accesses to see if they worked, since this was what was required in the end.

        rd



        • #34
          Finally!!!

          Went to 6.1 over the weekend, re-created the SQL procedures (which is what I'm using for all this now). Same result.

          Finally set option ALWCPYDTA=*NO on step 2...and it works like a champ!

          'Splain that...
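          For reference, the change amounts to one SET OPTION line in the step 2 procedure, something like this (procedure name and body are placeholders, not the real code):

          CREATE PROCEDURE MYLIB.STEP2 ()
             LANGUAGE SQL
             SET OPTION ALWCPYDTA = *NO  -- run against the live data, never a copy
          BEGIN
             INSERT INTO MYLIB.SUMMARY (CUSTNO, SALESYR, AMT)
                SELECT CUSTNO, SALESYR, AMT
                  FROM QTEMP.WORKSUM;
          END;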



          • #35
            Originally posted by mwbrock@hotmail.com View Post
            Went to 6.1 over the weekend, re-created the SQL procedures (which is what I'm using for all this now). Same result.

            Finally set option ALWCPYDTA=*NO on step 2...and it works like a champ!

            'Splain that...
            Sounds like a call to Support would be in order.........



            • #36
              Don't recall if you tried ALWCPYDTA=*NO before, but that kinda still hints at PTFs that may not have been there, or at a problem that was fixed in 6.1. Heck anyway! Glad you were able to put it behind you. Onward and upward.



              • #37
                Since it wasn't tried before, the assumption should be that it would have worked before too, not that it was changed or fixed by 6.1.

                And, anyway, what is the "it" we're talking about? A clearly documented parameter that plainly states SQL performs faster on live data, that is, when a copy of the data (in this case, 11 million rows) isn't made. It's much more complicated than that, I'm sure, but we always knew it was stumbling on re-accessing the just-built data. Specifying that the just-built data was to be accessed, rather than a copy of it, allowed the optimizer to perform as it would if the file had been closed out and the second SQL run in a new job.

                I don't particularly see this as greatly surprising. There is also ALWCPYDTA *OPTIMIZE which may also have worked perhaps even better.

                The optimizer is just code that people wrote, and obviously in complex decision-making it will not always make optimal decisions. That's why optimizer hints and locking the decision down in a package are provided for; falling back to brute force is not what I would call an operating system problem.

                Having said that, brock, it was a pleasure sitting back and watching you do all the work. Great to see you have that solved and in production.

                rd
                Last edited by R.Daugherty; 07-20-2009, 08:07 PM.



                • #38
                  Whatever the reason it didn't work before, you got it working now. Regardless of the cause, at least nobody suggested you scrap the SQL and do it with native I/O and RPG.
                  Note: They still make hand tools if you prefer. But most any tool you can think of now is battery-powered and does the "turning" for you with the press of a button.

                  Kinda how I compare SQL I/O to native programming I/O, regardless of programming language. And the SQL stays the same regardless of the programming language. (Given that different syntaxes for the same functionality exist in various databases.)



                  • #39
                    Originally posted by R.Daugherty View Post
                    And, anyway, what is "it" we're talking about? A clearly documented parameter that clearly states that SQL performs faster on live data, that is, when a copy of the data, in this case, 11 million rows, isn't made.
                    -------------
                    Allow copy of data (ALWCPYDTA)
                    Specifies whether a copy of the data can be used in a SELECT statement.

                    *OPTIMIZE
                    The system determines whether to use the data retrieved directly from the database or to use a copy of the data. The decision is based on which method provides the best performance. If the Commitment control (COMMIT) parameter is not *NONE, the Allow blocking (ALWBLK) parameter should be set to *ALLREAD, when possible, for best performance.
                    *YES
                    A copy of the data is used only when necessary.
                    *NO
                    A copy of the data is not used. If a temporary copy of the data is required to perform the query, an error message is returned.
                    -------------

                    Clearly...

                    Originally posted by R.Daugherty View Post
                    I don't particularly see this as greatly surprising. There is also ALWCPYDTA *OPTIMIZE which may also have worked perhaps even better.
                    I thought so as well, which is why that was my default. But as you mention, the optimizer is just lines of code that people have written. It was making the less than optimal decision in this case.

                    Anyway...I'm happy it works now.

                    ...Michael

