Stumped by SQL


  • mwbrock@hotmail.com
    replied
    Originally posted by R.Daugherty:
    And, anyway, what is "it" we're talking about? A clearly documented parameter that clearly states that SQL performs faster on live data, that is, when a copy of the data, in this case, 11 million rows, isn't made.
    -------------
    Allow copy of data (ALWCPYDTA)
    Specifies whether a copy of the data can be used in a SELECT statement.

    *OPTIMIZE
    The system determines whether to use the data retrieved directly from the database or to use a copy of the data. The decision is based on which method provides the best performance. If the Commitment control (COMMIT) parameter is not *NONE, the Allow blocking (ALWBLK) parameter should be set to *ALLREAD, when possible, for best performance.
    *YES
    A copy of the data is used only when necessary.
    *NO
    A copy of the data is not used. If a temporary copy of the data is required to perform the query, an error message is returned.
    -------------

    Clearly...

    I don't particularly see this as greatly surprising. There is also ALWCPYDTA *OPTIMIZE, which might also have worked, perhaps even better.
    I thought so as well, which is why that was my default. But as you mention, the optimizer is just lines of code that people have written. It was making the less than optimal decision in this case.

    Anyway...I'm happy it works now.

    ...Michael



  • efnkay
    Guest replied
    Whatever the reason it didn't work before, you got it working now. Regardless of the cause, at least nobody suggested you scrap the SQL and do it with native I/O and RPG.
    Note: They still make hand tools if you prefer. But most any tool you can think of now is battery-powered and does the "turning" for you with the press of a button.

    Kinda how I compare SQL I/O to native programming I/O, regardless of programming language. And the SQL stays the same regardless of the programming language. (Granted, different syntaxes for the same functionality exist in various databases.)



  • R.Daugherty
    replied
    Since it wasn't tried before, the assumption should be that it would have worked before too, not that it was changed or fixed by 6.1.

    And, anyway, what is "it" we're talking about? A clearly documented parameter that clearly states that SQL performs faster on live data, that is, when a copy of the data, in this case, 11 million rows, isn't made. It's much more complicated than that, I'm sure, but we always knew it was stumbling on re-accessing the just-built data. Specifying that the just-built data itself was to be accessed, rather than a copy of it, allowed the optimizer to perform as it would if the file were closed out and the second SQL run in a new job.

    I don't particularly see this as greatly surprising. There is also ALWCPYDTA *OPTIMIZE, which might also have worked, perhaps even better.

    The optimizer is just code that people wrote, and obviously in complex decision-making it will not always make optimal decisions. That's why optimizer hints and locking down the decision in a package are provided for, and falling back to brute force is not what I would call an operating system problem.

    Having said that, brock, it was a pleasure sitting back and watching you do all the work. Great to see you have that solved and in production.

    rd
    Last edited by R.Daugherty; 07-20-2009, 08:07 PM.



  • efnkay
    Guest replied
    Don't recall if you tried ALWCPYDTA=*NO before, but that kinda still hints at PTFs that may not have been there, or at a problem fixed in 6.1. Heck, anyway! Glad you were able to put it behind you. Onward and upward.



  • the_weave
    replied
    Originally posted by mwbrock@hotmail.com:
    Went to 6.1 over the weekend, re-created the SQL procedures (which is what I'm using for all this now). Same result.

    Finally set option ALWCPYDTA=*NO on step 2...and it works like a champ!

    'Splain that...
    Sounds like a call to Support would be in order.........



  • mwbrock@hotmail.com
    replied
    Finally!!!

    Went to 6.1 over the weekend, re-created the SQL procedures (which is what I'm using for all this now). Same result.

    Finally set option ALWCPYDTA=*NO on step 2...and it works like a champ!

    'Splain that...
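
    For anyone following along, here's a rough sketch of what that option can look like when the step is wrapped in an SQL procedure; the procedure, library, table, and column names below are made up, and the INSERT is just a stand-in for the real step 2:

      -- hypothetical step-2 procedure: SET OPTION ALWCPYDTA = *NO tells the
      -- engine to run against the live data rather than a temporary copy
      CREATE PROCEDURE MYLIB.STEP2_SUMMARIZE ()
        LANGUAGE SQL
        SET OPTION ALWCPYDTA = *NO, COMMIT = *NONE
      BEGIN
        -- re-access the work table that step 1 built earlier in the same job
        INSERT INTO MYLIB.SUMMARY
          SELECT acct, SUM(amt)
            FROM QTEMP.WRKDTL
            GROUP BY acct;
      END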



  • R.Daugherty
    replied
    He had commented out the RPG calls. They were not an issue.

    The SQL statement works just fine if the statement is submitted to a new job, so nothing about the syntax, data, etc. is an issue.

    It has been narrowed down to subsequent access to a created file in the same job with this particular configuration. I didn't see much in the way of attempts at less complex re-accesses to see if they worked, since the complex re-access was what was required in the end.

    rd



  • garylavery@aol.com
    Guest replied
    Since you are really stuck, even guesses might help...

    1) I run "strings" of SQL statements (fairly complex) from a PC source file. Once in a while a statement will hang because the previous statement seems to "lock" a file. Always an Update statement. In my case a timeout occurs and I get a message. Possibly your RPG program is not releasing resources quickly enough before getting to the problem statement. Sorry I'm not that technical - I just know it happens. I just restart at the stuck statement & the job moves on.

    2) Your "Group By" clause...
    a. Make sure those fields cannot be null (even accidentally).
    b. Try replacing the inline multiplication with a static "2009".

    Again, I'm not that deep technically but the above 2 gotchas in Group By clauses have bitten me more than once and always with very strange symptoms.
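
    For what it's worth, a minimal sketch of both guards (SALESHIST and its columns are invented names, and the computed-year expression is only an illustration):

      -- (a) null-proof the grouping field with COALESCE
      -- (b) select a static 2009 instead of the inline calculation
      SELECT COALESCE(region, 'N/A') AS region,
             2009                    AS sale_year,
             SUM(amount)             AS total
        FROM saleshist
        WHERE cent * 100 + yr = 2009
        GROUP BY COALESCE(region, 'N/A')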

    Hope this helps - Gary



  • mwbrock@hotmail.com
    replied
    Originally posted by R.Daugherty:
    One way to give it a clue is to create the index after the create/insert statement, then specify the index for the insert-from in the second create. If that doesn't work, then there's something about the file indexing that is not re-accessible until it is finalized when the job ends. Maybe some way to close and reopen the file would finalize it internally mid-job.
    Tried creating the index over the temp file before populating it as part of step 1. Same result. I've inserted a RCLRSC between the steps, so that should've closed the file. SQL monitor shows a hard close of the cursor as well.


    Originally posted by R.Daugherty:
    But what I would do is have a permanent physical file keyed by desired index for this operation and delete * from, then copy the first temporary file into it after it's created, then access the permanent physical file with the second insert from.
    Trying something similar...crtdupobj of the original physical, keyed file into qtemp and using that as my temp file. Perhaps that will work...I'd hate to copy the temp file into a shell a second time...that's double the time to accomplish. We're talking about 11 million records.



    Originally posted by R.Daugherty:
    We just have to give that optimizer a little bit of help.

    rd

    Not for what it costs. :-)



  • R.Daugherty
    replied
    Yes, it's all in the eye of the optimizer. For whatever reason it's decided that brute force (that is, no index) is the best access method given this particular situation.

    One way to give it a clue is to create the index after the create/insert statement, then specify the index for the insert-from in the second create. If that doesn't work, then there's something about the file indexing that is not re-accessible until it is finalized when the job ends. Maybe some way to close and reopen the file would finalize it internally mid-job.

    But what I would do is have a permanent physical file keyed by desired index for this operation and delete * from, then copy the first temporary file into it after it's created, then access the permanent physical file with the second insert from.

    If necessary you could even lock down the SQL access logic in a package against the permanent file.
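
    Roughly, the first two ideas might look something like this; the file, library, and column names here are all placeholders:

      -- 1) build the work table first, then create the index, so the access
      --    path is complete before the second statement re-reads the data
      CREATE INDEX QTEMP.WRKDTL_IX ON QTEMP.WRKDTL (acct, tran_date);

      -- 2) or stage the rows into a permanent keyed physical file and point
      --    the second insert-from at that file instead of the QTEMP copy
      DELETE FROM MYLIB.WRKSTAGE;
      INSERT INTO MYLIB.WRKSTAGE
        SELECT * FROM QTEMP.WRKDTL;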

    We just have to give that optimizer a little bit of help.

    rd



  • efnkay
    Guest replied
    Wow...This thread has legs...!!! Definitely a contention issue with the "id", or the "self", as in programs' access to files owned/created by self. I'd love to see Visual Explain explain this... :-) This tends to make me suspect the "missing PTF" theory, if that wasn't covered in this thread earlier.

    Knowing a submitted job gets its own copy of the submitter's QTEMP, without all the nicks and cuts of the existing job's access "locks", no matter how benevolent they may be...it's gotta be either a contention issue or maybe related to a mystery I ran into some time ago.

    For whatever reason, a particular SQL statement I was debugging (because all of a sudden it started to take a REAL long time to run and complete) caught my attention. So I ran the exact statement from STRSQL that runs embedded in RPG in a batch process. BANG! It runs so fast I can't see anything. This was a DECLARE GLOBAL TEMP-yada-yada TABLE statement.

    But when it ran in batch, the job indicated yada-yada-IDX...!!! A temporary index was being built when running in batch. What the...run it again, again, again. It won't build the IDX running in an SQL session. Hmmm...So I changed one non-join qualification in the WHERE clause on a date field, like (field = date) to (field >= date)...Big deal, right? And it's not part of a GROUP BY or ORDER BY clause. BOOM! Now it builds a temp index running interactively. What the...???

    Didn't resolve it, though. I made it go away by writing a better (more efficient) SQL statement that arrives at the same result by optimizing a joined file out of the statement, i.e. I eliminated one of the files that was used in the join simply to get to a third file but contributed no fields to the select list of the resulting table. I used an EXISTS predicate with a sub-select of the file removed from the join, basically "joining" through an EXISTS check instead.
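
    A simplified two-file version of that rewrite (CUSTMAST and ORDHIST are invented names) might look something like this:

      -- before: ORDHIST is joined only to confirm a matching row exists;
      -- none of its fields appear in the select list (and the join can also
      -- duplicate customers that have more than one matching row)
      SELECT c.custno, c.name
        FROM custmast c
        JOIN ordhist  o ON o.custno = c.custno;

      -- after: drop ORDHIST from the join and "join" through an EXISTS check
      SELECT c.custno, c.name
        FROM custmast c
        WHERE EXISTS (SELECT 1
                        FROM ordhist o
                        WHERE o.custno = c.custno);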

    Ran the "better" SQL statement in batch and interactively a gazillion times and neither way ever built a temporary index again. Sometimes the beauty is in the eye of the CQE or SQE optimizer...and how he/she thinks is the best way to execute your SQL statement. Great stuff guys...er, I mean contributing persons!!! :-)



  • mwbrock@hotmail.com
    replied
    Originally posted by R.Daugherty:
    Given the additional info you found, it sounds like it's contorting on a table created and populated in the process and then re-accessed in the process (at least with that complex break logic), incredibly with all those technologies.
    Yup, that's exactly what it's doing. Danged if I know why. Anyway, clearing and populating the table yields the same result. I'm going to go back to running the RPG program for step 1 (which takes 8 minutes), and then run step 2 as a query (replacing a RUNQRY).



  • R.Daugherty
    replied
    That's what I was going to suggest but you're a step ahead of me.

    Given the additional info you found, it sounds like it's contorting on a table created and populated in the process and then re-accessed in the process (at least with that complex break logic), incredibly with all those technologies.

    After doing what you're doing on checking the drop/build part of it, I would change the second SQL to a simple insert of SELECT * FROM QTEMP/tbl, to determine whether it's just basic re-access or the complexity of the re-access that's the problem. As I recall you said there wasn't a huge amount of data, but it seems like it's having a heck of a time ordering it for the break logic, etc. in a re-access.
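
    The "simple" version could be as plain as this (SUMMARY2 is an invented target name, and "tbl" stands for whatever the step-1 table is called), just to see how a bare re-access behaves:

      INSERT INTO MYLIB/SUMMARY2
        SELECT * FROM QTEMP/tbl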

    rd



  • mwbrock@hotmail.com
    replied
    Originally posted by the_weave:
    Weird! Have you tried STRDBG UPDPROD(*YES) and checked the joblog to see if there is any hint? Or perhaps you can use iSeries Navigator to do some SQL monitoring.
    Sure did. Nothing abnormal at all, just a longer runtime. SQLSTATE & SQLCODE are both 0, and there is a hard close on the cursor between step 1 and step 2. I do drop and re-create the summary file each time...so I'm going to try clearing it and repopulating instead and see if that does any better.



  • the_weave
    replied
    Originally posted by mwbrock@hotmail.com:
    Nope. I've found that the job will eventually finish. Takes about 30 minutes. Submitted as separate jobs, step 1 takes about 2.5 minutes and step 2 takes about 4.
    Weird! Have you tried STRDBG UPDPROD(*YES) and checked the joblog to see if there is any hint? Or perhaps you can use iSeries Navigator to do some SQL monitoring.

