Not for what it costs. :-)
1) I run "strings" of SQL statements (fairly complex) from a PC source file. Once in a while a statement will hang because the previous statement seems to "lock" a file. Always an Update statement. In my case a timeout occurs and I get a message. Possibly your RPG program is not releasing resources quickly enough before getting to the problem statement. Sorry I'm not that technical - I just know it happens. I just restart at the stuck statement & the job moves on.
2) Your "Group By" clause...
a. Make sure those fields cannot be null (even accidentally).
b. Try replacing the inline multiplication with a static "2009".
Again, I'm not that deep technically but the above 2 gotchas in Group By clauses have bitten me more than once and always with very strange symptoms.
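For anyone following along, the two Group By gotchas above might look something like this (table and column names here are invented purely for illustration, not from the original poster's code):

```sql
-- Gotcha (a): a nullable column in GROUP BY quietly produces an extra
-- group for the NULLs; COALESCE pins them to a known value instead.
SELECT COALESCE(REGION, 'UNKNOWN') AS REGION, SUM(AMT)
  FROM SALES
 GROUP BY COALESCE(REGION, 'UNKNOWN');

-- Gotcha (b): grouping by an inline computed expression can behave
-- strangely; selecting a plain literal (here a hard-coded 2009) and
-- grouping only by real columns sidesteps it.
SELECT 2009 AS YR, ACCT, SUM(AMT)
  FROM SALES
 GROUP BY ACCT;
```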
Hope this helps - Gary
He had commented out the RPG calls. They were not an issue.
The SQL statement works just fine if the statement is submitted to a new job, so nothing about the syntax, data, etc. is an issue.
It has been narrowed down to subsequent access, in the same job, to a just-created file with this particular configuration. I didn't see much effort at simpler re-accesses to see whether they worked, since this one was what was required in the end.
Went to 6.1 over the weekend, re-created the SQL procedures (which is what I'm using for all this now). Same result.
Finally set option ALWCPYDTA=*NO on step 2...and it works like a champ!
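For reference, the fix can be sketched roughly like this in a DB2 for i SQL procedure (procedure, library, and table names are made up for illustration; the SET OPTION clause is the relevant part):

```sql
-- Hypothetical sketch; MYLIB and the table names are illustrative only.
-- SET OPTION ALWCPYDTA = *NO tells SQL it must work against the live
-- data and may not substitute a temporary copy of it.
CREATE OR REPLACE PROCEDURE MYLIB.BUILD_SUMMARY ()
  LANGUAGE SQL
  SET OPTION ALWCPYDTA = *NO, COMMIT = *NONE
BEGIN
  -- Step 1: build the work file.
  INSERT INTO MYLIB.WORKFILE (ACCT, AMT)
    SELECT ACCT, SUM(AMT)
      FROM MYLIB.DETAIL
     GROUP BY ACCT;

  -- Step 2: re-access the just-built file in the same job;
  -- with ALWCPYDTA(*NO) this no longer stalls on a temporary copy.
  UPDATE MYLIB.MASTER M
     SET BAL = (SELECT AMT
                  FROM MYLIB.WORKFILE W
                 WHERE W.ACCT = M.ACCT);
END;
```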
Don't recall if you tried ALWCPYDTA=*NO before, but that still kinda hints at PTFs that may not have been there, or at a problem fixed in 6.1. Heck anyway! Glad you were able to put it behind you. Onward and upward.
Since it wasn't tried before, the assumption should be that it would have worked before too, not that anything was changed or fixed by 6.1.
And, anyway, what is the "it" we're talking about? A clearly documented parameter that plainly states that SQL performs faster on live data, that is, when a copy of the data (in this case, 11 million rows) isn't made. It's much more complicated than that, I'm sure, but we always knew it was stumbling on re-accessing the just-built data. Specifying that the just-built data itself was to be accessed, rather than a copy of it, allowed the optimizer to perform as it would if the file were closed out and the second SQL run in a new job.
I don't particularly see this as greatly surprising. There is also ALWCPYDTA *OPTIMIZE which may also have worked perhaps even better.
The optimizer is just code that people wrote, and obviously in complex decision-making it will not always make optimal decisions. That's why optimizing hints and locking down the decision in a package are provided for; falling back to brute force is not what I would call an operating system problem.
Having said that, brock, it was a pleasure sitting back and watching you do all the work. Great to see you have that solved and in production.
Last edited by R.Daugherty; 07-20-2009 at 08:07 PM.
Whatever the reason it didn't work before, you got it working now. Regardless of the cause, at least nobody suggested you scrap the SQL and do it with native I/O and RPG.
Note: They still make hand tools if you prefer. But most any tool you can think of now is battery-powered and does the "turning" for you with the press of a button.
Kinda how I compare SQL I/O to native programming I/O, regardless of programming language. And the SQL stays the same regardless of the program language. (Given that different syntaxes for the same functionality exist in various databases.)
Allow copy of data (ALWCPYDTA)
Specifies whether a copy of the data can be used in a SELECT statement.
*OPTIMIZE
The system determines whether to use the data retrieved directly from the database or to use a copy of the data. The decision is based on which method provides the best performance. If the Commitment control (COMMIT) parameter is not *NONE, the Allow blocking (ALWBLK) parameter should be set to *ALLREAD, when possible, for best performance.
*YES
A copy of the data is used only when necessary.
*NO
A copy of the data is not used. If a temporary copy of the data is required to perform the query, an error message is returned.
I thought so as well, which is why that was my default. But as you mention, the optimizer is just lines of code that people have written. It was making the less than optimal decision in this case.
Anyway...I'm happy it works now.