We still have lots of programs that compile at runtime. They use "compile-time" parameters to build programs on the fly that have varying FOR EACH logic to query the data out of the database. We have a large number of legacy programs which gather data in this way. But my understanding is that many years ago that approach went out of favor.
I think the new approach is to use dynamic queries (QUERY-PREPARE) that serve a similar purpose in that they dynamically alter the logic and prepare different sets of data. (Docs: https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/dvref/query-prepare(-)-method.html ) I think the advantage here is that much more of the code can be compiled ahead of time, and there is less risk of runtime problems in the logic. Also the performance is probably a bit better when using this approach, since only a very small portion of code is evaluated on the fly.
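To make the comparison concrete, here is a minimal sketch of a dynamic query (the Customer table and field names are illustrative, borrowed from the sports database; this is not anyone's production code):

```abl
/* Minimal dynamic-query sketch. The WHERE clause is an ordinary
   character string, so it can be assembled at runtime instead of
   being fixed at compile time like a static FOR EACH. */
DEFINE VARIABLE hQuery AS HANDLE    NO-UNDO.
DEFINE VARIABLE cWhere AS CHARACTER NO-UNDO.

cWhere = "FOR EACH Customer NO-LOCK WHERE Customer.Balance > 1000".

CREATE QUERY hQuery.
hQuery:ADD-BUFFER(BUFFER Customer:HANDLE).
hQuery:QUERY-PREPARE(cWhere).
hQuery:QUERY-OPEN().
DO WHILE hQuery:GET-NEXT():
    DISPLAY Customer.CustNum Customer.Name.
END.
hQuery:QUERY-CLOSE().
DELETE OBJECT hQuery.
```

Only the string passed to QUERY-PREPARE varies at runtime; everything around it compiles ahead of time.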
BTW, How long ago did the original approach go out of favor (ie. the dynamic compilation of source code based on runtime conditions)?
In general it has not been straightforward to switch a legacy "compile-time" program to the corresponding QUERY-PREPARE version. Here are some things I have tried to research but haven't found an answer for:
I suspect there is a KB that contains the answer to some of these questions. Let me know if I'm overlooking something.
Any guidance would be greatly appreciated.
I'd suggest reading the 10.2A programming handbook chapter on Dynamic queries first: documentation.progress.com/.../wwhelp.htm .
Why such an old version of the handbook? Because one release later the chapter disappeared from the docs entirely, and so far it hasn't returned.
>> What is the limit in the size of a query-prepare statement?
The limits on the query WHERE clause are governed by the -inp and -tok startup parameters; if those are non-default at compile time, you may also need to provide them at runtime. Also, QUERY-PREPARE() accepts a CHARACTER argument, so the 32K character limit applies.
>> Are there "gotchas" that would prevent the execution plan of QUERY-PREPARE from running the same way as a similar program that is compiled statically?
The ones I can think of:
- If you're replacing a FOR EACH loop with a query, make sure the query is set to FORWARD-ONLY. That prevents the query from building a result list, which you won't need in this scenario and which would just waste resources. (FOR EACH doesn't use result lists either, and also only goes forward.)
- Dynamically prepared queries will select indexes at runtime. For static code, index selection is locked in at compile time. That can trigger differing behavior if you add/change indexes after the code is compiled. (But if you're compiling on the fly anyway, the chances of that happening are basically zero.)
- The new server-side joins in OpenEdge 12 are currently only supported for FOR EACH loops. If you have already upgraded, moving to dynamic queries means giving up that benefit for the time being. (Here's hoping we will soon see support for server-side joins in queries too!)
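The FORWARD-ONLY point above can be sketched as follows (buffer and table names are illustrative; the attribute must be set before the query is opened):

```abl
/* Sketch: disabling the result list for a FOR-EACH-style dynamic
   query. FORWARD-ONLY must be set before QUERY-OPEN. */
DEFINE VARIABLE hQuery AS HANDLE NO-UNDO.

CREATE QUERY hQuery.
hQuery:ADD-BUFFER(BUFFER Customer:HANDLE).
hQuery:FORWARD-ONLY = TRUE.   /* no result list, like a FOR EACH */
hQuery:QUERY-PREPARE("FOR EACH Customer NO-LOCK").
hQuery:QUERY-OPEN().
DO WHILE hQuery:GET-NEXT():
    /* process the Customer record ... */
END.
DELETE OBJECT hQuery.
```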
That link to the chapter about dynamic queries is very helpful. Thanks.
>> How do I introduce arguments to my query predicates?
What I'm wondering is how to use an argument/variable, and change it in a loop, and ensure that the query will respect its value. I.e., if I have iCustNum and I have a query that I prepared ("FOR EACH customer WHERE CustNum = iCustNum"), then the query should be prepared/compiled once, and executed repeatedly for different customers by changing the value of iCustNum.
But I haven't been able to accomplish anything along these lines. It does not seem possible to have a dynamic query that uses an argument/variable. The only way I've been able to get a different customer is to change the literals in the character string that is sent to QUERY-PREPARE. I.e., instead of using iCustNum as an argument, you have to bake in the current *value*: "FOR EACH customer WHERE CustNum = " + STRING(iCustNum).
It seems unnatural to repeatedly compile a piece of code that isn't really changing. Only the argument to it is changing. In a sense it seems to me that in some regards the "dynamic" query is even less dynamic than a statically compiled program that references iCustNum from a locally scoped variable.
Am I missing something?
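A sketch of the pattern being described (Customer table and the 200-iteration loop are illustrative): because the *value* is baked into the string, every iteration pays for a fresh QUERY-PREPARE.

```abl
/* Sketch of the value-baking workaround: the query string must be
   rebuilt and re-prepared whenever iCustNum changes. */
DEFINE VARIABLE hQuery   AS HANDLE  NO-UNDO.
DEFINE VARIABLE iCustNum AS INTEGER NO-UNDO.

CREATE QUERY hQuery.
hQuery:ADD-BUFFER(BUFFER Customer:HANDLE).

DO iCustNum = 1 TO 200:
    /* the current value is concatenated in, so each pass re-prepares */
    hQuery:QUERY-PREPARE("FOR EACH Customer NO-LOCK WHERE Customer.CustNum = "
                         + STRING(iCustNum)).
    hQuery:QUERY-OPEN().
    DO WHILE hQuery:GET-NEXT():
        /* process the Customer record ... */
    END.
    hQuery:QUERY-CLOSE().
END.
DELETE OBJECT hQuery.
```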
Brian Maher
Principal Engineer, Technical Support
Progress
14 Oak Park | Bedford, MA 01730 | USA
The one big gotcha of anything dynamic is tidying up after yourself. Anything you CREATE you should also DELETE at the end. Otherwise you will find memory leaks. The FINALLY block is useful here, but if you are building a query inside a loop of some kind, then you should also ensure you DELETE inside the loop.
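A sketch of that cleanup pattern, with the DELETE OBJECT calls in a FINALLY block so they run even when an error is raised (table name is illustrative):

```abl
/* Sketch: tidying up dynamic objects. Handles created with CREATE
   QUERY / CREATE BUFFER leak if they are never deleted. */
DEFINE VARIABLE hQuery  AS HANDLE NO-UNDO.
DEFINE VARIABLE hBuffer AS HANDLE NO-UNDO.

CREATE BUFFER hBuffer FOR TABLE "Customer".
CREATE QUERY hQuery.
hQuery:ADD-BUFFER(hBuffer).
hQuery:QUERY-PREPARE("FOR EACH Customer NO-LOCK").
hQuery:QUERY-OPEN().
DO WHILE hQuery:GET-NEXT():
    /* process the record ... */
END.

FINALLY:
    /* runs on normal exit and on error alike */
    IF VALID-HANDLE(hQuery)  THEN DELETE OBJECT hQuery.
    IF VALID-HANDLE(hBuffer) THEN DELETE OBJECT hBuffer.
END FINALLY.
```

If the query is created inside a loop, the same DELETE OBJECT calls belong inside the loop as well.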
By the way, Brian's recommendation of using SUBSTITUTE() is imperative as soon as you start using non-character variables in your queries. If for some reason your data is unknown, concatenation will result in an unknown query string, which gives horrible errors and little clue about where it fell apart. At least with SUBSTITUTE() you will get a valid string, even if the content is invalid. You can then establish the cause of the issue.
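A sketch of that SUBSTITUTE() behavior (variable names are illustrative): an unknown argument comes through as ? inside the built string, instead of turning the whole string unknown the way concatenation does.

```abl
/* Sketch: building a WHERE string with SUBSTITUTE instead of
   concatenation. SUBSTITUTE converts non-character arguments for
   you, and an unknown argument yields "?" in the result rather
   than making the whole expression unknown. */
DEFINE VARIABLE cWhere AS CHARACTER NO-UNDO.
DEFINE VARIABLE iNum   AS INTEGER   NO-UNDO INITIAL 42.
DEFINE VARIABLE dtFrom AS DATE      NO-UNDO INITIAL ?.

cWhere = SUBSTITUTE("FOR EACH Order NO-LOCK WHERE Order.CustNum = &1 AND Order.OrderDate >= &2",
                    iNum, dtFrom).
/* cWhere is still a valid string; the unknown date appears as ?,
   so the broken piece of the query is easy to spot */
```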
Another point, you talk about dynamic queries being compiled on the fly. They are not. They are compiled at compile time. They are evaluated on the fly. This is quite different from compiling them on the fly.
The solution to your FOR EACH query is to literally build a new query for each iteration of the FOR EACH.
@Brian - thanks for the pointer to SUBSTITUTE. That is helpful. I remember seeing it but haven't used it.
@James - At least there is something that is internally compiled on the fly. The query string in the "QUERY-PREPARE" statement uses the ABL language, and that needs to be transformed into something that can execute on a CPU. That transformation is what I'm referring to. Here are the docs for QUERY-PREPARE:
Compiles a predicate (query condition).
It seems like it could be done once, and then swap out the value of the integer (iCustNum) on subsequent iterations.
As things work today, it sounds like there is no solution other than to rebuild an entirely new query each time I change iCustNum. Let me know if that is not the case. QUERY-PREPARE is not a "free" operation; it is actually quite expensive from what I can tell, especially in relation to the amount of schema that may need to be retrieved over client/server networking.
In addition to SUBSTITUTE being a great new friend, take a look at the QUOTER function. It can help a lot when injecting character parameters.
>> What I'm wondering is how to use an argument/variable, and change it in a loop, and ensure that the query will respect its value. I.e., if I have iCustNum and I have a query that I prepared ("FOR EACH customer WHERE CustNum = iCustNum"), then the query should be prepared/compiled once, and executed repeatedly for different customers by changing the value of iCustNum.
>> It seems unnatural to repeatedly compile a piece of code that isn't really changing.
> If it isn't changing then you may not need a dynamic query.
I should have said it isn't really changing "within the inner loop". The customer number is changing in a loop, but the syntax of the query is not.
I will concede that my example was overly trivial and doesn't prepare the original query in a very dynamic way. But let's just assume I need a dynamic query, despite the trivial example. E.g., let's say I need a dynamic query to be prepared, and then I want to run it about 200 times, based on a list of primary keys. I would like to be able to prepare the dynamic query and take the performance hit of QUERY-PREPARE a *single* time rather than 200 times. The other iterations are a variation on the query, but only insofar as they need a different value for iCustNum.
> Short answer is you can't.
That is what I feared. This is one place where the "dynamic compile of source" approach may still be preferred, since the predicates in a dynamically compiled FOR EACH can easily interact with all variables or parameters that are in scope. The QUERY-PREPARE approach is prone to a ton of repetitive, compilation-related overhead, which can be especially costly depending on the outer looping.
Remote compilation is a bigger penalty for customers that are migrating to client-server PASOE. Running QUERY-PREPARE a hundred times is costly.
> The QUERY-PREPARE is not a "free" operation, it is actually quite expensive from what I can tell, especially in relation to the amount of schema that may need to be retrieved over client/server networking.
You only pay that price once. The AVM caches the schema for the field info for a given table the first time a query is prepared for that table.
@tim What you describe would be helpful behavior, but it is not true based on my observations. There isn't any schema caching from what I can tell. Even if you *explicitly* specify a local schema cache file, it doesn't appear to be used for this purpose (dynamic queries).
A few months ago I opened a support case on this issue and they created a KB for it:
knowledgebase.progress.com/.../QUERY-PREPARE-generates-excessive-network-traffic
In my example I was using an expression that contained a bracketed predicate on CustNum over and over:
WHERE (Customer.CustNum = 1) OR (Customer.CustNum = 2) OR
      (Customer.CustNum = 3) OR (Customer.CustNum = 4) OR
      (Customer.CustNum = 5) OR (Customer.CustNum = 9) OR
      (Customer.CustNum = 10) ...
If you open Wireshark during the QUERY-PREPARE, you can see that during the operation the ABL is interrogating the database schema over and over again for the same CustNum field information. The ABL doesn't cache or reuse schema during the evaluation of a single statement, let alone reuse it for subsequent statements.
Here is the full repro if you would like to see this happening on the sports database:
Let me dig a little deeper. In my small test, it looked like there was no traffic after the first query prepare.
I stand corrected! Thank you for the detailed info in the support case.
Thanks for the update Tim. I've been struggling with getting QUERY-PREPARE to perform well, especially when using lots of queries. I appreciate any tips and suggestions.
Another approach I've been evaluating is quite a lot more promising... we may start relying more on the SQL92 engine. This can be initiated right within the ABL code, and it can even be done within the *same* agent process, so long as PASOE is running on Windows. This is accomplished by using the "CLR Bridge" to prepare an ADO.Net adapter with a relevant SELECT statement. Then the adapter method (Fill) will populate the data. Since the resulting data is initially presented to us on the .Net side (not in ABL), there is additional work to migrate the data back over to a ProDataSet (work that would not have been necessary if QUERY-PREPARE had been sufficient for our queries).
Migrating the data back from the CLR Bridge to a ProDataSet was a bit of a trick. If you aren't careful, then this movement of data can be very slow (despite the fact that it is simply moving data from one portion of process memory to another, within the same PASOE msagent). In fact, if you are really unlucky, then moving the data from one part of memory to another can be even *slower* than performing the Adapter.Fill operation in the first place - via the SQL92 engine.
Insofar as moving data back from the CLR Bridge to a ProDataSet ... I've had two discussions about moving data between the runtimes (where both runtimes are hosted in the same msagent process):
https://community.progress.com/community_groups/openedge_development/f/19/t/57994
https://community.progress.com/community_groups/openedge_development/f/19/p/58053/198401
The approach I'm currently settled on is to serialize the ADO.Net data into a JSON stream using Newtonsoft. Then I deserialize it into my ABL ProDataSet using READ-JSON(). This happens entirely in memory without sending data out to the file system. It is a bit slow (about 100 ms per 10,000 records), but typically it doesn't exceed the amount of time it takes to run the SQL92 query in the first place. As an added bonus, I retrieve several different ROWID columns for any foreign-key references that may be of interest. (The ROWIDs are serialized as 16-digit hex strings and can be used for gathering additional data via TO-ROWID().)
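A rough sketch of that hand-off, under several assumptions: the CLR Bridge and Newtonsoft.Json are available, a System.Data.DataTable was filled earlier by an ADO.Net adapter, and ttResult is a temp-table shaped like the SELECT. All names here are illustrative, not the poster's production code:

```abl
/* Sketch: serialize an ADO.Net DataTable to JSON in memory via
   Newtonsoft, then load it into an ABL temp-table with READ-JSON. */
USING Newtonsoft.Json.* FROM ASSEMBLY.

DEFINE TEMP-TABLE ttResult NO-UNDO
    FIELD CustNum AS INTEGER
    FIELD CustName AS CHARACTER.

DEFINE VARIABLE oTable AS System.Data.DataTable NO-UNDO.
DEFINE VARIABLE lcJson AS LONGCHAR              NO-UNDO.
DEFINE VARIABLE hTT    AS HANDLE                NO-UNDO.

/* ... oTable was filled earlier by Adapter:Fill() ... */

lcJson = JsonConvert:SerializeObject(oTable).

/* Newtonsoft emits a bare JSON array for a DataTable; wrap it
   with the temp-table name so READ-JSON can match it up. */
lcJson = '~{"ttResult": ' + lcJson + '}'.

hTT = TEMP-TABLE ttResult:HANDLE.
hTT:READ-JSON("LONGCHAR", lcJson, "EMPTY").
```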
This current approach involves several moving parts - but the end result is worth it. The SQL92 engine offers a lot more than you get from QUERY-PREPARE. In fact, with SQL92 I already have my "server-side joins" and my "multi-threaded server" features! This is despite the fact that we are still running OE 11.7 (Progress is making these things available to normal ABL programs in OE version 12).
Since there are some moving parts, it is important to build a bit of abstraction over the top of this (using OOABL). With a good abstraction layer, the syntax for using the SQL92 engine can appear just as straight-forward as using QUERY-PREPARE.
After the initial SQL92 query is made, and after a TT is filled with our data, the last step is to chase down any interesting foreign-key records you might still want. If you have prepared for this in advance, you will have the 16-digit hex ROWIDs for the foreign-key records. You can either go get these one at a time (using WHERE ROWID(customer) = TO-ROWID(query_output.customer_rowid_hex)) or build a dynamic query to get them in batches (100 ROWIDs at a time). This final step is actually a suitable problem for the use of QUERY-PREPARE. It is worthwhile to retrieve the foreign-key records in batches if there are hundreds or thousands of them, and if you are retrieving them via "client/server" (the PASOE instance is remote from the database).
Sorry for this long post, but I thought it was important to share it here, since I've gotten so much help while working on this approach. Migrating our "shared memory" code to a remote PASOE instance has been challenging. Of all the challenges we've faced while migrating to PASOE, the performance of "client-server queries" has been the most challenging (more challenging than everything else combined). We've already spent a year migrating to PASOE from our "classic" appservers that were running with "shared memory" connections. And on top of that, we're simultaneously migrating our ABL from the HP-UX platform to the Windows platform. Moving to the Windows platform makes it possible to use the CLR Bridge for our SQL92 queries. When we were on HP-UX it would have involved even more complexity to make use of SQL92 from ABL code. A lot of the work would probably have been done "out-of-process" (probably in some scary shell scripts or something like that.)
> On Jun 7, 2019, at 12:03 PM, James Palmer wrote:
>
> Another point, you talk about dynamic queries being compiled on the fly. They are not. They are compiled at compile time. They are evaluated on the fly. This is quite different from compiling them on the fly.
Well ... almost. Dynamic queries are compiled when you do QUERY-PREPARE.
>> instead of using iCustNum as an argument, you have to bake in the current *value* : "FOR EACH customer WHERE CustNum = " + STRING(iCustNum).
But this is repeatedly changing ... in a very important way. While it may seem like a trivial change, there is an index on the customer number, and when you change the customer number in the WHERE clause, that changes the index equality bracketing (or range bracketing) that must be used for the query. Doing that requires re-analysing the query, so it has to be prepared again.
Of course, larger changes in the WHERE clause will have even bigger effects.
I haven't seen this suggested yet, though I may have missed it:
Put your 200 primary keys into a temp-table and add that to the dynamic query.
DEFINE TEMP-TABLE ttTest NO-UNDO
    FIELD SomeKey AS CHARACTER.
CREATE ttTest. ASSIGN ttTest.SomeKey = "xxx".
CREATE ttTest. ASSIGN ttTest.SomeKey = "yyy".
DEFINE VARIABLE hQuery AS HANDLE NO-UNDO.
CREATE QUERY hQuery.
hQuery:ADD-BUFFER(BUFFER ttTest:HANDLE).
hQuery:ADD-BUFFER(BUFFER patron:HANDLE).
hQuery:QUERY-PREPARE("FOR EACH ttTest, EACH patron NO-LOCK WHERE patron.patronid = ttTest.SomeKey AND patron.deleted = 0").
hQuery:QUERY-OPEN().
REPEAT WHILE hQuery:GET-NEXT():
    DISPLAY patron.patronid.
END.
BTW normally I do a DO WHILE, I used REPEAT just so the DISPLAY would show all the values instead of the last value.
Also, watch out for the QUOTER function. It is incomplete. You need to sanitize user inputs for things like tilde and backslash. Also be sure to use it for all values, not just character values.
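A sketch of the extra sanitizing mentioned above (the helper name and the exact character set are illustrative assumptions, not a standard function; adjust to your own data):

```abl
/* Hypothetical helper: escape characters QUOTER does not handle
   before quoting a user-supplied value for a query string.
   CHR(126) is tilde, CHR(92) is backslash. */
FUNCTION SanitizeForQuery RETURNS CHARACTER
    (INPUT pcValue AS CHARACTER):

    /* double up tildes and backslashes so they survive the
       re-parse of the prepared query string */
    pcValue = REPLACE(pcValue, CHR(126), CHR(126) + CHR(126)).
    pcValue = REPLACE(pcValue, CHR(92),  CHR(92)  + CHR(92)).
    RETURN QUOTER(pcValue).
END FUNCTION.

DEFINE VARIABLE cUserInput AS CHARACTER NO-UNDO.
DEFINE VARIABLE cWhere     AS CHARACTER NO-UNDO.

/* usage: apply it to every injected value, not just characters */
cWhere = "FOR EACH patron NO-LOCK WHERE patron.name = "
       + SanitizeForQuery(cUserInput).
```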
@Jon That is helpful. I wonder why the dynamic queries will allow us to use the temp table from the local session, but not the local memory variables. It seems to be a bit of an inconsistency.
I like the fact that, if you can structure the query this way, this will allow us to do the QUERY-PREPARE (the compilation) just one time and reuse the compiled query. I assume I can even close and reopen the query without repeating the QUERY-PREPARE operation. As you point out, it looks like I can use the ttTest to hold any necessary arguments for the rest of the query. It would still be nice if parameters could be sent to the query without having to go through an intermediate temp table.
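To confirm the reuse idea in code, here is a sketch building on Jon's ttTest/patron example (the patron table is borrowed from that example and is illustrative): prepare once, then change the parameter record and re-open.

```abl
/* Sketch: prepare once, then change the parameter record and
   re-open -- no further QUERY-PREPARE calls needed. */
DEFINE TEMP-TABLE ttParam NO-UNDO
    FIELD SomeKey AS CHARACTER.
DEFINE VARIABLE hQuery AS HANDLE NO-UNDO.

CREATE ttParam.
CREATE QUERY hQuery.
hQuery:ADD-BUFFER(BUFFER ttParam:HANDLE).
hQuery:ADD-BUFFER(BUFFER patron:HANDLE).
hQuery:QUERY-PREPARE("FOR EACH ttParam, EACH patron NO-LOCK WHERE patron.patronid = ttParam.SomeKey").

/* first run */
ttParam.SomeKey = "xxx".
hQuery:QUERY-OPEN().
DO WHILE hQuery:GET-NEXT():
    /* process patron ... */
END.
hQuery:QUERY-CLOSE().

/* second run: new value, same prepared query */
ttParam.SomeKey = "yyy".
hQuery:QUERY-OPEN().
DO WHILE hQuery:GET-NEXT():
    /* process patron ... */
END.
hQuery:QUERY-CLOSE().
DELETE OBJECT hQuery.
```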
You can always reference field values from the buffer that you are querying on in a dynamic query. The reason we cannot allow variables is that we do not have the same compiler context when we compile the query string for QUERY-PREPARE. I.e., We don't have any of the information that we get from compiling everything in the .p/.cls before this point. So we don't actually know what foo is if you are trying to compare to a variable called foo. Once we get to the run-time, we no longer know what the names of the variables are (yes, that is true!). We only know how to find their values (e.g., it is the 3rd field in the local variable buffer). That is the information that is saved in the r-code. So if you reference "foo", we won't know how to resolve that.
Hi Laura
I understand your reasoning, but would the compiler not be able to compile that reference into the query?
for each relation where relation.name = foo
would become
for each relation where relation.name = 3rd field in the local variable buffer
Or am I missing something?
We can't evaluate the query string for QUERY-PREPARE until run-time. The value there is not a constant, it is a character expression. i.e., It can be the value of a variable or the return value of a function, or the result of a SUBSTRING function on a variable or a database field, etc, etc. Therefore, during the compilation of the .p/.cls, the compiler stores information in the r-code that allows us to evaluate the expression. But it knows nothing about what the expression's value is going to be. So yes, in the example above, which is a FOR EACH statement, the compiler would essentially store what you show (3rd field in the local var buffer). But for a QUERY-PREPARE all you have is a string whose value is: "WHERE relation.name = foo". Maybe that really came from a line of code that said: "qry:QUERY-PREPARE(whereClause)" where whereClause is a variable, and at run-time it evaluates to "WHERE relation.name = foo". Now the compiler has to evaluate this, and now it does not know what foo is.
Hi
I think there is a workaround to avoid multiple QUERY-PREPAREs.
The timings below come from a breakout session I did at Noordwijk in October 2016:
Elapsed time for a "query-prepare": 0.000990 sec
Elapsed time for the "workaround": 0.000044 sec
Explanation
With Multiple Query Prepare :
the-ShipTo = "ShipTo" .
CREATE BUFFER bh-ShipTo FOR TABLE the-ShipTo NO-ERROR .
the-Invoice = "Invoice".
CREATE BUFFER bh-Invoice FOR TABLE the-Invoice NO-ERROR .
the-State = "State" .
CREATE BUFFER bh-State FOR TABLE the-State NO-ERROR .
Each loop will be:
CREATE QUERY hQry.
hQry:SET-BUFFERS(bh-ShipTo, bh-State).
qprepare = "FOR EACH ShipTo WHERE ShipTo.CustNum = " + string( I-custnum) + " NO-LOCK , FIRST State WHERE State.State = ShipTo.State NO-LOCK " .
logbid = hQry:QUERY-PREPARE(qprepare) NO-ERROR.
IF logbid AND hQry:QUERY-OPEN THEN
hQry:GET-FIRST(NO-LOCK).
DO WHILE NOT hQry:QUERY-OFF-END :
....
The workaround:
You prepare your query one time:
the-ShipTo = "ShipTo" .
CREATE BUFFER bh-ShipTo FOR TABLE the-ShipTo NO-ERROR .
the-Invoice = "Invoice".
CREATE BUFFER bh-Invoice FOR TABLE the-Invoice NO-ERROR .
the-State = "State" .
CREATE BUFFER bh-State FOR TABLE the-State NO-ERROR .
CREATE QUERY hQry.
hQry:SET-BUFFERS(BUFFER ttparameter:HANDLE, bh-ShipTo, bh-State).
qprepare = "FOR Each ttparameter no-lock , EACH ShipTo WHERE ShipTo.CustNum = ttparameter.ParamInt1 NO-LOCK , FIRST State WHERE State.State = ShipTo.State NO-LOCK " .
logbid = hQry:QUERY-PREPARE(qprepare) .
The loop will be :
(one update of the temp-table record, no more QUERY-PREPARE)
FIND FIRST ttparameter NO-LOCK NO-ERROR .
IF NOT AVAIL ttparameter THEN DO :
CREATE ttparameter .
END.
ASSIGN ttparameter.ParamInt1 = I-custnum .
VALIDATE ttparameter.
IF hQry:QUERY-OPEN THEN
hQry:GET-FIRST(NO-LOCK).
DO WHILE NOT hQry:QUERY-OFF-END :
    /* ... process the record ... */
    hQry:GET-NEXT(NO-LOCK).
END.
Thank you for that very thorough example. This was not an easy technique to discover, and many people probably wouldn't find it on their own. The technique should probably be mentioned in the QUERY-PREPARE documentation. It is not obvious that there is a special way to get local session variables into our query predicates without relying on any "hard-coded" literals. Concatenating the literals into our dynamic queries seems very primitive, and I feel dirty every time I do it. ;)
Nowadays with PASOE running on a different machine (outside of "shared memory"), the incremental penalty for multiple "query prepare" is probably even higher than what you described. Whenever the compiler is performing the query prepare operation, it doesn't seem willing or able to reuse schema from the prior iteration of the loop, so there is lots of repetitive chatter with the remote database, simply to gather the required schema. In fact it can be more expensive than the final execution of the "query" itself.
Your example is similar to Jon Brock's above. One thing I was trying to say before is that there seems to be an inconsistency in the fact that we can use local TT records as parameters to the dynamic query but *not* the memory variables themselves. It seems to me that *both* of these things should be accessible to the query. I can see why there may be technical challenges (per Laura's comments). But there should also be technical solutions. For example, the compiler could give us some free "syntactic sugar" in the place of the TT parameter record. The compiler might generate a "hidden" TT for us using all of the local variables in scope, and then it would effectively behave like there was a single outer loop on that single TT record before processing the rest of the query (thereby bringing all of the local memory variables into the scope of the dynamic query as well). I'm not an expert on it by any means, but some compilers do fancy things to bring memory variables into the scope of a totally different function (called a closure), and it seems like an ABL compiler should be capable of bringing memory variables into the scope of QUERY-PREPARE as well.
Or maybe as a less elaborate solution, the QUERY-PREPARE just needs another set of parameters where we can explicitly tell it what local memory variables should be pulled into the scope of the query preparation. It would define the corresponding/equivalent memory variables in the new execution context, and transfer values whenever the query is opened.
Hi
The difference between Jon Brock's example and my example is that you do not have to know the value of the parameter before the QUERY-PREPARE.
We evaluate the parameters on the fly. In the example below our starting point is the result of a FOR EACH on customer, order, and orderline; depending on the result of the procedure "What-case-is-it_return_A-B-C", I will run a different query on "different" tables with different "parameters".
So I have 3 different types of dynamic queries with different values.
In the example below I will prepare each type of query only one time (if needed), even if I then run it several times.
FOR EACH customer NO-LOCK ,
EACH order NO-LOCK
WHERE order.custnum = customer.custnum ,
EACH orderline NO-LOCK
WHERE orderline.ordernum = order.ordernum :
Run What-case-is-it_return_A-B-C (OUTPUT V-MY-CASE ,
OUTPUT v-param-1 ,
OUTPUT v-param-2 ) .
CASE V-MY-CASE :
WHEN "A" THEN DO:
IF v-CASE-A-Prepare = FALSE THEN DO :
the-ShipTo-Case-A = "ShipTo" .
CREATE BUFFER bh-ShipTo-Case-A FOR TABLE the-ShipTo-Case-A NO-ERROR .
the-State-Case-A = "State" .
CREATE BUFFER bh-State-Case-A FOR TABLE the-State-Case-A NO-ERROR .
CREATE QUERY hQry-Case-A.
hQry-Case-A:SET-BUFFERS(BUFFER ttparameter-Case-A:HANDLE, bh-ShipTo-Case-A, bh-State-Case-A).
qprepare-Case-A = "FOR Each ttparameter-Case-A no-lock , " +
" EACH ShipTo WHERE ShipTo.CustNum = ttparameter-Case-A.ParamInt1 NO-LOCK , " +
" FIRST State WHERE State.State = ShipTo.State NO-LOCK " .
logbid = hQry-Case-A:QUERY-PREPARE(qprepare-Case-A) .
v-CASE-A-Prepare = TRUE .
EMPTY TEMP-TABLE ttparameter-Case-A .
CREATE ttparameter-case-A .
END.
FIND FIRST ttparameter-Case-A NO-LOCK NO-ERROR .
ASSIGN ttparameter-Case-A.ParamInt1 = v-param-1 .
VALIDATE ttparameter-Case-A.
IF hQry-Case-A:QUERY-OPEN THEN
hQry-Case-A:GET-FIRST(NO-LOCK).
DO WHILE NOT hQry-Case-A:QUERY-OFF-END :
/* Do what you want*/
END.
END.
WHEN "B" THEN DO:
IF v-case-B-Prepare = FALSE THEN DO :
the-Invoice-case-B = "Invoice".
CREATE BUFFER bh-Invoice-case-B FOR TABLE the-Invoice-case-B NO-ERROR .
CREATE QUERY hQry-case-B.
hQry-case-B:SET-BUFFERS(BUFFER ttparameter-case-B:HANDLE,
bh-Invoice-case-B ).
qprepare-case-B = "FOR Each ttparameter-case-B no-lock , " +
" FIRST Invoice WHERE Invoice.ordernum = ttparameter-case-B.ParamInt1 And Invoice.InvoiceDate >= ttparameter-case-B.ParamInt2 NO-LOCK " .
logbid = hQry-case-B:QUERY-PREPARE(qprepare-case-B) .
v-case-B-Prepare = TRUE .
EMPTY TEMP-TABLE ttparameter-case-B .
CREATE ttparameter-case-B .
END.
FIND FIRST ttparameter-case-B NO-LOCK NO-ERROR .
ASSIGN ttparameter-case-B.ParamInt1 = order.ordernum
ttparameter-case-B.ParamInt2 = v-param-2 .
VALIDATE ttparameter-case-B.
IF hQry-case-B:QUERY-OPEN THEN
hQry-case-B:GET-FIRST(NO-LOCK).
DO WHILE NOT hQry-case-B:QUERY-OFF-END :
/* Do what you want*/
END.
END.
WHEN "C" THEN DO:
IF v-case-C-Prepare = FALSE THEN DO :
the-ShipTo-case-C = "ShipTo" .
CREATE BUFFER bh-ShipTo-case-C FOR TABLE the-ShipTo-case-C NO-ERROR .
the-Invoice-case-C = "Invoice".
CREATE BUFFER bh-Invoice-case-C FOR TABLE the-Invoice-case-C NO-ERROR .
the-State-case-C = "State" .
CREATE BUFFER bh-State-case-C FOR TABLE the-State-case-C NO-ERROR .
CREATE QUERY hQry-case-C.
hQry-case-C:SET-BUFFERS(BUFFER ttparameter-case-C:HANDLE,
bh-ShipTo-case-C,
bh-State-case-C ,
bh-Invoice-case-C).
qprepare-case-C = "FOR Each ttparameter-case-C no-lock , " +
" EACH ShipTo WHERE ShipTo.CustNum = ttparameter-case-C.ParamInt1 And ShipTo.name MATCHES ttparameter-case-C.ParamInt2 NO-LOCK , " +
" FIRST State WHERE State.State = ShipTo.State NO-LOCK , " +
" FIRST Invoice WHERE Invoice.ordernum = ttparameter-case-C.ParamInt3 NO-LOCK " .
logbid = hQry-case-C:QUERY-PREPARE(qprepare-case-C) .
v-case-C-Prepare = TRUE .
EMPTY TEMP-TABLE ttparameter-case-C .
CREATE ttparameter-case-C .
END.
FIND FIRST ttparameter-case-C NO-LOCK NO-ERROR .
ASSIGN ttparameter-case-C.ParamInt1 = v-param-1
ttparameter-case-C.ParamInt2 = v-param-2
ttparameter-case-C.ParamInt3 = order.ordernum
.
VALIDATE ttparameter-case-C.
IF hQry-case-C:QUERY-OPEN THEN
hQry-case-C:GET-FIRST(NO-LOCK).
DO WHILE NOT hQry-case-C:QUERY-OFF-END :
/* Do what you want*/
END.
END.
END CASE.
END.
And finally: DELETE whatever dynamic objects (queries, buffers) are no longer needed.
>> The difference between Jon Brock's example and my example is that you do not have to know the value of the parameter before the QUERY-PREPARE
True. Although once Jon started using a temp-table, it seemed to imply that the query was re-usable too (with new parameter values). It would be a cruel thing if the query could *not* be used again, despite the referencing of the temp-table. In my follow-up post I pointed out that you ... "can even close and reopen the query without repeating the QUERY-PREPARE operation".
Thanks for the additional sample code. This is helpful. It is too bad the docs don't make it clear that this is the (only?) way to create parameterized dynamic queries that don't need to be constantly recompiled. Many scenarios for using "dynamic queries" would perform much better if they didn't need to be recompiled whenever the parameter data is changing.