I'd like to start benefiting from the performance characteristics of the "-q" option - even in development. For most of our programming work, the OE/ABL stuff we create is exposed only via appservers (state-free and some state-reset).
The "-q" option is also known as the quick request startup parameter. It is one of the most important startup parameters for the performance of the ABL code itself. (http://knowledgebase.progress.com/articles/Article/P12659 )
It is important to start using the "-q" behavior in development in order to (1) have faster and more efficient code/test/repeat cycles, and (2) avoid extra work finding and resolving performance issues after the fact - i.e., chasing apparent performance problems that turn out to be caused by nothing more than the missing "-q" option.
And (3) there is nothing so troublesome (as we experience today) as running some external .r programs in a tight loop of a thousand iterations - only to realize that all the file-stat operations on the r-code files are the biggest bottleneck by a factor of 10, adding several seconds to something that should take milliseconds. This is especially true when r-code is hosted on network-attached storage.
Is there a way to use "-q" in development while still allowing myself a way to flush my r-code as a critical step in the code/test/repeat cycle? I found this weird hack ("Flush Hack"): http://knowledgebase.progress.com/articles/Article/P130577 ... Does anyone have experience with it? Can I use it reliably in a state-free agent when a client application is starting up? And if it only flushes cached programs in a single agent, how do I get the other state-free agents flushed? I.e., can the agent that flushed itself with the "Flush Hack" also shell out to the OS and use "asbman -trimservers" to take care of the other agents?
Along the same lines as "asbman", another option I'm considering is just setting appserver broker properties so that the "-q" is somewhat balanced out by an "auto-flush" of r-code. E.g., I might set minSrvrInstance to zero and autoTrimTimeout to 10 seconds so that, after a ten-second period of inactivity, the appserver agents will be trimmed, thereby picking up new r-code when the agents are restarted. It is rare that my code/test/repeat cycles are faster than 10 seconds, so I can see this being a potential solution.
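For reference, the two broker settings I mean would sit in the broker's section of ubroker.properties, roughly like this (the broker name "asbroker1" is just a placeholder for whatever your AppServer instance is called):

```
[UBroker.AS.asbroker1]
    # keep no idle agents around permanently
    minSrvrInstance=0
    # trim idle agents after 10 seconds of inactivity (value is in seconds)
    autoTrimTimeout=10
```

The broker has to be restarted (or the properties reloaded) for these to take effect.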
It seems to me that someone must have solved all these problems before. Please let me know if anyone has a good solution that allows them to benefit from the "quick request" startup parameter in development.
What are you trying to accomplish in development that would require a flush of r-code? Are you using PDSOE? Are you using ABL unit testing or something like PCT?
A lot of these tools will start a new AVM each time you want to run/debug/test something. PDSOE will trim the appserver automatically each time you publish code to the appserver/webspeed. If you're not using PDSOE, you can simply run "asbman -trimservers 1000" against your broker to trim everything, which removes all the cached r-code from memory.
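If it helps, the trim step is easy to script. Something along these lines (the broker name "asbroker1" is a placeholder, and -trimservers takes the number of agents to trim, so an oversized number effectively trims them all):

```
# trim all idle agents for the broker so they restart with fresh r-code
asbman -name asbroker1 -trimservers 1000

# optionally confirm the broker/agent status afterwards
asbman -name asbroker1 -query
```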
PCT and the ABL unit testing frameworks would be restarting the client on each run. Can you set these up to trim appserver agents before each test execution?
I do use PDSOE and I compile directly into the same directories used by the OE appserver agents.
I only work in ABL half the time, and that ABL code is executed in the back-end tier of a two-tier app that has a Windows .NET front end (Visual Studio .Net).
Since I spend more time in .Net, I don't need my OE/ABL stuff (round-trips to the appserver) running sluggishly (without "-q" - as if my r-code could potentially change at any moment). But I still want to be able to occasionally make an ABL change, and even compile it when necessary for performance purposes. This may happen, say, only once for every ten times that my .Net code is changed, recompiled, and tested.
I'm now using the approach where I change minSrvrInstance to zero and autoTrimTimeout to 10 seconds so that after a ten second period of inactivity, the appserver agents will be trimmed, thereby picking up new r-code when the agents are restarted.
This seems to work pretty well. As expected, the approach does add an extra second or so whenever my .Net application re-launches and connects for the first time (after the 10-sec-app-server-auto-trim). Despite the extra second delay, I think I still like this approach (better than the "flush hack" which is really only a good solution for a single ABL session).
... And one day, if I don't want my agents auto-trimming at all (e.g., when my *entire* day is going to be spent on the VS.Net side of things), I'll just create a keep-alive utility that sends some bogus requests to those appserver agents every five seconds in order to make sure they are always "warm" and ready for round-trips.
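If I go that route, the keep-alive could be a trivial ABL session looping over a server handle - a rough sketch only, where the connection parameters and the no-op procedure "noop.p" are placeholders for my environment (and, since the appserver is state-free, each request may land on a different agent, so this keeps the pool warm rather than any one specific agent):

```
/* keepalive.p - hedged sketch of a keep-alive pinger */
DEFINE VARIABLE hServer AS HANDLE NO-UNDO.

CREATE SERVER hServer.
/* placeholder connection parameters - adjust broker name, host, port */
hServer:CONNECT("-AppService asbroker1 -H localhost -S 5162").

REPEAT:
    /* any trivial remote call prevents the agents from idling out */
    RUN noop.p ON SERVER hServer.
    PAUSE 5 NO-MESSAGE.
END.
```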