I am running Progress code on a broker (on an AIX machine), using Progress 10.2B.
When the code executes, I've noticed that a lot of time is spent just calling a .p file or persistent procedure: around 8 to 40 seconds per call.
I have a few questions:
1. Is this related to a memory leak? I have tried to avoid memory leaks by using a single copy of each persistent procedure, deleting unused procedures, etc.
2. Do I need to increase the stack size (-s parameter), since I am passing a lot of information through temp-tables to these procedures/.p files?
So far I have not seen any stack overflow errors in the logs. Request processing completes without errors; it just takes time.
3. Do I need to increase the heap size (-jvmargs parameter)? I have not noticed any heap size errors either.
4. Is this related to the resources available on the server (memory, CPU, etc.)? Do I need to reconfigure anything on the server?
If anyone has faced the same situation, your findings or solutions would be much appreciated.
Thanks in advance.
Call times like this are not a result of the AVM; the code itself is doing something that's taking all that time.
@Tim: I checked by adding a log statement at the start of the .p, and measured the time from the call to that log line using 4GL-Trace. There is no code to execute before the line that writes the log, so I believe it's the calling time only.
What is your propath setting?
Are you using -q parameter?
Some other options then:
Nilesh,
>> 2. Do I need to increase stack size (-s parameter) , as I am passing lot of information through temp-tables to these
>> procedures or .p files ?
This is the most likely cause of the problem. How are you passing these temp-tables? If you simply pass a temp-table from one program to another as a parameter, the AVM copies the data. If there is a lot of data in each temp-table, that copy takes time. I would recommend passing the temp-table handle instead.
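A minimal sketch of what passing the handle looks like, assuming illustrative names (ttData, hTT, proc.p). Note that a plain TABLE-HANDLE parameter can still deep-copy the data; adding BY-REFERENCE avoids the copy:

```
/* caller: pass a handle to the temp-table rather than the table data.
   ttData, hTT, and proc.p are illustrative names. */
DEFINE TEMP-TABLE ttData NO-UNDO
    FIELD id AS INTEGER.

DEFINE VARIABLE hTT AS HANDLE NO-UNDO.
hTT = TEMP-TABLE ttData:HANDLE.

/* BY-REFERENCE binds the callee to the caller's table instance */
RUN proc.p (INPUT TABLE-HANDLE hTT BY-REFERENCE).

/* proc.p: receive the dynamic temp-table via its handle */
DEFINE INPUT PARAMETER TABLE-HANDLE hTT.

DEFINE VARIABLE hBuf AS HANDLE NO-UNDO.
hBuf = hTT:DEFAULT-BUFFER-HANDLE.
```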
Brian
>> I would recommend passing the temp-table handle instead.
Or pass the TTs BY-REFERENCE:
RUN x.p (TABLE ttTest BY-REFERENCE).
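A slightly fuller sketch of that call, with illustrative field names (the callee defines a matching temp-table and a TABLE parameter):

```
/* caller -- illustrative */
DEFINE TEMP-TABLE ttTest NO-UNDO
    FIELD id AS INTEGER.

CREATE ttTest.
ASSIGN ttTest.id = 1.

/* RUN x.p (TABLE ttTest) would deep-copy every record;
   BY-REFERENCE binds x.p to the caller's table instance instead. */
RUN x.p (INPUT TABLE ttTest BY-REFERENCE).

/* x.p -- defines a temp-table with a matching schema */
DEFINE TEMP-TABLE ttTest NO-UNDO
    FIELD id AS INTEGER.
DEFINE INPUT PARAMETER TABLE FOR ttTest.

FOR EACH ttTest:
    /* work directly against the caller's records */
END.
```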
Thanks Tim! I will surely try it and will post my observations.
@Tim: It worked well for me. Initially the same call was taking around 35 seconds; now it takes around 200 ms.
Excellent.
@Tim: Now we are facing another issue. We are running the same code through a performance-testing phase with 1K concurrent users (the number of users and requests increases every second until it reaches 1K). Initially this call takes around 200 ms, but it grows to 4-5 seconds at full load. I have passed all temp-tables by reference, which helped a lot, but I still need to reduce the time taken to call a procedure under heavy load. Any comments would be appreciated.
Swalpa - this would require a more in-depth look at what you're doing, and not something I can do from here. Please drop me a line at tim.kuehn@tdkcs.com and we can discuss your specifics.
If the procedure is fast single-threaded and you are seeing slowness under heavy user load, it sounds like it is the AppServer that needs tuning ... or perhaps you simply have too much load for the resources.
@Thomas: Yes, the procedure is single-threaded, but we are not seeing any significant load on the server's CPU or memory. It seems the AppServer needs tuning, but I am not sure which parameters to adjust. We have already increased the -s (stack size) parameter, but it did not bring much improvement.
I don't think stack size (-s) is a performance-impacting parameter.
How many AppServer agents are available? If there are more concurrent requests than the available agents can serve, requests may get queued at the broker.
My problem is solved: I increased the value of the -Bt parameter, which specifies the number of buffers for temp-tables. Calls now execute in 50 ms even under heavy load. The problem was with the temp-tables only: passing them by reference helped, and increasing -Bt made it faster still.
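For reference, -Bt is a client/agent startup parameter, so for an AppServer it goes into the agent startup parameters. A sketch with illustrative values (the property name srvrStartupParam applies to ubroker.properties; tune the numbers for your own workload):

```
# ubroker.properties, AppServer broker section -- values are illustrative
srvrStartupParam=-p startup.p -s 128 -Bt 2048 -tmpbsize 4
```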
Excellent! Congrats on that!