Where are details found about the use of the -T folder files lbia, rcda, srta, dbi files?
At this time srta is in the spotlight. I want a deeper understanding of what affects srta and what startup values will have an impact.
A curious observation is when a program is running we see the srta file is 0 kb and it stays that way. Using Windows Resource monitor, Disk tab we see write B/sec to srta file. We have plenty of RAM to use so if we can switch that I/O activity to RAM with a start up parameter that would be great.
Windows server 2012 r2, OpenEdge 11.5.1, client networking.
srt files are "sort files". The primary usage is to filter and sort data that the server wasn't able to handle for you -- like when your query doesn't have good bracketing or your BY phrase isn't supported by an index. There are no parameters to control this -- you need to write better queries.
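To make that concrete, here is a hypothetical ABL sketch (table and field names are made up) of the difference between a sort the server can resolve via an index and one that falls back to client-side sorting in the srt file:

```
/* Hypothetical example: assume "customer" has an index on cust-num
   but no index on the free-form "comments" field.                   */

/* The cust-num index satisfies the BY phrase, so no client-side
   sort is needed.                                                   */
FOR EACH customer NO-LOCK BY customer.cust-num:
END.

/* No index orders by comments, so the client must sort the result
   set itself -- that sort work spills into the -T srt file.         */
FOR EACH customer NO-LOCK BY customer.comments:
END.
```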
dbi files are overflow for temp-tables. If the data in your temp-tables no longer fits in memory (controlled by -Bt) then the excess will end up in the DBI file.
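As an illustration (values are placeholders, not recommendations), the parameters involved might look like this in a client .pf file:

```
# illustrative client .pf fragment
-T d:\wrktemp     # directory where lbi/rcd/srt/dbi files are created
-Bt 5000          # temp-table buffer size (blocks); raise it to keep
                  # more temp-table data in memory instead of the dbi file
-tmpbsize 8      # temp-table block size in KB
```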
lbi is "local before image". This is for UNDO variables and work files. There are no parameters to tweak related to lbi files.
rcd files are r-code that does not fit within the -mmax buffer. You can increase -mmax or use memory mapped shared libraries (this stuff used to hide in the srt file but it got moved to rcd quite a while ago).
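For instance (value illustrative, not a recommendation), the r-code buffer is sized in the client's startup parameters:

```
# illustrative .pf fragment
-mmax 4096   # maximum r-code execution buffer in KB; r-code that does
             # not fit here spills to the rcd file in the -T directory
```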
rpf files are intermediate profiler files; they can very quickly become *huge*. Really **huge**. Gigabytes in hardly any time at all.
When the profiler ends normally (no errors) or the "write" method is called these are converted to prf files which are much smaller.
> On May 19, 2017, at 4:41 PM, ChUIMonster wrote:
> There are no parameters to control this -- you need to write better queries.
While it’s true you should write better queries, there are a couple of things you can do:
0) set the -TM and -TB client startup parameters to their maximum values and see if that helps
1) compile with XREF and see if your queries are using the indexes you think they should
2) add some indexes to suit the queries you are running. That might have undesirable consequences, as creates and deletes will take slightly longer.
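As a sketch of item 1 (procedure name is hypothetical), compile with the XREF option and inspect the output file:

```
/* Compile and write cross-reference output; the .xrf file contains
   SEARCH lines showing which index each query uses, and WHOLE-INDEX
   entries flagging queries that scan an entire table.               */
COMPILE myproc.p XREF myproc.xrf.
```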
Thank you for the replies. I expected there was a way, much like with -Bt, to allocate memory for the srt processing. My hope was that I just could not find it and that you could point me to the right parameter. I will continue to push back for better queries. In addition to addressing a specific case, I want to make sure I understand what the symptoms are telling me. Correct me if this is not correct: all things being equal with the DB, data, and code, two user sessions will get consistently different response times if the write speed to the srt file is slower for one of them.
Good point, I forgot about -TB and -TM
Additional diagnostics -- use ProTop's table and index info screens to determine which tables and indexes are actually being used at run-time and how much activity there is. The results may surprise you and point the way towards improvements.
I ran a test of a specific procedure with 2 sessions on the same box. The only difference is the TM and TB values.
The session with TM and TB at max values was slower than the session with default values.
It would be nice to be able to use more memory for these processes.
Additionally, as I review this entire post I realize a point I did not make. The point of view on this case is from folks on the network and hardware team. The write activity to the srta files caught their attention. They are asking what they can do to make things better and faster, and also what is going on under the covers. That is why I expanded the question as seen above.
Thank you for input on this topic.
SRT file activity is evidence that you have coding issues that need to be addressed. But while you wait for the programmers to track down and fix those issues, two possibilities for treating the symptoms come to mind:
1) If the data is NO-LOCK reads, then the new -prefetch* family of parameters might help to optimize network traffic, along with -Mm 8192 and implementing jumbo frames.
2) If the -T directory is on a SAN, move it to internal SSD.
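A sketch of item 1 as a client .pf fragment (values are illustrative; verify the details against your release's Startup Command and Parameter Reference before adopting them):

```
# illustrative client networking settings for NO-LOCK read-heavy traffic
-Mm 8192             # larger client/server network message size
-prefetchDelay       # fill the first network message before sending it
-prefetchFactor 100  # fill each message to 100% before sending
-prefetchNumRecs 100 # records grouped per message for prefetched queries
```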
Thank you again. I see it really helps to explain the question better.
I expect the hardware folks to argue that the box has much free RAM, so why isn’t that being used? I take it the answer is: that is just the way it works.
> On May 23, 2017, at 12:52 PM, bremmeyr wrote:
> box has much free ram why isn’t that being used
because the client’s sorting algorithm can use only a limited amount of memory. It has been that way since the dark days of DOS and 640 KB.
I see you got worse results from raising -TB and -TM. Try reducing -TM by half and then by three-quarters while keeping -TB at max.
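For the record, the experiment Gus suggests would look something like this in the client .pf (values are hypothetical; check your release's documented defaults and maximums for both parameters):

```
# illustrative sort-tuning experiment
-TB 31   # sort block size (KB), kept at a high value
-TM 8    # merge number: how many sort blocks are merged per pass
```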
Thank you Gus. Will do as soon as I can.