[Attachment: QueryTest1.p]
I have done some query testing. The physical and dynamic queries each retrieve 100,000 records from a two-table join, roughly of the form FOR EACH table1, EACH table2 WHERE ...
The table below shows the average time in ms. Can somebody explain why the queries are dramatically slower when using -rereadnolock, and why they are also slow when not using -rereadnolock and not using the CACHE option?
| | Dynamic | Physical |
|---|---|---|
| Reread with no cache | 99,000 | 58,000 |
| Reread with cache | 9,000 | 9,000 |
| No reread with no cache | 9,000 | 55,000 |
| No reread with cache | 9,000 | 9,000 |
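For reference, the two variants being compared look roughly like this. This is only a sketch of the test shape, not the actual QueryTest1.p; table1, table2, and the join field keyfield are placeholder names.

```abl
/* Static ("physical") variant: compile-time query, NO-LOCK reads */
DEFINE VARIABLE iCount AS INTEGER NO-UNDO.

FOR EACH table1 NO-LOCK,
    EACH table2 NO-LOCK WHERE table2.keyfield = table1.keyfield:
    iCount = iCount + 1.
    IF iCount >= 100000 THEN LEAVE.   /* iMaxCount */
END.

/* Dynamic variant: the same join built at run time */
DEFINE VARIABLE hQuery AS HANDLE NO-UNDO.
DEFINE VARIABLE hBuf1  AS HANDLE NO-UNDO.
DEFINE VARIABLE hBuf2  AS HANDLE NO-UNDO.

CREATE BUFFER hBuf1 FOR TABLE "table1".
CREATE BUFFER hBuf2 FOR TABLE "table2".
CREATE QUERY hQuery.
hQuery:SET-BUFFERS(hBuf1, hBuf2).
hQuery:QUERY-PREPARE("FOR EACH table1 NO-LOCK, EACH table2 NO-LOCK"
    + " WHERE table2.keyfield = table1.keyfield").
hQuery:QUERY-OPEN().
iCount = 0.
REPEAT:
    hQuery:GET-NEXT().
    IF hQuery:QUERY-OFF-END OR iCount >= 100000 THEN LEAVE.
    iCount = iCount + 1.
END.
hQuery:QUERY-CLOSE().
DELETE OBJECT hQuery.
DELETE OBJECT hBuf1.
DELETE OBJECT hBuf2.
```

The -rereadnolock and CACHE settings are then varied around this same read loop.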
Hi Brian,
I have attached QueryTest1.p. I forgot to mention that the test was done with OpenEdge 10.2B SP8 on Windows.
I am currently running the same test on Linux.
The variable iNumSeq in the code controls the number of iterations, and per iteration iMaxCount sets the number of records to read. What I did was read 15 x 100,000 records.
I ran the test both on my computer and on the server. I did not use shared memory or single-user mode (it is a production environment).
-B 5220000 (8 KB block size)
Total memory is 256 GB, of which 116 GB is available (small database).
With ProTop I did not see any OS reads. We are also using -lruskips 100.
We have a lot of other servers with bigger databases and different configurations; I can run this test in those environments as well.
BTW, by 'physical' I mean a static query, and by 'no cache' I mean that I did not use the CACHE option in the DEFINE QUERY statement. I could also try CACHE 0.
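To be explicit about which option I mean: for a static query it is the CACHE option on DEFINE QUERY, and for a dynamic query it is the CACHE attribute on the query handle. A minimal sketch with placeholder table and field names:

```abl
/* Static query with an explicit results-list cache of 50 entries */
DEFINE QUERY qTest FOR table1, table2 CACHE 50.
OPEN QUERY qTest FOR EACH table1 NO-LOCK,
    EACH table2 NO-LOCK WHERE table2.keyfield = table1.keyfield.

/* Dynamic equivalent: set the CACHE attribute before opening */
DEFINE VARIABLE hQuery AS HANDLE NO-UNDO.
CREATE QUERY hQuery.
/* ... SET-BUFFERS and QUERY-PREPARE as usual ... */
hQuery:CACHE = 50.
```

The cache size of 50 here is arbitrary; in my tests 'with cache' simply means any CACHE > 0.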
I also ran this test a couple of years ago and got similar results. In many cases it seems to be quite important to use CACHE > 0 when using -rereadnolock. Here's one interesting KB article: