DEFINE VARIABLE n AS INTEGER NO-UNDO INITIAL 1000000.
DEFINE VARIABLE i AS INTEGER NO-UNDO.

ETIME(TRUE).
DO i = 1 TO n:
END.
MESSAGE ETIME VIEW-AS ALERT-BOX INFO BUTTONS OK.
Should the loop duration be determined mainly by CPU frequency?
Results do depend on the 32-bit vs 64-bit versions of Progress, but only slightly.
Does a higher frequency mean faster code?
Should the loop duration multiplied by CPU frequency be a constant?
The tests do not confirm this assumption:
Time    MHz    Time*KHz   Host    Progress
2.334   1165   2.72       host5   10.2B
2.494   1165   2.91       host5   11.6
0.158   2395   0.38       host3   10.2B
0.149   2395   0.36       host3   11.6
0.166   3400   0.56       host2   10.2B
0.158   3400   0.54       host2   11.1
0.150   3400   0.51       host4   11.6
0.219   3425   0.75       host1   11.6
CPU frequency in the table is what is reported by the OS.
What are the factors I missed?
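For what it's worth, the Time*KHz column can be recomputed from the first two columns of the table; a small awk sketch (the sample rows below are taken from the table above):

```shell
# Recompute Time*KHz = seconds * MHz / 1000 from the first two columns;
# printf rounds to two decimals, matching the table.
recompute() {
    awk '{ printf "%.2f\n", $1 * $2 / 1000 }'
}

printf '%s\n' '2.334 1165' '0.158 2395' '0.219 3425' | recompute
# 2.72
# 0.38
# 0.75
```

As the output shows, the products range from ~0.4 to ~2.9, so the product is clearly not a constant.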
What about context switching? The busier the CPU, the smaller the quantum may be. I'm not sure how that affects etime but I'm guessing that it probably does.
The boxes were not busy during the tests. BTW, the boxes are a mix of "big boys" (enterprise servers) and PCs (including my notebook). The results were consistent when the test was re-run on the same box (unlike, I guess, the results of UEFA Euro ;-).
Modern CPUs are too smart.
Back in the MASM 5.1 days, running on a 386, I used a NOP loop to measure CPU frequency. It was "pretty accurate" on 386 and 486 CPUs. Once Pentiums came out, the test went nuts.
Since you are in execution-speed-testing territory: if you are running this on hardware with power management (e.g. a laptop running Windows), try your test with the High performance power plan. Depending on the actual hardware, the CPU doesn't always get a chance to "rev up" to full speed.
The High performance power plan did not change the execution time. I don't see why these CPUs can be called "smart". The execution time grows in direct proportion to the number of loop iterations. Execution slows down by only 3% when I run two sessions simultaneously on my laptop (4 logical cores), and it does not change at all on a Unix server with 96 CPUs.
I compared the Progress DO loop with a similar Unix shell loop:
n=10000
time while [ $n -gt 0 ]
do
    n=`expr $n - 1`
done
The Unix loop is much slower, but the ratio of their execution times does not depend on CPU frequency: it is the same on my laptop (2.40 GHz) and on an old Unix box with slow CPUs (1165 MHz). The CPU frequency on the Unix box is half that of the laptop, but its execution times are about 10 times higher. In other words, CPU frequency tells nothing about how fast a box really is.
I think we can measure CPU speed in the number of Progress DO loops per second.
For example, CPUs on my laptop (4*2.40GHz) can do 6 mega DO loops per sec (= mDO/sec ;-).
Modern AIX server (32*3425 MHz) - 4.57 mDO/sec.
Old SUN server (96*1165 MHz) - only 0.4 mDO/sec.
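To make the joke metric reproducible, the arithmetic can be sketched like this (the sample values - 1,000,000 iterations in 0.158 s - are the laptop/host3 numbers from the table above; awk is used only because plain shell arithmetic is integer-only):

```shell
# mDO/sec = (loop iterations / elapsed seconds) / 1e6
mdo_per_sec() {
    awk -v n="$1" -v t="$2" 'BEGIN { printf "%.1f\n", n / t / 1e6 }'
}

mdo_per_sec 1000000 0.158
# 6.3
```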
Some other factors that will affect the result: memory speed, memory interleaving (or not), memory bus speed, bus contention, CPU contention, CPU cache sizes, number of cache levels, and the memory footprint of the loop.
George, does `expr $n - 1` create a subprocess on your system?
fyi, in bash, you can do n=$(( $n - 1 )) which is perhaps quicker.
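For example, the countdown loop from above rewritten with arithmetic expansion - a sketch; the speedup comes from not forking an external `expr` process on every iteration:

```shell
# Same countdown as the `expr` version, but using shell arithmetic
# expansion - no external process is spawned per iteration.
n=10000
while [ $n -gt 0 ]
do
    n=$(( n - 1 ))
done
echo $n
# 0
```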
> some other factors that will affect the result
In the limited tests I was able to run, these factors seemed to affect completely different operations to an equal degree - from parsing db logs to latch lock durations. To predict the time of these operations, it turned out to be enough to compare the speed of the DO loop on the two boxes. The accuracy of the prediction is 10-20%, but that's enough for my tasks.
> does expr $n -1 create subprocess on your system?
At least the loop itself does not run in a subshell: after the 'while do' loop, 'echo $n' shows zero. If the loop ran in a subprocess, I would still see the value assigned before the loop. (The backquoted 'expr' is an external command, though, so it is executed as a new process on each iteration.)
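The check described above can be sketched like this (a `( ... )` subshell is added for contrast - an assignment inside it is lost, while the loop's assignments persist):

```shell
n=10000
( n=999 )              # subshell: this assignment is lost
after_subshell=$n      # still 10000
while [ $n -gt 0 ]
do
    n=$(( n - 1 ))
done
after_loop=$n          # 0: the loop runs in the current shell
echo "$after_subshell $after_loop"
# 10000 0
```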
> fyi, in bash, you can do n=$(( $n - 1 )) which is perhaps quicker.
It's indeed 100 times faster than 'expr'.