In some threads regarding I/O performance there are references to the "Furgal Test".
Is there any information on how to run this test?
Maybe some guidelines about good/bad performance values?
Create a sports2000 DB.
proutil sports2000 -C truncate bi -bi 16384
proutil sports2000 -C bigrow 2 (-zextendSyncIO)
The switch in parentheses is for *nix, I believe.
The test shows how quickly the database files can be written to and extended. You want it to be very quick: anything over 10 seconds indicates potential disk issues worth investigating.
> proutil sports2000 -C bigrow 2 (-zextendSyncIO)
Don't include the parentheses when you run it.
The point of this is to test how the system does with synchronous I/O. This is important as it gives an indication of the disk throughput you can expect for the BI and AI files.
After the first command, the BI file is truncated and the BI cluster size is set to 16 MB. The second command opens the database, causing the BI file to be grown to its initial size of 4 BI clusters, and then the "bigrow 2" directive causes another 2 clusters to be allocated, for a total of 96 MB of writes. Dividing 96 by the test's elapsed time in seconds gives the throughput in MB/second.
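As a worked example of that arithmetic (the elapsed time below is an illustrative value, not a measurement):

```shell
# 6 BI clusters (4 initial + 2 from "bigrow 2") at 16 MB each = 96 MB written.
CLUSTER_MB=16
CLUSTERS=6
ELAPSED=8    # seconds reported by timing the bigrow step (example value only)

# Throughput = total MB written / elapsed seconds.
awk -v mb=$((CLUSTER_MB * CLUSTERS)) -v s="$ELAPSED" \
    'BEGIN { printf "%.1f MB/s\n", mb / s }'
```

At the 10-second guideline mentioned above, the same 96 MB works out to 9.6 MB/s.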
As of OE 11.3, Progress made this command more efficient by doing asynchronous I/O and then flushing buffers at the end, but that had the side effect of impacting this test: "bigrow" no longer does the same type of writes it used to. The undocumented "-zextendSyncIO" switch reverts the command to its pre-11.3 behaviour. As I understand it, though, the switch has no effect on certain platforms such as Linux and Windows, where, due to limitations in those OSes, the command behaves the same after 11.3 as it did before.
The "Furgal" in question is Mike Furgal of Progress. He posts here; if you search you'll find posts from Mike and others talking about this test.
Running more than one bigrow in parallel can give interesting results as well.
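The parallel variant can be scripted with background jobs and `wait`. This is only a sketch of the pattern: `bigrow_one` below is a hypothetical stand-in (here just a `sleep`) for the real `proutil dbname -C bigrow 2 -zextendSyncIO` call, and the `sports$i` names assume you have prepared independent copies of the database, one per concurrent run.

```shell
#!/bin/sh
# Time N bigrow tests running concurrently against separate database copies.
# bigrow_one is a stub standing in for:
#   proutil "$1" -C bigrow 2 -zextendSyncIO
bigrow_one() { sleep 1; }

N=4
start=$(date +%s)
for i in $(seq 1 "$N"); do
  bigrow_one "sports$i" &   # each background job targets its own database copy
done
wait                        # block until all N concurrent runs have finished
elapsed=$(( $(date +%s) - start ))
echo "N=$N elapsed=${elapsed}s"
```

If the storage handles concurrent synchronous writes well, the batch finishes in close to the single-run time; if the runs serialize, the elapsed time approaches N times the single-run time.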
When I was with the Progress MDBA group, we saw a strong correlation between poor (longer than 10 sec) bigrow test results and poor database/application performance. The bigrow test is a /clue/ that something may be wrong. It is not a terribly strenuous I/O test, but the correlation is a fact.
But: it is not 100.000 % correct every time. This means that:
0) Your performance might be just fine, particularly if the workload is not too strenuous.
1) If you are having a problem, further investigation will be required.
2) Often, management and storage weenies steadfastly refuse to believe the bigrow test, saying the test is badly written. Well, it is not.