Latest ATM Benchmark Tool

Posted by Paul Koufalis on 26-Oct-2016 10:01

Where can I find the latest version of the ATM benchmark?

The latest ATM I could find was here on community: ATM 5.1 from one of the Secret Bunker Test posts:

https://community.progress.com/community_groups/openedge_rdbms/w/openedgerdbms/793.secret-bunker-nr-6-atm-benchmark-kit

I'm sure this will be plenty good enough so this is more a general question to know if there is a semi-official home for the ATM tool.

All Replies

Posted by gus bjorklund on 29-Oct-2016 09:34

ATM 5.1 is the most recent version I have released.

Its official home is on my laptop.

I am working on a new version that has optional table partitioning and easier-to-use scripts.

It will be done Real Soon Now (tm)

Posted by George Potemkin on 29-Oct-2016 10:04

Would it be useful to have a common script to run any Progress tests like ATM and readprobe? In both cases there is a test driver procedure that launches the sessions running some procedures: atm[1-or-4].p or readalot.p. Both test drivers have elements of built-in performance monitoring. I have written a script/driver procedure that enhances the readprobe test, mainly by enhancing its performance monitoring. I have not yet checked whether it can be used to run atm*.p. Are there any showstoppers?
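
A minimal sketch of what such a common driver could look like (not the actual ATM or readprobe driver: the database name "atm", the log file names, and the -param convention are made up, and UNIX-style "&" backgrounding is assumed):

    /* driver.p - launch N batch sessions running a configurable
       client procedure such as atm1.p or readalot.p */
    DEFINE VARIABLE cProc     AS CHARACTER NO-UNDO INITIAL "atm1.p".
    DEFINE VARIABLE iSessions AS INTEGER   NO-UNDO INITIAL 50.
    DEFINE VARIABLE i         AS INTEGER   NO-UNDO.

    /* allow overrides, e.g. -param "readalot.p,100" */
    IF NUM-ENTRIES(SESSION:PARAMETER) = 2 THEN
        ASSIGN cProc     = ENTRY(1, SESSION:PARAMETER)
               iSessions = INTEGER(ENTRY(2, SESSION:PARAMETER)).

    DO i = 1 TO iSessions:
        /* one self-service batch client per session, backgrounded */
        OS-COMMAND SILENT VALUE("_progres -b -db atm -p " + cProc
            + " > client" + STRING(i) + ".log 2>&1 &").
    END.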

Posted by ChUIMonster on 30-Oct-2016 09:25

A better way of gathering and tracking multiple test runs, the behavior of the db during testing, and the various parameter and OS-level changes being tested would be very helpful.

I've occasionally worked towards getting something like that created but, so far, I just have some pretty ugly scripts and bits of code that aren't suitable for public consumption in a family friendly forum.

Posted by George Potemkin on 30-Oct-2016 09:40

We are soon going to run tests for a customer using a modified version of readprobe. If everything goes smoothly, I will put the script in the public domain.

Posted by bronco on 31-Oct-2016 02:50

May I suggest using Github for projects like these (MIT license or something)?

Posted by gus bjorklund on 31-Oct-2016 15:56

I’ve been posting the archives on communities when I have a new version ready.

What is github?

Posted by bronco on 01-Nov-2016 04:11

Posted by Rob Fitzpatrick on 27-Oct-2017 13:17

I'm reviewing Gus' presentation from June this year on tuning -nap, -napmax, and -spin.  He noted his testing was done on OE 11.7 and "ATM 7".  

As I'm about to start some benchmarking, I'm wondering whether there is an available version of the ATM benchmark newer than 5.1, or perhaps a new readprobe.

Any news?

Posted by ChUIMonster on 27-Oct-2017 14:06

The newest readprobe is included with ProTop.  http://protop.wss.com

Posted by Rob Fitzpatrick on 27-Oct-2017 14:59

Found it.  Thanks Tom. :)

Posted by George Potemkin on 08-Jan-2018 10:17

Does everybody use atm1.p (one history table) or atm4.p (four history tables) to run the ATM test?

atm1.p seems to be the default option: load.p => config.program = "atm1.p"

Are there any versions of ATM test with more than four history tables?
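
For illustration, a guess at the general shape of the multi-history idea (this is not the actual atm4.p source; the table and field names are hypothetical): spread the inserts over N copies of the history table so that concurrent transactions contend on N separate RM chains instead of one.

    /* pick one of four history copies per insert */
    DEFINE VARIABLE iPick AS INTEGER NO-UNDO.
    iPick = RANDOM(1, 4).   /* or hash on a teller/account id */

    DO TRANSACTION:
        CASE iPick:
            WHEN 1 THEN DO: CREATE history1. ASSIGN history1.tstamp = NOW. END.
            WHEN 2 THEN DO: CREATE history2. ASSIGN history2.tstamp = NOW. END.
            WHEN 3 THEN DO: CREATE history3. ASSIGN history3.tstamp = NOW. END.
            WHEN 4 THEN DO: CREATE history4. ASSIGN history4.tstamp = NOW. END.
        END CASE.
    END.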

Posted by gus bjorklund on 08-Jan-2018 14:22

no, i never made a version with more than 4 history tables. probably would be interesting to try that. i did once try some different table and index allocation algorithms in the RDBMS though. they never made it into official releases because the crash recovery part of those algos did not work and time was up.

partitioning the history might have a similar effect if the partition key matches the current key definition.

Posted by George Potemkin on 08-Jan-2018 15:05

I asked because our customer sent me statistics gathered while running the ATM test, and the bottleneck was EXCLWAIT for the first block on the RM chain of the history table. 100+ of 150 processes were waiting in the buffer lock queue.

DB Buf X Lock: 24368 requests/sec, 6532 waits/sec (27% of requests).

At any snapshot, only 10% of transactions were in Active status, 8% in Phase 2, and the remaining 82% in Begin status.

I guess we would get similar results running the readprobe test with a readalot.p that consists of one line: NEXT-VALUE(Next-Cust-Num). The number of sequence updates per second (= SEQ latch locks / 2) would match the number of records the ATM test creates in the history table, even though the sequence test never creates any transactions.
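
Spelled out, the stripped-down readalot.p described here would be little more than the following (the Next-Cust-Num sequence is assumed to come from the readprobe schema; NEXT-VALUE is a function, so ABL needs the assignment):

    /* readalot.p reduced to a single sequence increment:
       SEQ latch activity, but no transaction is started */
    DEFINE VARIABLE i AS INT64 NO-UNDO.
    i = NEXT-VALUE(Next-Cust-Num).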

That is why I believe the ATM test should use many copies of the history table. Unfortunately I can't verify my "theory" in my own tests.

Posted by mfurgal on 08-Jan-2018 15:14

I had a modified version with multiple history tables, but only because I was impatient.  I put history1 in storage area history1 and history2 in storage area history2 …  Then I had the first run write to history1, the second run to history2, etc.  This got rid of having to delete all the history records between runs.  I seem to recall that the 6-minute run (a 30-second warmup, a 300-second run, and a 30-second rundown) would take 20+ minutes to delete the history rows generated, because the delete was single-threaded.
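
The storage-area split described here would look roughly like this in the structure (.st) file; the area numbers, extent paths, and sizes below are made up:

    d "history1":10,64;512 /db/atm_h1.d1 f 1024000
    d "history2":11,64;512 /db/atm_h2.d1 f 1024000

Each run then writes to the history copy living in its own area, and the delete between runs becomes unnecessary.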

Mike
-- 
Mike Furgal
Director – Database and Pro2 Services
PROGRESS Bravepoint
617-803-2870 


Posted by gus bjorklund on 08-Jan-2018 15:20

I have seen two major bottlenecks in atm and quite a few other applications: one is the rm chain wait that you observed, and the other is the BKEX wait on index blocks while creating index entries. the index waits are more common because there are usually multiple indexes on a table, so you allocate record space once but create index entries multiple times.

while there is lock overhead for sequence increments, it is much less than for space allocation.

transactions that are in begin status have not yet made any database changes, but i guess they could incur rm chain waits while looking for space to make their first database change.

i did see performance gains from using 4 history tables in atm. have not done so for a while though.

Posted by Richard Banville on 08-Jan-2018 15:21

Not to mention that the insertion into the history table would be different for each run, since the rm chain would be so different.  This could affect the TPS count. You should also avoid database extends, since that will also affect the consistency of the 4 runs.

Posted by gus bjorklund on 08-Jan-2018 18:02

mike,

the delete is quite fast when you do it using the “table-scan” option with type ii data areas.
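
A minimal sketch of that delete, assuming the history table lives in a Type II storage area (TABLE-SCAN requires Type II) and a reasonably recent 11.x client:

    /* empty the history table by scanning its blocks directly
       instead of walking an index */
    FOR EACH history EXCLUSIVE-LOCK TABLE-SCAN:
        DELETE history.
    END.

Each FOR EACH iteration here commits as its own small transaction; a production cleanup would likely group deletes into larger sub-transactions to cut commit overhead.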

This thread is closed