Can I use the -Mm in a database startup parameter file? Or do I need to change the setting in the DLC (startup.pf file)?
I prefer putting it in the db.pf. I have never put it in $DLC/startup.pf.
I guess I never really thought about putting it in db.pf.
However, for testing, I can now compare performance. I'm going to use this ABL program to test throughput:
/* Time a NO-LOCK scan of glet and report records read per second. */
DEFINE VARIABLE x-start-time AS INTEGER NO-UNDO.
DEFINE VARIABLE x-elapsed    AS INTEGER NO-UNDO.
DEFINE VARIABLE y            AS INTEGER NO-UNDO.  /* records read */

x-start-time = TIME.

FOR EACH glet WHERE glet.cono = 1 NO-LOCK:
    y = y + 1.
END.

x-elapsed = TIME - x-start-time.

MESSAGE
    "Total glet Searched: " y
    SKIP(1)
    "Total Seconds: " x-elapsed "(" x-elapsed / 60 "minutes )"
    SKIP(1)
    "Average Items read per second: "
    /* MAXIMUM guards against a sub-second run dividing by zero */
    (y / MAXIMUM(x-elapsed, 1))
    VIEW-AS ALERT-BOX.
The -Mm parameter has to be the same for all databases in the session, so I suppose startup.pf is as good a place as any, although for readability it probably makes sense to specify it for each database.
Interestingly, from 11.6 onwards the parameter can be different for each database in the session, and you don't have to specify it at the client side. The client negotiates the correct parameter with each database on connect.
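For example, a minimal sketch (the database name "sports", host "dbserver", port 20000, and paths are made-up values):

# sports.pf - server-side startup parameters
-db /db/sports
-S 20000
-Mm 8192

/* Client-side connect. Before 11.6 the client -Mm must match the broker;
   from 11.6 on the client negotiates -Mm with each database at connect. */
CONNECT "-db sports -H dbserver -S 20000 -Mm 8192".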
> I'm going to use this ABL program to test throughput.
It'd be a good idea to start a few login brokers for the same db, each broker with its own value of -Mm, and connect to them from different sessions. All of them will use a db buffer pool with exactly the same content but with different -Mm values. It works before V11.6 as well.
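Something like this, as a rough sketch (database path, ports, and -Mm values are only examples; -m3 starts a secondary login broker):

proserve /db/sports -S 20001 -Mm 4096
proserve /db/sports -S 20002 -Mm 8192 -m3
proserve /db/sports -S 20003 -Mm 16384 -m3

Each test session then connects through a different port, e.g. CONNECT "-db sports -H dbserver -S 20002 -Mm 8192", so every session reads the same buffer pool but uses a different message size.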
Understand, however, that startup.pf is more of a global change. I do like that. You don't have to make changes in 20 different database environments and change the client startup scripts.
Make sure to test joins and FINDs within the FOR EACH loops to get a more accurate estimate of the performance improvements. Joins and FINDs inside a FOR EACH loop vastly change how many packets and round trips are required; it is just the nature of how OE handles client/server communication.
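For instance, a variation of the test program along these lines (the zarzei join field is hypothetical; substitute whatever relates the tables in your schema):

DEFINE VARIABLE y AS INTEGER NO-UNDO.

FOR EACH glet WHERE glet.cono = 1 NO-LOCK:
    /* Each FIND is its own request to the server, so the loop now pays
       an extra round trip per glet record instead of just streaming
       prefetched rows from the outer FOR EACH. */
    FIND FIRST zarzei WHERE zarzei.cono = glet.cono NO-LOCK NO-ERROR.
    y = y + 1.
END.

MESSAGE "Records read:" y VIEW-AS ALERT-BOX.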
If you are running a semi-modern version of OE, you should look into the prefetch settings. Many times you can get more improvement from adjusting these than from -Mm alone.
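For example, the client .pf (or CONNECT string) can include the prefetch parameters; the values below are only a starting point to experiment with, not recommendations:

# client.pf - sample prefetch tuning (illustrative values)
# -prefetchNumRecs: max records the server packs into one network message
# -prefetchFactor:  fill each message to about this percent before sending
# -prefetchDelay:   fill the first message too, instead of sending it with one record
-prefetchNumRecs 100
-prefetchFactor 90
-prefetchDelay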
Pretty good results. My version of OE is 11.4 Enterprise.
-Mm 4096
Default params:
Table: GLET
Total glet Searched: 109220299
Total Seconds: 1151 (19.18 minutes)
Average Items read per second: 94891.6585577758
-Mm 8192
Table: GLET (-Mm 8192 vs. 4096)
26% improvement in elapsed time: 19 minutes down to 14 minutes
32% increase in reads per second: 94,891 up to 125,109
Total glet Searched: 109220299
Total Seconds: 873 (14.55 minutes)
Average Items read per second: 125109.1626575029
try -Mm 8192 and 16384
For the next test I will try 16384.
Here are the results for -Mm 4096 and -Mm 8192:
-Mm 4096
Default params:
Table: GLET
Total glet Searched: 109220299
Total Seconds: 1151 (19.18 minutes)
Average Items read per second: 94891.6585577758
Table: ZARZEI
Total zarzei Searched: 337563
Total Seconds: 7 (0.12 minutes)
Average Items read per second: 48223.2857142857
-Mm 8192
Table: GLET (-Mm 8192 vs. 4096)
26% improvement in elapsed time: 19 minutes down to 14 minutes
32% increase in reads per second: 94,891 up to 125,109
Total glet Searched: 109220299
Total Seconds: 873 (14.55 minutes)
Average Items read per second: 125109.1626575029
-Mm 4096
Default params:
Table: ZARZEI
Total zarzei Searched: 337563
Total Seconds: 7 (0.12 minutes)
Average Items read per second: 48223.2857142857
-Mm 8192
Table: ZARZEI (-Mm 8192 vs. 4096)
57% improvement in elapsed time: 7 seconds down to 3 seconds
133% increase in reads per second: 48,223 up to 112,521
Table: ZARZEI
Total zarzei Searched: 337563
Total Seconds: 3 (0.05 minutes)
Average Items read per second: 112521
Here are the results for -Mm 16384. Our MTU is only 15K.
-Mm 16384 vs -Mm 8192
Table: GLET (-Mm 16384)
10% improvement in elapsed time vs. the -Mm 4096 baseline: 19 minutes down to 17 minutes
15% decrease in reads per second vs. -Mm 8192: 125,109 down to 108,353
Total glet Searched: 109220299
Total Seconds: 1008 (16.8 minutes)
Average Items read per second: 108353.4712301587
> our MTU is only 15K
"only" 15K ? what NIC and/or infrastructure you have? or did you mean 1500, ie default ?
1500 is what I meant.
Did you warm up the buffer pool before running the tests? Did the database read data from disk during the tests? What was the buffer hit ratio?
On purpose I did not warm up the buffer pool; I shut down and truncated the database for each test run. I did not look at disk reads or the buffer hit ratio. However, on the next run I will warm up the buffer pool and check disk activity and the buffer hit ratio.
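A quick way to check the hit ratio from ABL, as a sketch using the _ActBuffer VST (field names from memory; verify against your version):

/* Buffer hit ratio = (logical reads - OS reads) / logical reads */
FIND FIRST _ActBuffer NO-LOCK.
MESSAGE
    "Logical reads:" _ActBuffer._Buffer-LogicRds SKIP
    "OS (disk) reads:" _ActBuffer._Buffer-OSRds SKIP
    "Hit ratio %:"
    100 * (_ActBuffer._Buffer-LogicRds - _ActBuffer._Buffer-OSRds)
        / MAXIMUM(_ActBuffer._Buffer-LogicRds, 1)
    VIEW-AS ALERT-BOX.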
I know this is not apples to apples; however, there is an increase in performance with the -Mm 8192.
An increase in performance with the bigger -Mm is expected behavior, but the tests should give numbers that describe the performance of the network. BTW, it's better to run the fastest test first. In other words, I'd recommend starting with the large -Mm and then decreasing its value.
I do not understand - what tool should I use? I mean, data coming back from the database to the client is a metric of both the network and the database-to-client path.
Data come from disk to the filesystem cache, then to the db buffer pool, then over the network to the client. If during all the tests you read a limited set of data (you can read the same data over and over again) that fits entirely into the db buffer pool, then you will measure network performance only. Otherwise other uncontrolled factors will affect the results.
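Something along these lines, as a sketch: time several passes over the same data, where the first pass warms the buffer pool from disk and the later passes should be close to a network-only measurement (provided the data fits in -B):

DEFINE VARIABLE i       AS INTEGER NO-UNDO.
DEFINE VARIABLE cnt     AS INTEGER NO-UNDO.
DEFINE VARIABLE t-start AS INTEGER NO-UNDO.

DO i = 1 TO 3:
    cnt = 0.
    t-start = TIME.
    FOR EACH glet WHERE glet.cono = 1 NO-LOCK:
        cnt = cnt + 1.
    END.
    /* Pass 1 includes disk reads; passes 2 and 3 mostly measure
       db-to-client transfer if the data stayed in the buffer pool. */
    MESSAGE "Pass" i ":" cnt "records in" (TIME - t-start) "seconds"
        VIEW-AS ALERT-BOX.
END.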
I understand that; I thought you had another tool that I should use to measure network performance only.
thanks
Chuck
BTW, has anybody tested the performance difference between localhost vs a real hostname?
I have not.
On 2/18/16, 1:54 PM, "George Potemkin" wrote:
> BTW, has anybody tested the performance difference between localhost vs a real hostname?
on a decent operating system, they should be the same.
on a half-decent operating system, localhost should be (much) faster.
On 2/18/16, 12:03 PM, "ctoman" wrote:
> I know this is not apples to apples; however, there is an increase in performance with the -Mm 8192.
in my testing, best results were with -Mm 8192 and ethernet jumbo frames enabled.
but: that was with oe v10.
how do I get the network guy to use jumbo frames?
call him up and tell him you need them. :)