Progress performance on Nimble storage arrays, and arrays in general

Posted by cverbiest on 11-Apr-2016 06:37

Last week one of my colleagues did a test with a Nimble Storage array.

He tested with a database- and CPU-intensive process. It consists of three steps:

  • delete all previous results from the database,
  • calculate new results,
  • write the results to the database.

Reference setup:

In our tests we used the same Windows virtual machine, based on Windows Server 2012 R2. The VM has 4 vCPUs, 16 GB RAM and a 100 GB dynamic VHD. The physical server the VM runs on is an HP DL380 Gen9 machine. The database is started using "proserve <databasename> -S 11000 -B 150000". The Progress version used is 11.4 32-bit.

Case 1: The VHD lives on a Nimble volume, connected through the Microsoft iSCSI initiator over 3 x 1 Gbit copper links.

Duration: 53 minutes

Case 2: The VHD lives on local storage (4 x 600 GB 15K SAS disks in RAID 10, behind an HP P440ar RAID controller with 2 GB cache set to 50% read / 50% write, otherwise default HP settings).

Duration: 36 minutes

Case 3: as case 1 (on Nimble), but with 2 APWs and 1 BIW

Duration: 42 minutes

Case 4: as case 2 (on local RAID 10), but with 2 APWs and 1 BIW

Duration: 19 minutes
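For clarity, cases 3 and 4 use the same broker startup as the reference setup, plus the background writers. A minimal sketch of that startup sequence, assuming a placeholder database name "mydb":

    proserve mydb -S 11000 -B 150000   # broker, as in the reference setup
    probiw mydb                        # start 1 before-image writer (BIW)
    proapw mydb                        # start asynchronous page writer 1 (APW)
    proapw mydb                        # start asynchronous page writer 2 (APW)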

The local RAID 10 disks outperform the Nimble storage array.

Are we overlooking some settings that should be changed for such a setup?

There was an engineer from Nimble present, and he stated that this was a Progress problem because Progress is only single-threaded.

As far as their engineer is concerned, the Nimble array was idling most of the time. It should not be a bottleneck, yet our process is noticeably slower when the system is on that array.

Are we doing something wrong?

All Replies

Posted by Libor Laubacher on 11-Apr-2016 07:05

First, you said nothing about the Nimble setup: RAID and cache.

Let's assume that the disks and RAID on the Nimble are the same. It's bound to be slower, as you have introduced network-attached storage compared to local storage. Please note: I am not saying that SAN/NAS can't be faster than a local drive in some cases; I am working here with emphasis on "assume the RAIDs and disks are the same".

With this assumption I'd expect it to be slower, though maybe not that much slower; it sounds like your iSCSI setup/cabling might have an issue. Is iSCSI offloading on? Software or hardware iSCSI? The Nimble array being idle might mean a problem with the pipe.

I am also not sure what being single-CPU or single-threaded has to do with Nimble, but that's another story.
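A quick way to check the pipe on Windows Server 2012 R2 is the built-in iSCSI PowerShell cmdlets plus mpclaim; a minimal sketch (output columns may vary by setup):

    # Are all sessions connected, and how many connections does each carry?
    Get-IscsiSession | Format-Table TargetNodeAddress, IsConnected, NumberOfConnections
    Get-IscsiConnection | Format-Table TargetAddress, TargetPortNumber
    # MPIO view: which disks are multipathed and which load-balancing policy is active
    mpclaim.exe -s -d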

Posted by Paul Koufalis on 11-Apr-2016 07:05

I think you are mixing up speed and throughput. These results sound normal to me, as the bottleneck is likely not the disk. As for your Nimble engineer: it's not Progress that's single-threaded, it's your benchmark that's designed to be single-threaded. If you want to exercise the Nimble, run 10 of these benchmarks in parallel (see the sketch below).
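One way to run that in parallel (a sketch; "runbench.p" is a hypothetical stand-in for the benchmark procedure and "mydb" a placeholder database name):

    # PowerShell: launch 10 batch clients against the same database at once
    1..10 | ForEach-Object {
        Start-Process cmd -ArgumentList "/c mbpro mydb -p runbench.p -param $_ > bench$_.log"
    }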

Some quick points off the top of my head:

1. The P440ar has 12 Gb/sec of bandwidth, versus your iSCSI, which has at most maybe 3 Gb/sec.

1a. Watch how much data is going down each pipe. Are all three links being used at the same time?

2. You don't mention the size of the table, so we don't know how much is being cached before actually being written.

3. You don't mention the BI cluster size, BI block size or -bibufs. All of these affect your throughput.

4. If you watch the checkpoint screen in promon, look at the last couple of columns, showing sync duration and such. I'd like to know those numbers.

5. Are you running your test client/server?

6. Run Mike Furgal's write test: set the BI cluster size to 256 MB and run bigrow 2. Time how long it takes to create the 1.5 GB BI file (see the sketch after this list).
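A sketch of that write test, assuming a placeholder database "mydb" that has been shut down first (truncate bi requires the database offline); note that truncate bi takes the cluster size in KB:

    proutil mydb -C truncate bi -bi 262144          # 262144 KB = 256 MB BI cluster size
    Measure-Command { proutil mydb -C bigrow 2 }    # 4 default clusters + 2 grown = 6 x 256 MB = 1.5 GB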

Posted by cverbiest on 11-Apr-2016 07:16

Hi Paul, Libor,

thanks for your responses

Unfortunately I was not involved in the test. I have asked for the ATM benchmark to be run, but that has not been done yet.

Nimble's claim is that it's so much faster than local storage, so our management asked us to take a known system with known performance (the job always takes +/- 36 minutes), put everything on the Nimble storage, and see how long it takes.

Posted by gus on 11-Apr-2016 08:18

You have your answer:

36 minutes versus 19 to run the same job, which is very write-intensive.

The Nimble droid's assertion that this is my fault is nonsense.

Ask him if he has any recommendations for improving the Nimble array's performance.
