OpenEdge DB Reduction in Performance on Solaris 11

Posted by jsachs on 03-Feb-2016 05:47

One of our clients has applications that mostly do a lot of record reading on large OpenEdge databases.
Their warehouse application databases total almost 500 GB and their financial application databases around 200 GB.
The OS and database LUNs reside on SAN storage (this applies to both their old Solaris 10 and new Solaris 11 kits), and both kits run the databases in Solaris Containers.

The symptom is that on Solaris 11, with either ZFS or VxFS file systems, reading records from the databases under load is extremely slow. Running the databases on slow internal server disks proves to be faster.
The symptom does not appear when using Solaris 10 on the new kit, whether with VxFS or ZFS.

They are using OpenEdge 10.2B HF36, 32-bit.
The Solaris Versions that are used are:
- Solaris 10 Update 8 with the August 2014 recommended patches
- Solaris 11.2, upgraded to 11.3 with latest available patches.

Is there perhaps anyone running the same configuration successfully who can assist?

All Replies

Posted by James Palmer on 03-Feb-2016 05:58

This isn't a solution to your problem, but you should definitely look to move to 64-bit executables on your server.

Posted by ChUIMonster on 06-Feb-2016 08:18

Even "slow" internal disks are fast compared to a "high performance SAN" (which is an oxymoron...  sort of like "jumbo shrimp" only worse).  Ordinary "SAN storage" is, of course, simply pathetic when it comes to disk performance.

Solaris and ZFS have lots of knobs to twist and provide plenty of rope to hang yourself with.  From your description it sounds like something might have been adjusted in your Solaris 10 configuration but has been overlooked in Solaris 11.  But I have no idea what that might be.  You might try some IO performance tests that take OE out of the equation -- that can often help the storage people to get past the "blame Progress" phase and start to take the problem seriously.
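That can be as simple as the rough sketch below -- the mount point and sizes are placeholders, and the test file should be bigger than RAM so that the ZFS ARC or VxFS cache cannot hide the real disk speed:

    # Write and read back a large file on the SAN-backed filesystem, bypassing OpenEdge.
    # /db/warehouse and the ~64 GB size are examples only -- adjust to the real mount point and RAM size.
    dd if=/dev/zero of=/db/warehouse/iotest.tmp bs=1024k count=65536   # ~64 GB sequential write
    dd if=/db/warehouse/iotest.tmp of=/dev/null bs=1024k               # sequential read back
    rm /db/warehouse/iotest.tmp

    # In another window, watch per-device service times and throughput while the test runs:
    iostat -xn 5
    zpool iostat -v 5    # ZFS pools only

Running the same test on the internal disks and on the SAN LUNs, on both the Solaris 10 and Solaris 11 kits, should show fairly quickly whether the regression sits below the database.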

Sometimes you just have to live with bad IO performance.  In that case you might have some success by tuning Progress to avoid IO as much as possible.  Which means that you will need lots of RAM.

This is where the 32-bit executables are killing you.  They limit you to roughly 2 GB of RAM for the buffer pool (the -B db startup parameter).  In this day and age, and with databases that large, 2 GB is minuscule, and it is no wonder that performance is poor and sensitive to the speed of the underlying disks.
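To put rough numbers on it (assuming an 8 KB database block size, which is an assumption on my part):

    # 32-bit broker:  ~2 GB / 8 KB per buffer = ~262,000 buffers at most for -B
    #                 2 GB against a ~500 GB database caches roughly 0.4% of the data
    # 64-bit broker:  e.g. 64 GB / 8 KB       = ~8,400,000 buffers (-B 8400000)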

Avoiding IO is a good idea in any case but it is really critical when IO performance is sub-par.  The main IO reducing tunable available from the db perspective is -B.  Since you mention that the issue seems to be reading records under load that is probably your best bet.  (If the issue is actually CPU rather than disk IO then -lruskips  might be useful.)  Depending on how the clients doing the reading connect and what they do you might also benefit from tuning some of the client startup parameters, especially -Bt if temp-tables are part of the problem.  If the clients are connecting with -S then you should also look into -Mm and the -prefetch* parameters.  And ensure that all of the Solaris network settings are optimal.  "Jumbo frames" can be helpful.
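Just as an illustration, broker and client startups along these lines -- the values are placeholders rather than recommendations, and -lruskips and the -prefetch* parameters need 10.2B06 or later:

    # Broker startup: buffer pool near the 32-bit ceiling at an 8 KB block size
    # (go far larger once on 64-bit), fewer LRU latch operations, larger
    # client/server message size.  Database path and service name are made up.
    proserve /db/warehouse/wh -B 250000 -lruskips 100 -Mm 8192 -S whsrv

    # Remote client: matching -Mm, a larger temp-table buffer cache, and prefetch tuning.
    pro -db wh -H dbhost -S whsrv -Mm 8192 -Bt 2048 \
        -prefetchNumRecs 100 -prefetchFactor 100 -prefetchDelay

If the readers are self-service (shared memory) sessions rather than -S clients, the -Mm and -prefetch* parts don't apply, and -B plus -lruskips is where the leverage is.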

Posted by ChUIMonster on 07-Feb-2016 09:18

Another possibility is that your connection to the SAN might have multiple adapters or IO paths that are supposed to be load balanced.  It is quite common for that to get messed up and to find that all of your IO is going over just one of your adapters instead of being spread across them.  It's the sort of thing that is often overlooked or misconfigured when rolling out a new system.  
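A couple of Solaris commands can confirm that quickly (this assumes MPxIO and Fibre Channel HBAs; the logical-unit name is a placeholder you would take from the list output):

    mpathadm list lu        # each LUN should report the expected total/operational path count
    mpathadm show lu <logical-unit-name-from-the-list-above>
    fcinfo hba-port         # HBA link state and negotiated speed
    iostat -xn 5            # one saturated controller while the others sit idle
                            # means the load balancing is not doing its job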

This thread is closed