OpenEdge 10.1c on Sun Solaris SPARC 64Bit with ZFS

Posted by 302218 on 04-Jul-2011 04:59

We will be getting a new machine for our OpenEdge database with the ZFS file system (this is the standard configuration in my company).

I heard some time in the past that one should use only fixed-length extents with the ZFS file system because variable-length extents may cause performance problems, especially with the after-image. Does anybody know whether this is true? If it is not, I would prefer variable-length over fixed extents for the after-image.

Thanks in Advance and Best Regards,

Richard.

All Replies

Posted by gus on 06-Jul-2011 10:18

I don't know about ZFS specifically, but variable-size after-image extents work well on other filesystems. There is sometimes a /small/ performance advantage to fixed extents in large installations with lots of database update activity. Unless you have one of those situations, you will not notice the difference.
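For illustration only (the paths and the 500000 KB size are placeholders, not recommendations): variable-length AI extents in a structure (.st) file are listed without a size,

    a /db/ai/mydb.a1
    a /db/ai/mydb.a2

while fixed-length AI extents carry an explicit size in kilobytes:

    a /db/ai/mydb.a1 f 500000
    a /db/ai/mydb.a2 f 500000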

ZFS is an unusual filesystem, different from most others. If you plan to use it for database storage, you should learn how to manage it and make sure you have all the patches from Sun. I have no direct experience with ZFS, but a guy I know who uses it told me the following:

- ZFS raidz is wonderful at spreading reads and writes across multiple disks. This allowed us to get roughly a 3x increase in data-handling performance compared to our previous UFS system with the same number of logical devices.
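As a rough sketch (the pool name and device names below are made up), a five-disk raidz pool is created with something like:

    zpool create datapool raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0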

- ZFS allows the record size to be set at the file-system level, and compression is available for pools or datasets. That is how we have been able to save space on backup storage and the AI archive.

Currently our AI archive dataset is showing a 2.03x compression ratio and the backup storage dataset 1.88x. Performance of these disks is still reasonable.
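For example (the dataset names and values here are only placeholders to show the commands, not tuning advice), record size and compression are set per dataset, and the achieved ratio can be checked afterwards:

    zfs set recordsize=8k datapool/db
    zfs set compression=on datapool/aiarchive
    zfs get compressratio datapool/aiarchive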

- If managed correctly, it is wonderful. Now, I did say "correctly"; what do I mean by that?

1.) Should the raidz zpool require expanding, the expansion should either be done early on, while there is still quite a lot of space left, or it should be done with a sufficient number of logical devices.

If the zpool is expanded with only one or two devices while free space is low, most of the writes concentrate on the new device(s). This will cause major problems for the database in situations where there are lots of changes; just changing one integer value on several hundred records can cause the system to freeze.

The expansion part of the process is great in that it is easy and fast, provided the devices are available to the server.
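A sketch of such an expansion, assuming made-up device names: add a whole new raidz vdev of several disks at once rather than one or two single disks:

    zpool add datapool raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0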

2.) If the zpool's free space goes below 5%, performance will degrade.
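You can keep an eye on this with zpool list (the pool name is a placeholder); the CAP column shows the percentage of space used:

    zpool list datapool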

3.) Do not set the primarycache option of the zpool/dataset to metadata only, or performance will be dreadful.
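To check it and, if necessary, put it back to the default (the dataset name is just an example):

    zfs get primarycache datapool/db
    zfs set primarycache=all datapool/db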

4.) The following options should have the same value in /etc/system: ssd:ssd_max_throttle and zfs:zfs_vdev_max_pending.
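A sketch of matching /etc/system entries; the value 10 is purely illustrative, not a recommendation:

    set ssd:ssd_max_throttle = 10
    set zfs:zfs_vdev_max_pending = 10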

5.) The ZFS ARC (cache) maximum size should be limited with set zfs:zfs_arc_max = in /etc/system, so the ARC will not eat all the free memory.
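For example, to cap the ARC at 4 GB (the size is only an illustration; pick one that suits your memory), the /etc/system entry would look like the following and takes effect after a reboot:

    set zfs:zfs_arc_max = 0x100000000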

6.) Using flash devices (SSDs) as an L2ARC device can improve performance even further. For further information see http://blogs.sun.com/brendan/entry/test
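Adding an SSD as an L2ARC cache device is a one-liner (the device name is made up):

    zpool add datapool cache c3t0d0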

HTH

Posted by 302218 on 06-Jul-2011 23:11

Thank you very much.

That is exactly the answer I was hoping to get.

Thanks and Best Regards,

Richard.

This thread is closed