Size concerns with an index area

Posted by adamk84 on 25-Feb-2010 09:29

Hi,

Recently our database index area exceeded its threshold, and we are looking at what possible solutions are out there to reduce the footprint our indexes are currently using. I read in some of the documentation that idxcompact or idxmove might be an option. Most of our indexes have a factor of 1.8, so my guess is that idxcompact is not going to give the best results.

If we separated it into multiple index areas, would that give the best results? Anyone have any other thoughts?

Running on Unix, with OpenEdge 10.1B

Thanks,


Adam

All Replies

Posted by ChUIMonster on 25-Feb-2010 10:28

It would be helpful to see the dbanalys output for the storage area which concerns you.
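
If a fresh one is needed, the analysis can be produced along these lines -- "mydb" is a placeholder database name, so adjust to taste and check the exact syntax against the 10.1B docs:

proutil mydb -C dbanalys > dbanalys_mydb.txt

The index block summary section of that output is the interesting part here -- blocks, % utilization and factor for each index.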

Posted by adamk84 on 25-Feb-2010 11:29

Just to give you a bit of a background of how this came up, we got the following error message in our db log:

SYSTEM ERROR: Unable to extend database within area General_Index.

Now, the file that I have attached does not really show all the values, as I cleared out data that was not related. Also, an extent was added to get around the initial error. I am just looking for suggestions on what can be done to prevent such errors from occurring in the future.

[Attachment: db_analysis_2010.txt.zip]

Thanks,

Posted by ChUIMonster on 25-Feb-2010 11:45

What are the cluster size and the rows per block for the General_Index storage area? Actually, the relevant .st file entries would be of interest.

The summary claims that there is 1.1GB of record data.  Yet no tables with records are shown?

Is there a LOB stored in this area?

In general, to prevent "unable to extend" issues:

1) Always have a variable length extent (see the sketch after this list).

2) Enable "large files".

3) Have plenty of free disk space on the filesystem

4) Monitor your storage areas and filesystems.  (Shameless plug: http://dbappraise.com)
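
For 1) and 2), roughly -- and this is just a sketch: "mydb" and the path are placeholders, enabling large files requires the database to be shut down, and the extent line needs your area's real area number / records per block / cluster size, so check it against your existing .st first:

proutil mydb -C enablelargefiles

Then an add.st with a single line like the one below (no "f" flag and no size at the end is what makes it variable length), applied with prostrct:

d "General_Index":<area#>,<rpb>;<cluster> /db/mydb/general_index.d3

prostrct add mydb add.st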

Posted by gus on 25-Feb-2010 13:01

As you add data to your tables, the indexes will grow. When indexes are modified (entries added and/or updated), their space utilization normally declines somewhat (though there are some cases where it can decline a lot). Index compact can squeeze out the free space, but you have to have some free space or performance will suffer. When you compact an index it will grow again afterwards if new entries are added. Index compact has a parameter to specify how tightly to pack the index entries. We recommend setting that at 80% (leaving roughly 20% free in each block).
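
For reference, the compact is run per index with proutil, something like the line below. The database, table and index names here are made up, and the trailing number is the target pack percentage; it can be run while the database is online, although it does generate a fair amount of bi/ai activity.

proutil mydb -C idxcompact PUB.order.order-num 80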

Moving indexes among areas doesn't have any significant effect on the amount of space they take up.

In most situations there isn't much you can do to significantly reduce the amount of index space that is needed other than by getting rid of them (assuming you have indexes you don't need) or by deleting data (assuming you have data you don't need). But you need indexes for good query performance and to enforce uniqueness for certain values.

As Tom said, you should make sure you have expansion space by allocating additional extents to the area and by having a variable extent in case the allocated space gets used up.

Posted by adamk84 on 25-Feb-2010 13:19

In the .st file we have

"General_Index":14,64;64 p1_14.d1 f 524288

"General_Index":14,64;64 p1_14.d2 f 524288

And I left out those values because I mostly wanted to know what would generally be considered best practice.

Having the extent set to variable would likely be the best item to consider.

I have used your ProTop on occasion; it can come in nice and handy, especially the GUI version.

Thanks for the rest of your suggestions.

Posted by ChUIMonster on 25-Feb-2010 13:37

You also left out the detailed index analysis.

The reason that I ask is that if you have any indexes that are very small you might be wasting some space on them. With 64 blocks per cluster and 8k blocks, each index is allocated a minimum of one cluster, i.e. 512 KB -- so at worst you could only be wasting about 10MB in this case (21 indexes), and on average you would be wasting roughly 5MB. (But if you had used 512 blocks per cluster it could be 8x worse -- still, that's less than 100MB, so just a small portion of the total.)

So the main open question, to me, is what is the 1.1GB of records?  Is it real?  Or is something else going on?  If it is real then, given the area name, the tables involved probably ought to be moved.  Especially as it is 80% of the space being used in that storage area.
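
And if it does turn out that real tables are sitting in General_Index, tablemove is the usual way to relocate them without a dump and load -- something like the line below, where the table and target area names are made up. Be warned that on a large table it can take a long time and generate a lot of bi/ai traffic, and there is an optional trailing argument to move the table's indexes to another area at the same time.

proutil mydb -C tablemove PUB.ordhist General_Data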

As for "best practice"... I would probably use either one big variable extent with large files enabled or I would create a larger fixed extent and an additional overflow variable extent.

(Thanks for the nice words about ProTop!)

Posted by adamk84 on 25-Feb-2010 14:15

I put in a more detailed outline of the indexes. I have removed indexes from the list that were less than 2 MB in size. I also included a clip from the block summary section for the General Data area. The extents in the data area are also fixed length at 524288. One thing to add is that our database has not been D&L'ed for quite some time, so this may be why the storage space is so large.

When you say "one big variable extent", what size of extent would you recommend? Between 512 and 1024 MB? The same question applies when you say a larger fixed extent.

[Attachment: db_analysis_20102.txt.zip]

Thanks

Posted by ChUIMonster on 25-Feb-2010 14:42

In the scheme of things this isn't "so large".  It's actually fairly small.  So don't get too concerned on that account.

Dumping and loading, or compacting indexes, could reclaim some space and might be beneficial.  I can't say if it would be worth it or not but it could be -- you do have quite a few indexes whose "utilization" is quite low.

If you would rather do fixed extents, and assuming that you do not change anything else (like move the 1.1GB of data records from an area labelled "indexes"), I would create a 2GB fixed extent and a variable overflow extent. Why 2GB? I like to have plenty of room for growth -- so I usually pick a power-of-2 based size that will result in the storage area being somewhere between half and three-quarters full. And I like to keep the number of extents small, so I try to use a few big extents rather than many small ones.
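
A rough sketch of what that could look like in an add.st -- the paths and extent numbers are made up, sizes in a .st are in KB so 2097152 is 2GB, and the last line has no size, which makes it the variable overflow extent:

d "General_Index":14,64;64 /db/mydb/p1_14.d3 f 2097152

d "General_Index":14,64;64 /db/mydb/p1_14.d4

Applied with prostrct add, same as any other extent addition.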

By "one big variable extent" I mean just have a single extent which is variable and let it grow to whatever size it wants to be.  With large files enabled you don't need to worry about adding extents -- only about making sure that the filesystem doen't fill up.  It is less to monitor and manage -- which can be an advantage if that isn't your forte.  Or you could visit http://dbappraise.com and see all about letting me do it for you -- because that is my forte

This thread is closed