DB conversion to SAII - Advice on tables that have a lot of creates and deletes

Posted by Steven Peeters on 13-Feb-2018 01:35

We will soon be converting our "all-in-schema-area" Type I storage area database to Type II storage areas, with the tables and indexes grouped per RPB value (as calculated from dbanalys output).

Most tables in this DB are fairly ordinary, except for two tables that we use for our custom-made CDC system.
These tables will have a lot of creates and deletes in a short amount of time, as we offload the records to an external DB.

Now that we're in the process of examining the conversion, I was wondering whether I should place these tables in a separate area and assign them a specific RPB or cluster value.

Regards,

Steven

All Replies

Posted by Gareth Vincent on 13-Feb-2018 02:33

Hi Steven, the rule of thumb is to separate all high-processing tables into their own storage areas, each with its own index area.  The RPB will vary per table, so make sure you allocate tables accordingly.  I usually stick to a cluster size of 512 for large tables and 8 for everything else.
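As a sketch, a hot table and its indexes in their own Type II areas might look like this in the .st file (area names, numbers and extent paths are illustrative, not from any real database; the format is name:area-number,RPB;cluster-size path):

d "HotTable_Data":100,64;512 db_100.d1
#
d "HotTable_IDX":101,64;8 db_101.d1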

For all other tables I would group them based on their RPB and cluster size.  

Posted by James Palmer on 13-Feb-2018 03:15

I've spent significant time bashing this out for our application recently.

The trouble with separating out tables based on their RPB setting is working out what to do when the use of said table inevitably changes. Also, it's a lot of hard work setting up all the different areas for all the piddly little tables that are hardly used.

In the end, I've got the following structure:

Data: Cluster 8, RPB 512 -> For most of the tables

Index: Cluster 8, RPB 64 -> For the indexes of the above

OrderData: Cluster 8, RPB 512 -> For 8-10 heavily used tables

OrderIndexes: Cluster 8, RPB 64 -> For high use indexes

Transactions: Cluster 512, RPB 512 -> For a single table that is 60% of the database

TransIndex: Cluster 8, RPB 64 -> For the indexes of the above
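In .st syntax that layout would look roughly like the lines below (area numbers and extent names are my own illustrative choices; the RPB is written as 256, the maximum valid value, where the list above says 512):

d "Data":100,256;8 data_100.d1
#
d "Index":101,64;8 index_101.d1
#
d "OrderData":102,256;8 data_102.d1
#
d "OrderIndexes":103,64;8 index_103.d1
#
d "Transactions":104,256;512 data_104.d1
#
d "TransIndex":105,64;8 index_105.d1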

There are probably some little tweaks I'd make to the above if I was doing it again, but overall it's a very simple structure, and it's also very simple for architects to decide where to put new structures.

IMO the gains from overcomplicating this aren't sufficient in most cases to make it worthwhile.

My 2p.

Posted by Gareth Vincent on 13-Feb-2018 03:55

I agree, it can get overcomplicated, especially when dealing with a database with over a few hundred tables.  Below is an extract of the .st file I'm using for one of our DBs (over 500 tables).  Tables 1-15 have been highlighted as extremely high-processing tables.

For the others I've grouped on a per-RPB/cluster basis (bottom of the .st file).  I've also added a Misc storage area for the rats and mice, and a separate area for BLOBs.

This takes some careful planning and like James mentioned can change over time.

d "Misc Data 128":9,128;8 db_9.d1
#
d "Misc_IDX_Area":10,64;8 db_10.d1
#
d "table1":11,64;512 db_11.d1
#
d "table1_IDX_Area":12,64;8 db_12.d1
#
d "table2_Area":13,64;512 db_13.d1
#
d "table2_IDX_Area":14,64;8 db_14.d1
#
d "table3_Area":15,64;512 db_15.d1
#
d "table3_IDX_Area":16,64;8 db_16.d1
#
d "table4_Area":17,32;512 db_17.d1
#
d "table4_IDX_Area":18,64;8 db_18.d1
#
d "table5_Area":19,128;512 db_19.d1
#
d "table5_IDX_Area":20,64;64 db_20.d1
#
d "table6_Area":21,16;512 db_21.d1
#
d "table6_IDX_Area":22,64;8 db_22.d1
#
d "table7_Area":23,128;512 db_23.d1
#
d "table7_IDX_Area":24,64;8 db_24.d1
#
d "table8_Area":25,64;512 db_25.d1
#
d "table8_IDX_Area":26,64;8 db_26.d1
#
d "table9_Area":27,64;512 db_27.d1
#
d "table9_IDX_Area":28,64;8 db_28.d1
#
d "table10_Area":29,32;512 db_29.d1
#
d "table10_IDX_Area":30,64;8 db_30.d1
#
d "table11_Area":31,128;512 db_31.d1
#
d "table12_IDX_Area":32,64;8 db_32.d1
#
d "table13_Area":33,64;512 db_33.d1
#
d "table13_IDX_Area":34,64;8 db_34.d1
#
d "table14_Area":35,32;512 db_35.d1
#
d "table14_IDX_Area":36,64;8 db_36.d1
#
d "table15_Area":37,32;512 db_37.d1
#
d "table15_IDX_Area":38,64;8 db_38.d1
#
d "128-512C":39,128;512 db_39.d1
#
d "16-8C":40,16;8 db_40.d1
#
d "16-512C":41,16;512 db_41.d1
#
d "32-8C":42,32;8 db_42.d1
#
d "32-512C":43,32;512 db_43.d1
#
d "64-512C":44,64;512 db_44.d1
#
d "64-8C":45,64;8 db_45.d1
#
d "256-8C":46,256;8 db_46.d1
#
d "256-512C":47,256;512 db_47.d1
#
d "BLOB_area":48,1;512 db_48.d1

Posted by James Palmer on 13-Feb-2018 03:57

And of course, if you start adding multi-tenancy to the database it can get even more complicated in no time.

Posted by Rob Fitzpatrick on 13-Feb-2018 07:15

> Data: Cluster 8, RPB 512 -> For most of the tables

256?  ;)

Posted by cjbrandt on 13-Feb-2018 08:06

Start your table area numbers higher - at 100, say - so the AI extents can use area numbers 7 and up.  Then the AI extents aren't scattered through the structure file.

Make table area numbers even and index area numbers odd.  It helps when you are scanning through the db log and see a data error: you know immediately whether it impacts an area with table data or just an index.
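Following that convention, the first data/index pair might be numbered like this (table name, RPB/cluster values and extent paths are illustrative):

d "Customer_Data":100,128;512 db_100.d1
#
d "Customer_IDX":101,64;8 db_101.d1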

Make a list of gifts you will accept, because moving tables from schema area to separate SAII is going to make you very popular in your company.

Posted by ChUIMonster on 15-Feb-2018 05:03

Tables with lots of creates aren't a problem per se.  But tables that get a lot of updates after the create, and where the records tend to grow over time, can need some special consideration.  Those tables might fragment, and there could be issues with long "rm chains".  To address those sorts of issues you may want to consider adjusting the create and toss limits.
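For reference, create and toss limits can be set per table with proutil. The qualifier names below exist in recent OpenEdge releases for Type II areas, but the database name, table name and values are illustrative - check the syntax against your version's documentation:

proutil mydb -C settablecreatelimit customer 300
proutil mydb -C settabletosslimit customer 500

The create limit is the free space reserved in a block at record creation to allow for later growth; the toss limit is the free-space threshold below which a block is taken off the RM chain.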

Posted by George Potemkin on 15-Feb-2018 05:14

> you may want to consider adjusting the create and toss limits.

Only if the records are growing over time. If the updates change the record size up and down, then the create/toss limits will be unable to prevent long RM chains. We need tunable parameters to control the changes at the tail of RM chains. Unfortunately, Progress uses hard-coded parameters in this area.

Posted by gus bjorklund on 15-Feb-2018 12:48

> On Feb 15, 2018, at 6:16 AM, George Potemkin wrote:

>

> parameters to control the changes at the tail of RM chains.

what parameters do you mean?

Posted by George Potemkin on 15-Feb-2018 13:14

> what parameters do you mean?

When we update a record in a block that is not on the RM chain, the block can be added either to the head or to the tail of the chain. As far as I understand, it depends on the number of free slots and on the free space in the block.

I compared the lists of the blocks on a very long RM chain reported by a daily chanalys. So it was possible to see which blocks were moved from the chain's head to its tail, and which blocks that were not on the chain were added at the tail. The latter were the main reason for the chain's growth. The create/toss limits were unable to shrink the chain.

Posted by Dmitri Levin on 28-Feb-2018 17:19

The create limit affects fragmentation.

The toss limit affects the RM chain.

Before the tests we were under the impression that setting a "correct" toss limit would cure a long RM chain, but that did not happen. By "correct" I mean a toss limit equal to the average record size, and I even tried setting it 10-20% bigger than that.

Some applications, as Tom mentioned above, create a record and then update it shortly after the create. So at create time the block goes onto the RM chain, a long one. Shortly afterwards the record grows to the point where the block no longer belongs there, but because the RM chain is millions of blocks long, it stays there a very long time. That happens because blocks are removed only from the head of the RM chain. By the time that block is removed from the head of the chain, the RM chain will have had millions of other blocks added at the tail, which in turn have grown to the point where they no longer belong there either.

The above could be considered as bad application design. True.
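The head-only removal Dmitri describes can be illustrated with a toy model. This is pure illustration, not Progress internals: the RM chain is modelled as a FIFO of block numbers, blocks are appended at the tail, and a block can only leave the chain when it reaches the head, even if its free space dropped below the toss limit long ago.

```python
from collections import deque

# Toy model of an RM chain (illustration only, not Progress internals).
# Blocks are appended at the tail and examined/removed only at the head,
# so a block whose free space later drops below the toss limit still
# lingers on the chain until everything ahead of it drains.

TOSS_LIMIT = 100  # bytes of free space below which a block should leave the chain

def drain_until_usable(chain, free_space):
    """Pop head blocks until one with usable free space is found.

    Returns (block, blocks_examined). Stale blocks (free space below
    the toss limit) are simply discarded as they reach the head.
    """
    examined = 0
    while chain:
        block = chain.popleft()
        examined += 1
        if free_space[block] >= TOSS_LIMIT:
            return block, examined
    return None, examined

chain = deque()
free_space = {}

# 1) Many blocks are created with plenty of free space and chained.
for b in range(1000):
    free_space[b] = 500
    chain.append(b)

# 2) Shortly after create, updates grow the records in almost every
#    block, so those blocks no longer "belong" on the chain - but they
#    stay, because removal happens only at the head.
for b in range(999):
    free_space[b] = 10  # grown past the toss limit

# 3) A new record needs space: we must wade through 999 stale head
#    blocks before finding the one block that still has room.
block, examined = drain_until_usable(chain, free_space)
print(block, examined)  # -> 999 1000
```

At scale, with millions of blocks on the chain, step 3 is the expensive part: the stale blocks are only cleaned out as they happen to reach the head.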

This thread is closed