proutil increaseto

Posted by Rob Fitzpatrick on 31-May-2019 19:01

I'm going over my notes from NEXT today and thinking about the discussions around the road to high availability in 12.x, and how proutil increaseto plays a part in that.  In short, if some database restarts are done today to increase startup params, and if in the future those params can be adjusted online, then fewer restarts should be required.

But I think that assumes an increased param's data structure is in the same state as it would be had it been that size since the database was started.  Examples:

  • increaseto -B n increases the size of the buffer pool but does not also increase the size of the buffer hash table.  A database whose buffer pool was increased online by a large amount could experience greater BHT latch contention than if it had been started at the larger size.  The same applies to increaseto -B2 n.
  • similarly, increaseto -L n increases the size of the lock table but does not also increase the size of the lock hash table to the size it would have been had the database been started with the larger -L value.
  • increaseto for cache-related parameters (omsize, mtpmsize, cdcsize) only increases the size of the secondary caches, which doesn't yield the same benefit as a database start that makes the primary cache that size.
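The first two bullets can be sketched with a toy bucket-and-chain hash table (a hypothetical Python illustration; the bucket and entry counts are made-up numbers, not actual OpenEdge sizing). If the entry count grows while the bucket count stays fixed, the average chain a lookup must scan under the latch grows proportionally:

```python
def chain_stats(num_entries, num_buckets):
    """Hash entries into a fixed set of bucket chains and report the
    (average, maximum) chain length a lookup would have to scan."""
    buckets = [0] * num_buckets
    for key in range(num_entries):
        buckets[hash(key) % num_buckets] += 1
    occupied = [count for count in buckets if count]
    return sum(occupied) / len(occupied), max(occupied)

# Started large: 100,000 buffers hashed into a matching-size table.
# Grown online: same 10,007 buckets now chain twice as many entries,
# so every BHT lookup does roughly twice the work while latched.
```

Doubling the buffer pool without growing the BHT roughly doubles the time spent per lookup inside the latch, which is where the extra contention comes from.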

Just wondering whether there are roadmap plans for proutil increaseto, not just to add the ability to use it with more parameters, but to make it a better substitute for a database restart.

All Replies

Posted by Rob Fitzpatrick on 31-May-2019 19:39

Sorry, wrong group.  Please move this thread to RDBMS.

Posted by Richard Banville on 31-May-2019 19:42

These have been considered but are on the "back burner" for now.  

The changes to a hash table would be quite expensive.  First, the entire hash table would need to be latched and all entries would need to be re-hashed, since our hashing technique is not unique; it is bucket and chain.  It is also difficult to manage the case where a pointer to a hashed entry already exists in running code.  Updating the size of a hash table is a very delicate and risky thing to do.  Not that we shouldn't ever do it, but I am saying that in the meantime we are trading off avoiding very expensive I/O (via an increase of the buffer pool) against a much less expensive hash lookup and possible collision.  That cost ratio is tiny.
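Why the resize itself is expensive can be seen in a minimal bucket-and-chain sketch (illustrative Python, not OpenEdge code; the class and method names are invented): an entry's home bucket is `hash % nbuckets`, so changing the bucket count invalidates every entry's position at once, and the resize must quiesce the whole table while it re-hashes every chain.

```python
class BucketChainTable:
    """Toy bucket-and-chain hash table: non-unique hashing, so
    collisions land on the same chain and lookups walk it."""

    def __init__(self, nbuckets):
        self.buckets = [[] for _ in range(nbuckets)]

    def insert(self, key, value):
        self.buckets[hash(key) % len(self.buckets)].append((key, value))

    def lookup(self, key):
        # Walk the chain; a longer chain means a longer latched scan.
        for k, v in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return v
        return None

    def resize(self, new_nbuckets):
        # In a live database this step would need the entire table
        # latched exclusively: every entry's home bucket changes, and
        # any pointer into the old chains becomes stale.
        old_buckets = self.buckets
        self.buckets = [[] for _ in range(new_nbuckets)]
        for chain in old_buckets:          # O(n): touch every entry
            for key, value in chain:
                self.insert(key, value)
```

By contrast, the cost the larger buffer pool avoids is disk I/O, which dwarfs an extra chain comparison or two, hence the "ratio is tiny" point above.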

We do need to come up with a practical way to do this but it is much lower on the priority list of things to do for HA.

We do have a plan to improve the om cache by migrating entries from the secondary to the primary cache if the primary cache grows.

Entries then could be found in the primary even if there are duplicates in the secondary.  Once the primary is updated, the secondary could be cleaned up of the duplicate entries.
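That migration idea might look roughly like this (a hypothetical Python sketch; the `primary`/`secondary` dicts and function names stand in for the om caches and are not actual OpenEdge internals): a secondary-cache hit promotes the entry into the grown primary, duplicates are tolerated for a while, and a later pass cleans them out of the secondary.

```python
def promote(key, primary, secondary, primary_capacity):
    """Look up key, preferring the primary cache.  On a secondary
    hit, copy the entry into the grown primary so future lookups
    hit the faster cache; the duplicate is tolerated until cleanup."""
    if key in primary:
        return primary[key]
    if key in secondary:
        value = secondary[key]
        if len(primary) < primary_capacity:
            primary[key] = value   # entry now exists in both caches
        return value
    return None

def cleanup_duplicates(primary, secondary):
    """Remove secondary entries already promoted to the primary."""
    for key in list(secondary):
        if key in primary:
            del secondary[key]
```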
