Buffer the whole database

Posted by peggycole on 09-Sep-2014 10:30

Hi,

One of our customers asked us why we don't put the whole database in memory at startup. His idea is that when the whole database is buffered, his application would be many times faster. We're talking about a database of roughly 35 GB.

The application works with OE 10.2B 32-bit. If we decide to do the buffering, we would have to move to OE 11 64-bit because the _mprosrv processes will become bigger than 4 GB. The migration to 64-bit would take us some time because of the multiple OCXs used in the application.

So before we even try to set this up: do you think this is a good idea?

IMO it isn't, but I want to give it a try if the customer asks for it, even if only to prove it doesn't work. But because of the extra work it will cost considering the OCXs, I'm not very keen to start this exercise if we know in advance it will give us no gain.

We do have some performance issues in the application, and I think it's better to fix them instead of doing some hocus pocus to cover them up. But I want some decent arguments for that ...

Another point against making the buffer as big as the database:
The buffer has to fill itself initially. We could write queries on all the different tables and indexes after a restart, but that just doesn't feel right and I don't want to do it. Any comments?


Kind regards,

Peggy

Posted by ChUIMonster on 09-Sep-2014 10:44

Yes, you can put the whole DB in memory.

You would need to run the db client/server rather than shared memory, but the db could run 64-bit while the clients remain 32-bit.  That would save you the time required to get your OCXs and such changed.  And since this sounds like Windows, there is a pretty good chance that you are already set up that way.

Run a "dbanalys" to load the whole db into memory without actually writing all of those queries ;)
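A minimal sketch of that from the command line, assuming a database named `mydb` (a placeholder; point it at your actual database path):

```shell
# Read every table and index block once, which pulls the database
# through the buffer pool / OS cache and warms it after a restart.
proutil mydb -C dbanalys > dbanalys.out
```

The analysis output in `dbanalys.out` is a useful side effect: it shows record counts and scatter per table.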

You can also almost certainly get just as good performance with some pretty basic tuning.

 



-- 
Tom Bascom
603 396 4886
tom@greenfieldtech.com

All Replies

Posted by Mike Fechner on 09-Sep-2014 10:34

How are clients connected to the DB?

AppServer?
Self service?
TCP?

Unless clients are self service, a 32 bit client can connect to a 64 bit DB or AppServer.


Posted by peggycole on 09-Sep-2014 10:36

Hi Mike,

The clients make the connection to the database via TCP.


Posted by Mike Fechner on 09-Sep-2014 10:57

Hi Peggy, as you know Tom is in the wider neighborhood in November anyway and is always open for consultancy engagements after PUG Challenge conferences.

Posted by Mike Fechner on 09-Sep-2014 10:57

So continuing to use the 32 bit client talking to a 64 bit server would not be a problem.
 

Posted by TheMadDBA on 09-Sep-2014 10:58

This is going to be a great example of when theory meets the real world :)

Yes, you could load everything into -B and/or the OS buffer cache when the db is started.  Like Tom said though, any performance problems are more likely due to the code being run client/server, the code itself, the hardware config, and the DB settings.

Posted by Thomas Mercer-Hursh on 09-Sep-2014 11:10

Note that if you have the database reasonably tuned now, you are probably fulfilling 98% of your requests from the -B buffer pool anyway, so any improvement of getting the rest of the database into the buffer is likely to be small.  If you don't have it reasonably tuned, then you can make dramatic improvements without putting the whole DB in there.  And, of course, to get the whole thing in -B means having to read every record, which means start up impact.
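One quick way to check how close you already are to that 98% is the _ActBuffer VST. A minimal ABL sketch (field names per the standard VST schema; run it connected to the database):

```
/* Sketch: estimate the buffer hit ratio from the _ActBuffer VST.      */
/* A high ratio means most reads are already served from -B.           */
FOR EACH _ActBuffer NO-LOCK:
    DISPLAY _ActBuffer._Buffer-LogicRds LABEL "Logical reads"
            _ActBuffer._Buffer-OSRds    LABEL "OS reads"
            (1.0 - _ActBuffer._Buffer-OSRds
                   / MAXIMUM(_ActBuffer._Buffer-LogicRds, 1)) * 100
                                        LABEL "Hit %".
END.
```

If the hit ratio is already in the high nineties, growing -B to hold the whole database buys very little.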

Posted by ChUIMonster on 09-Sep-2014 12:10

Thanks Mike!

Yes, that's true -- I am always happy to help in a professional capacity
if something more involved than a simple forum posting is required.


Posted by Rob Fitzpatrick on 09-Sep-2014 20:19

I wouldn't worry overly about warming up the cache.  Database startup isn't something that happens terribly frequently; at least, it shouldn't.  If you really want to, as Tom said you can run a DB analysis which reads the whole DB through the buffer pool.  It can also be run on a per-area basis.

The tuning challenge is not necessarily to get the entire DB memory-resident, however.  Many applications have large transaction history tables where the data is not read much if at all once it reaches a given age.  Caching that data doesn't help you much.  What you want is to have the data that is frequently accessed by the application (its "working set", if you will) in the buffer pool, to reduce your physical I/O.  Use of the Alternate Buffer Pool can also help with that, allowing you to relieve memory pressure on frequently-accessed data.
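As a sketch of that last point, assuming placeholder names and sizes (pick areas and buffer counts to suit your own schema and memory budget):

```shell
# Hypothetical example: assign a storage area to the Alternate Buffer Pool,
# then size the primary (-B) and alternate (-B2) pools at startup.
proutil mydb -C enableB2 "Order Area"
proserve mydb -B 100000 -B2 50000
```

Putting small, hot tables and indexes in -B2 keeps them from being evicted by large sequential scans of history data in the primary pool.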

Also, you don't have to move to 11.x.  I'm not discouraging it at all but you can get a 64-bit RDBMS license in 10.2B.

I agree with your inclination not to mask application problems by throwing hardware at them.  A good place to start with application perf issues is looking at your logical I/O.  Configure your -tablerangesize and -indexrangesize database startup parameters as appropriate for your schema (higher than your highest application table and index numbers respectively) and then query the _TableStat and _IndexStat VSTs to see if you have any excessive read totals.  If so then you can drill down by user, looking at the _UserTableStat and _UserIndexStat data to determine which users are responsible for the reads.  From there, you have a variety of approaches available to find the offending code.  Good luck.  BTW ProTop is a very good tool for seeing a real-time view of the VST data during application use.
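A minimal ABL sketch of that first drill-down, assuming -tablerangesize is set high enough to cover all user tables:

```
/* Sketch: show the ten most-read tables since startup via _TableStat. */
DEFINE VARIABLE i AS INTEGER NO-UNDO.

FOR EACH _TableStat NO-LOCK
    BY _TableStat._TableStat-Read DESCENDING:

    /* Map the stat record back to its table name in the schema. */
    FIND FIRST _File NO-LOCK
         WHERE _File._File-Number = _TableStat._TableStat-Id NO-ERROR.
    IF NOT AVAILABLE _File THEN NEXT.

    DISPLAY _File._File-Name        FORMAT "x(28)" LABEL "Table"
            _TableStat._TableStat-Read             LABEL "Reads".

    i = i + 1.
    IF i >= 10 THEN LEAVE.
END.
```

A table with read counts wildly out of proportion to its size is usually a sign of a missing index or a WHERE clause the AVM cannot bracket.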

P.S. I second the suggestion about hiring a consultant.  It's easy to burn a lot of time (and money) trying in vain to solve a problem whose root cause might be much more obvious to someone who does tuning and troubleshooting for a living.

Posted by peggycole on 11-Sep-2014 01:47

Thanks for the comments and suggestions.

It feels a bit dumb that we didn't think about setting up a 64-bit server with 32-bit clients ... The other suggestions are in line with what I was thinking; we're going to take care of that!

This thread is closed