Hey Guys,
I'm working on a project to restructure the databases in an environment with 34 DBs.
But after doing a dump and load, my databases are consuming a lot of memory on the machine, swapping, and also generating more I/O (with the same startup parameters).
My question is:
- Could the change in records per block be causing this issue?
- I was previously using 256 records per block, and now I'm using 128 records per block for the areas that contain a lot of tables without a standard record size (see the .st sketch below).
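For context, the records-per-block value is set per area in the structure description (.st) file, so the change from 256 to 128 lives there. A minimal sketch, with hypothetical area names, numbers, and paths:

# Hypothetical .st fragment; area names, numbers, and paths are examples only.
# Format: d "AreaName":<area#>,<records per block>;<blocks per cluster> <path>
d "Order Data":8,128;64 /db/prod/sports_8.d1
d "Order History":9,256;64 /db/prod/sports_9.d1

If you want to confirm what each area actually ended up with after the load, prostrct list <dbname> will generate a .st file from the live database.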
I suspect the root cause is a poor choice of the indexes used for the dump.
Did you change the block size of your databases? If you went from 4 KB to 8 KB blocks, then the same -B value would allocate twice as much memory.
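To put rough numbers on that (the -B value below is hypothetical, just to show the arithmetic):

# Approximate buffer pool memory = -B (number of buffers) x database block size
# -B 500000 with 4 KB blocks  ->  about 2 GB of shared memory
# -B 500000 with 8 KB blocks  ->  about 4 GB of shared memory (same -B, double the RAM)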
No @cjbrandt, I didn't change the block size (it's still 8 KB).