Dump/Load Millions of records

Posted by LegacyUser on 17-Aug-2001 20:59

Hi,

I need to dump and load around 15,000,000 records, and I want to know the fastest way to do it.

I have Progress 8.3C on Unix.

Thanks

All Replies

Posted by LegacyUser on 23-Aug-2001 08:42

Hi Oscar,

It really depends on a number of factors;

"Oscar " wrote:

>

>Hi,

>

I need dump and load around 15,000,000 records and i

>want to know what is the way fastest.

>

>I have progress 8.3c on unix

>

>Thanks

Posted by LegacyUser on 23-Aug-2001 08:45

Hi, I'll get it all typed this time!

It really depends on a number of factors: how many tables, how many processors, how much data, how much free disk space, etc.

When we dump and load a 4 GB database, we use 16 scripts to dump the data and then use the bulk loader followed by an index rebuild.
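For a concrete picture, the overall shape is roughly the sketch below. It is only an outline: the database name mydb, the script names, and the paths are placeholders, and the bulk loader assumes you have generated a description file (mydb.fd) from the Data Dictionary and that the target database is offline.

    # Dump phase: run the dump scripts in the background, one group of
    # tables per script (4 shown here for brevity; we use 16)
    for n in 01 02 03 04
    do
        sh dump_group_$n.sh &   # each script EXPORTs its tables to .d files
    done
    wait                        # block until every dump script has finished

    # Load phase: bulk-load the .d files into the target database, then
    # rebuild all indexes in one pass (the bulk loader deactivates them)
    proutil mydb -C bulkload mydb.fd
    proutil mydb -C idxbuild all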

"Oscar " wrote:

>

>Hi,

>

I need dump and load around 15,000,000 records and i

>want to know what is the way fastest.

>

>I have progress 8.3c on unix

>

>Thanks

Posted by LegacyUser on 29-Aug-2001 17:31

Thanks Keith,

I did the dump/load much as you said, but now my problem is the idxbuild. The problem is not the space, it is the time: I have one table with around 50 million records, and the index rebuild has been running for around 40 hours and is still not finished.
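The usual knobs for a slow idxbuild are the sort and merge parameters and where the sort file lives. A sketch with purely illustrative values; mydb is a placeholder, and mydb.srt would be a file listing sort-file locations:

    # Illustrative values only: bigger sort blocks (-TB), a higher merge
    # factor (-TM), more buffers (-B), and sort files spread across spare
    # disks (listed in mydb.srt) can shorten a long idxbuild considerably
    proutil mydb -C idxbuild all -TB 31 -TM 32 -B 512 -SS mydb.srt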

Thanks.

"Keith" wrote:

>

>Hi, i'll get it all typed this time !

>

>It really depends on a number of factors;

>

>how many tables, how many processors, how much data, how

>much free disk space, etc.

>

>When we dump & load 4 gig db we use 16 scripts to dump

>the data and then use a bulkloader followed by index

>rebuild.

>

>

>

>"Oscar " wrote:

>>

>>Hi,

>>

>> I need dump and load around 15,000,000 records and i

>>want to know what is the way fastest.

>>

>>I have progress 8.3c on unix

>>

>>Thanks

>

Posted by LegacyUser on 12-Nov-2001 19:21

Hi Oscar,

In search of a faster way to dump and reload a Progress database, I used binary dump/load on an 869 MB test db to verify and benchmark whether it is really faster than the regular dictionary dump/load. Based on the results (please see the tally below), the binary dump/load process is about 300% faster than the dictionary dump/load overall. However, the problem is that the resulting database is bigger.

Original database size: 869 MB

                Dictionary     Binary
  Dump          39 min         15 min
  Load          4 hr           12 min
  Indexing      1 hr 5 min     1 hr 5 min (same)
  Final size    800 MB         910 MB
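For reference, the binary path is just proutil commands run per table. A sketch, where olddb, newdb, the table name, and /dumps are placeholders, and where the final idxbuild is needed because binary load does not build indexes:

    # Binary dump: writes one .bd file per table (here /dumps/customer.bd)
    proutil olddb -C dump customer /dumps

    # Binary load into the target database, then rebuild the indexes
    proutil newdb -C load /dumps/customer.bd
    proutil newdb -C idxbuild all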

Regards,

Ompong Paguirigan

Posted by LegacyUser on 19-Nov-2001 07:14

A question about the speed: how many dump/load processes did you run concurrently?

Is the increase in size caused by the sequence in which the tables were loaded leaving more unused space in the blocks? One way to check would be to compare a table analysis from both databases, as sketched below.
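A sketch of that check; mydb is a placeholder, and you would run it against both the old and the new database and compare the per-table figures:

    # Per-table record counts and space utilisation, for comparing the
    # database before and after the reload
    proutil mydb -C tabanalys > mydb_tabanalys.out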

"Ompong Paguirigan" wrote:

>

>Hi Oscar,

>

>In search of a faster way to dump and reload a

>Progress database, I used binary dump/load on a

>869Mb test db to verify/benchmark if it's really

>faster than the regular dictionay dump/load.

>

>Based on the result (Please see the tally below)

>Binary dump/load process is 300% faster than the

>dictionary dump/load. However the problem is the

>size gets bigger.

>

>Original database size: 869Mb

>

>Reg. Dump: 39 mins

>Bin. Dump: 15 mins

>

>Reg. Load: 4 hrs

>Bin. Load: 12 mins

>

>Reg. Indexing: 1 hr 5 mins

>Bin. Indexing: same

>

>After Reg dump & load, DB size: 800MB

>

>After Bin dump & load, DB size: 910MB

>

>Regards,

>Ompong Paguirigan

This thread is closed