OE 10.2B08 Windows
How is it possible that my probkup file is 17 GB and my binary dump is 33 GB?
Table analysis confirms 33 GB of data.
db.lg confirms a successful probkup of 17 GB.
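For reference, the shape of the commands involved (DB name, table name, and paths made up):

  probkup mydb /backup/mydb.bck
  proutil mydb -C dump mytable /dump/dir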
Paul
I didn't mention that the backup was created with -com.
More information. I restored the backup to a tmp directory. There are two Type I storage areas: data and index. The data area grew to 45 GB and the index area grew to 6 GB. I ran dbanalys against the newly restored DB and got the same info: 33.9 GB of data and 3.8 GB of index. No LOBs in the DB.
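The restore-and-verify steps were essentially this (DB name and paths made up):

  prorest /tmp/restored /backup/mydb.bck
  proutil /tmp/restored -C dbanalys > dbanalys.out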
There is one big table with 23.5 GB of data in it per DB Analysis. There is nothing special about the table. Could it be a deleted field in the middle of the record, or something else, that somehow gets compressed by probkup -com? Let me try an ASCII dump just for fun.
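For the curious, the quick-and-dirty ABL version would be something like this (table name made up):

  /* minimal ASCII dump of one table via EXPORT */
  OUTPUT TO VALUE("bigtable.d").
  FOR EACH bigtable NO-LOCK:
      EXPORT bigtable.
  END.
  OUTPUT CLOSE.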
Paul,
Compare the size of the template record and min record size reported by dbanalys.
I guess that new fields were recently added to the table while it already contained records. The (default) values of these fields are not stored in the database, hence they are not in the backup file. The fields are materialized when a binary dump (or ASCII dump) reads the records individually. For example (numbers made up): a record stored at 2,000 bytes that expands to 2,600 bytes on read would make a 10-million-row table dump roughly 6 GB larger than its stored size.
Update:
proutil sports -C dbanalys
DATABASE SUMMARY

                Records          Indexes          Combined
NAME        Size    Tot %    Size     Tot %    Size     Tot %
Total       501.3K   77.5    145.8K    22.5    647.2K   100.0
The size of a backup taken with the -com option can't be smaller than the combined size reported by dbanalys.
Probkup with/without -com:
Backed up 953 db blocks in 00:00:00
vs dbanalys:
207 RM block(s) found in the database.
166 index block(s) found in the database.
498 free block(s) found in the database.
0 index table block(s) found in the database.
1 sequence block(s) found in the database.
67 empty block(s) found in the database.
1025 total blocks found in the database.
The difference between the total blocks in the db and the backed-up blocks is 72 (1025 - 953); it's mainly the 67 empty blocks.
What is the difference in your case?
The one big table has 9.5M records, 23.5 GB total. Record sizes: mean 2632, min 2576, max 7043 bytes.
Template record is 2536 bytes.
From DB Analysis:
40 free block(s) found in the database.
0 index table block(s) found in the database.
1 sequence block(s) found in the database.
26 empty block(s) found in the database.
12522036 total blocks found in the database.
As for the backup (I'm translating from French):
12,522,005 active blocks out of 12,801,043 will be backed up
The backup will require 49.0 GB
...
130,445 blocks written for a total of 16.9 GB
The backup completed successfully.
And 16.9 GB is also the real size of the backup file, isn't it? Or is that only what probkup reported?
Db blocksize is 4K. Each block stores only one record, which in most cases uses just a bit more than half of the block space. The table seems to have a lot of array fields. Probkup does not have any chance to create an output file smaller than the table size. Maybe it's aliens creating the space anomaly in the backup file? :-)
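A quick sanity check on the numbers (my own arithmetic; assuming probkup's default blocking factor of 34):

  12,522,036 db blocks x 4 KB ≈ 47.8 GB -- close to the 49.0 GB estimate
  130,445 backup blocks x (34 x 4 KB) ≈ 16.9 GB -- matches the reported backup size
  2,632-byte mean record / 4,096-byte block ≈ 64% used -- one record per block, ~36% slack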
OMG, if GeorgeP is looking to aliens then I'm in trouble.
Yes, the backup file is 17ish GB.
I am running a probkup on the new, after-dump&load DB, which has Type II storage areas.
BTW, an 8K block size would save half of the disk space for this database.
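The arithmetic behind that (mine; assuming three ~2,632-byte records fit in an 8 KB block once block overhead is accounted for):

  1 record per 4 KB block = 4,096 bytes of disk per record
  3 records per 8 KB block ≈ 2,731 bytes of disk per record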
Ooookaaaaay... probkup -com of the newly loaded, Type II SA DB is... 56 GB.
What a scary case! ;-)
Factoid: PROBKUP has a limit of 65535 backups, at which point the counter overflows and your backups are no longer any good. This may be why the probkup of the after-dump&load DB looked better :)
We've asked for this to be included in the documentation; in the meantime there's a Knowledge Base article:
000068748 - How many online probkups can be taken?
knowledgebase.progress.com/.../How-many-online-probkups-000068748
65535 backups at one backup per day => 180 years. I don't believe there are Progress databases that old. ;-)
OTOH once per hour would run into it in 7.5 years -- and I have heard of people who do stuff like that.
Yes, I too have heard of some who use online backups in place of AI, taking one every hour and accepting an hour's worth of data loss.
An incremental backup strategy would also weigh in here. A particular customer who hit this was running 4-hour fulls and half-hour incrementals with overlap (in place of AI), so about 4 years. Worth knowing about in any case.