While planning for replication I came across some strange behavior: a very small database (84 MB of data according to tablestat) seems to generate an enormous amount of after-image writes.
AI log activity:
13:52:10        07/18/12 07:02 to 08/14/12 13:52 (654 hrs 50 min)

                         Total    Per Min    Per Sec     Per Tx
Total AI writes        3357589         85       1.42       0.44
AIW AI writes          3063070         78       1.30       0.40
Records written        263818K       6876     114.60      35.21
Bytes written        25274080K     658706   10978.44    3372.72
Busy buffer waits         7004          0       0.00       0.00
Buffer not avail             0          0       0.00       0.00
Partial writes          188958          5       0.08       0.02
Log force waits              0          0       0.00       0.00
AI log status
After-image begin date: 03/06/12 07:01
After-image new date: 08/14/12 02:30
After-image open date: 08/14/12 02:30
After-image generation number: 553
Number of after-image extents: 6
Current after-image extent: 7
Number of AI buffers: 25
After-image block size: 8192 bytes
After-image log size: 7264 K
According to the white paper "Determining Fathom Replication Network Bandwidth" (page 6):
Total number of blocks * (block size in kbytes) = number of kbytes written.
654 hours of sampled data is roughly 27 days, so 3357589 / 27 * 8 ≈ 994,841 kbytes, or circa 1 GB written per day. 1 GB written per day for a database sized 84 MB? Even with lots of changes in the database this seems like a lot.
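The arithmetic can be sketched as a quick calculation, using the figures from the promon sample above (and 654 / 24 = 27.25 days rather than the rounded 27):

```python
# Rough check of the white-paper formula against the promon figures.
total_ai_writes = 3_357_589   # AI blocks written over the sampled 654 hours
block_size_kb = 8             # 8192-byte AI blocks = 8 KB each
sample_days = 654 / 24        # roughly 27 days

kb_written = total_ai_writes * block_size_kb
kb_per_day = kb_written / sample_days
print(f"~{kb_per_day / 1024 ** 2:.2f} GB of AI data per day")
```

This lands just under 1 GB/day, in line with the estimate above.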
The database consists of two tables. One (accounting for 90% of the data size in the database) is completely deleted and completely recreated once a day. The other has its older entries deleted daily, and around 5,000 new entries are created daily. Is this behavior I should expect, or could something be wrong?
Progress 10.2B04 running on 64-bit SUSE Linux. Database block size 8192 bytes.
Sorry, the database block size is actually 4096.
The volume of data written to the after-image journals is completely independent of database size. It is solely a function of the database changes being made. Every database change results in one or more records or "notes" being written to the before-image log and to the current after-image extent. When you are replicating, then the same data is also sent to the replication target over the network, along with a small amount of replication metadata.
The promon report you posted shows that on average 85 after-image blocks are written per minute. That is 85 * 8192 * 60 * 24, or 1,002,700,800 bytes per day. There is some overhead (extent headers, block headers, etc.), so it is not all data contained in notes. Given what you say about the database activity, I am not at all surprised by this amount of after-image data.
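That calculation can be reproduced directly from the promon per-minute rate:

```python
# Deriving the per-day AI volume from the promon per-minute average.
writes_per_min = 85      # average AI blocks written per minute (from promon)
block_size = 8192        # AI block size in bytes
bytes_per_day = writes_per_min * block_size * 60 * 24
print(f"{bytes_per_day:,} bytes per day")  # prints 1,002,700,800 bytes per day
```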
To delete a row in a table requires at least the following notes:
- delete the row, with entire row contents
- add a rowid placeholder where the row is
- some of the time, a couple of housekeeping notes to put the block on the rm chain
- one leaf block index entry delete note for each index on the table
- some of the time, notes to delete entries at higher levels in the indexes
Adding a row requires approximately the same number of notes, for the creation of the row and of its leaf-block index entries.
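To illustrate why the AI volume can be several times the database size, here is a hypothetical back-of-envelope estimate for the drop-and-recreate workload described in the question. The average row size, index count, and per-note overhead below are illustrative assumptions, not measured OpenEdge values:

```python
# Hypothetical estimate of daily AI note volume for a table that is fully
# deleted and recreated each day. All per-row figures are assumed for
# illustration only.
table_bytes = int(84 * 2**20 * 0.9)  # ~90% of the 84 MB database
avg_row_bytes = 300                  # assumed average row size
rows = table_bytes // avg_row_bytes  # roughly 264,000 rows

indexes = 2                          # assumed number of indexes on the table
note_overhead = 64                   # assumed metadata bytes per note
notes_per_row = 2 + indexes          # row note + placeholder + one per index

# Both the delete pass and the recreate pass write the full row contents
# plus the accompanying notes.
bytes_per_pass = rows * (avg_row_bytes + notes_per_row * note_overhead)
daily_ai_bytes = 2 * bytes_per_pass  # one delete pass + one create pass
print(f"~{daily_ai_bytes / 2**20:.0f} MB of AI notes/day, "
      f"{daily_ai_bytes / table_bytes:.1f}x the table size")
```

Even with these conservative guesses the daily note volume is several times the table size, and block padding, partial block rewrites, and index housekeeping push the on-disk AI figure higher still.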
In answer to your question: this all looks completely normal and it is unlikely that anything is wrong, unless your application is making unnecessary changes.