This is not a question, just sharing an experience.
I am deleting records from one rather big table that has only one index, and that is the primary unique index.
The deletion goes like this:
The first half-million records were deleted in 3 min, the next half-million in 7 min, then 12 min, 16 min, 21 min, 24 min, 30 min, and so on.
So the further we go, the slower the deletion runs. Within a couple of hours the process was 10 times slower than when it started. That is not surprising: as I go, I am damaging the only unique index I have, leaving placeholders behind in the index.
There are several ways to improve this:
Put the table in its own area -- no, the table is not that big (just 100 million records), and I am not deleting all the records, just one division.
The other way -- go and buy the partitioning license... sure. Some day.
George recommended creating at least one non-unique index in every table. That sure sounds like a plan, but it will not work for today.
Finally, I started running idxcompact while deleting records: every 20 minutes, cron compacts the index. The deletion process ran 5-6 times faster that way than without idxcompact running in parallel!
And each idxcompact run itself gets faster and faster, since there is less and less to compact. It started at 7 minutes and gradually went down to a 30-second run each time.
I had never thought of using idxcompact that way -- as a machine gun.
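For anyone who wants to try the same trick, here is roughly what the cron setup could look like. This is a minimal sketch, not my exact job: the database path /db/mydb, the table PUB.mytable, the index name mytable-pk, and the log path are all hypothetical placeholders -- substitute your own, and check the proutil idxcompact syntax against your OpenEdge version.

```shell
# crontab entry (hypothetical names/paths): compact the table's primary
# index every 20 minutes while the big delete runs in another session.
# idxcompact is an online operation, so it can run against a live database.
*/20 * * * * /usr/dlc/bin/proutil /db/mydb -C idxcompact PUB.mytable.mytable-pk >> /tmp/idxcompact.log 2>&1
```

idxcompact also accepts an optional target-utilization percentage after the index name; the default is usually fine for this kind of repeated, incremental compaction.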
Also a shared experience: if creating a unique index requires a new column with a generated unique ID, adding that column will likely fragment a significant number of records, so you should run a dump-and-load (D&L) afterward. A table or DB analysis will show a large increase in fragments.
Adding a new column can result in record fragmentation, though it depends on the free space available in the RM blocks and on the area's RPB (records per block). But how does that relate to the indexes (unique or not)?
We didn't have unique rows in a few tables, so we had to add a column to hold a GUID before we could add a unique index.