I'm using the command prostrct remove icmasuat d miscellaneoustables64 to remove a storage area from a db. It seems to remove the last extent of the area rather than the whole area. What am I missing? Obviously I can keep removing the last one until they're all gone, but there are 200 of them! :)
Going out on a limb here, since you didn't post your OE version. I know that in OE 9 & 10 the remove option of the prostrct command does exactly what you are seeing: it removes the last unused extent of the area. So if you want the whole area removed, you first need to make sure there is no data or indexes in it (or you can truncate the area, which effectively deletes the data/indexes within it), then keep running prostrct remove until all extents are removed. Then you can rebuild the .st.
James,
Maybe a matter of terminology here. There's no utility to remove a storage area per se.
Storage areas contain database objects (tables, indexes, LOBs) and are made up of one or more extents.
If you're removing the entire Storage Area,
first truncate the area to move the HWM back to the first extent: "proutil dbname -C truncate area [area-name]"
(A neat trick, by the way, for deleting all the data in a storage area while leaving the object definitions in place for the next data load.)
proutil dbname -C truncate bi
Rebuild only the primary indexes for the table(s) in the area truncated (fast, there's no data)
Delete the related table/indexes from the Data Administration tool.
Truncate the bi file again.
Remove all the extents of the area (yes, one at a time in decreasing order, which should of course be scripted ;)
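For what it's worth, a minimal shell sketch of that whole sequence. The database name (mydb), area name ("Misc Tables"), table name (mytable) and extent count are placeholders, and the delete of the table/index definitions still happens interactively in Data Administration:

# truncate the area: HWM back to the first extent, the data in it is gone
proutil mydb -C truncate area "Misc Tables"
# truncate the bi file
proutil mydb -C truncate bi
# rebuild the (now empty) indexes of the table(s) that lived there (fast, there's no data)
proutil mydb -C idxbuild table mytable
# ...delete the table/index definitions in Data Administration, then truncate bi again...
proutil mydb -C truncate bi
# remove the extents, last one first, once per extent (200 of them here)
for i in $(seq 1 200); do prostrct remove mydb d "Misc Tables"; done
# regenerate the .st so it matches the new structure
prostrct list mydb mydb.st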
hth
Stephanie is correct. The command is working as designed.
i=0; while ((i < 200)); do prostrct remove icmasuat d miscellaneoustables64; ((i=i+1)); done
or something close to that...my syntax may be off a bit.
Thanks folks :)
Why in the world would you have 200 extents in an area? That's so wrong!
[mention:9617a07f61934bc98f6fccb3b7fabfae:e9ed411860ed4f2ba0265705b8793d05] guess why I'm removing them! :)
As a way of explaining, it's a 150GB area made up of extents that are smaller than 2GB, most of them a lot smaller. It's a really bad structure. I'm moving stuff around to more sensible areas.
On that note, the bit that is taking the time is the index rebuild at the end. My process is as follows. Does anyone have any tips on something I could do differently (OE11.2.1)?
Binary Dump Data
Drop tables
Add new areas
Create df and move stuff to the new areas needed
Remove the now surplus storage area
Load the binaries
Rebuild indexes
Everything is scripted and works beautifully (roughly as sketched below); it's just the index rebuild that is taking forever.
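For context, a rough shell sketch of that sequence as scripted; all the names here (mydb, tables.txt, drop_tables.df, newareas.st, newareas.df, /dumps) are placeholders, and the .df loads use the prodict batch loader, which may differ from how your script does it:

# 1. binary dump the tables being moved
for t in $(cat tables.txt); do proutil mydb -C dump $t /dumps; done
# 2. drop the old tables (here via a drop .df loaded in batch)
pro mydb -b -p prodict/load_df.p -param drop_tables.df
# 3. add the new storage areas
prostrct add mydb newareas.st
# 4. load the .df that re-creates the tables/indexes in their new areas
pro mydb -b -p prodict/load_df.p -param newareas.df
# 5. remove the now-surplus area's extents (the prostrct remove loop above)
# 6. binary load everything back
for t in $(cat tables.txt); do proutil mydb -C load /dumps/$t.bd; done
# 7. rebuild the indexes (the slow part)
proutil mydb -C idxbuild all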
I would not truncate the area or drop the table or remove the extents until *after* the load and index rebuild are successful.
(Trick: rename the table after dumping and prior to loading the .df)
Thanks [mention:7dc6ee8d33b54aa696ccb0bdd581e3b1:e9ed411860ed4f2ba0265705b8793d05] I'd forgotten about those new options. I'm not using them.
[mention:dc2ff39fa15940708b78017f1194db6a:e9ed411860ed4f2ba0265705b8793d05] I assume your process is ordered that way so you can easily revert to the previous behaviour in case of a problem?
Bingo.
NEVER destroy data in a dump and load until *after* the load is successful and verified.
Note that on HP-UX actual memory use (the -TF parameter) is limited by the kernel parameter maxdsiz_64bit. Increase it above the default 4 GB if you want the idxbuild to use more than 4 GB of RAM.
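For anyone following along, the kind of invocation being discussed looks roughly like this; the flag values are placeholders to tune on your hardware, not recommendations:

# one idxbuild pass using the newer sort/memory options (-TF is the % of RAM to use for sorting)
proutil mydb -C idxbuild all -TB 31 -TM 32 -SG 64 -TF 80 -T /fast/tmp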
James,
I think trying to do maintenance within a single DB is making your life more challenging. As Tom pointed out it also makes a back-out plan more challenging. Also, starting fresh gives you the opportunity to revisit DB design decisions about area RPB and BPC that are perhaps years old and no longer fit the data.
I'd just design a new structure, taking into account what you've learned from the present application environment's static analysis and run-time data, and build a new DB. You can do the new area assignments in your .df and load data, or you can load your existing schema without area assignments and run a tablemove script to put tables and indexes in their new homes. This runs very quickly when there's no data.
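As a sketch of that tablemove pass on the empty schema (table and area names are placeholders; with no data each move completes in seconds):

# move a table and its indexes to their new areas: table-area first, then index-area
proutil newdb -C tablemove customer "Customer Data" "Customer Indexes"
# individual indexes can also be relocated with idxmove if needed
proutil newdb -C idxmove customer.custnum "Customer Indexes"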
Then:
- binary dump from your source
- binary load to your target
- load _user, seqvals, and SQL permissions if necessary
- back up
- idxbuild (with all the new performance goodies)
- back up
and you're good to go. That way you don't have to mess with extent removal, area truncation, etc. Then once you validate your load (analysis of log files, comparison of tabanalys output, etc.), blow away your old DB if you like.
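One way to do the tabanalys comparison mentioned above, as a sketch (the report layout varies a bit between versions, so the diff may need some massaging):

proutil olddb -C tabanalys > old_tabanalys.out
proutil newdb -C tabanalys > new_tabanalys.out
# eyeball the per-table record counts; any mismatch means the load needs investigating
diff old_tabanalys.out new_tabanalys.out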
P.S. This isn't the dev DB in the external USB2 enclosure, is it? :-/