Hi everyone,
I'm looking for the best way to migrate a database from HP-UX to Red Hat Linux. The database is ~4 TB, with more than 200 tables. Downtime must be minimal: no more than one hour in the most extreme case.
So far I have concluded that the optimal solution would be Pro D&L.
I would like to know your opinion on this task and would be grateful for any helpful tips.
Andriy.
I have been known to be wrong and I haven't tried it this week so my memory is a bit dim but, as I recall, index 0 will not multi-thread.
For such a large DB I would suggest you use a selective binary dump (dumpspecified) for static/archived data and load as much as possible into the new DB before your downtime starts.
Thank you, Steven!
I thought about this method, but my tests show it will not fit within the downtime window. In addition, not all of the large tables have fields with a date datatype.
Andriy.
In my experience DUMP SPECIFIED is really, really, really slow. You'd probably be better off hand delivering stone tablets.
How large is the largest table? That is probably the table that will determine the minimum downtime.
Are the tables in type 2 areas?
The largest table is ~1.1 TB. All tables are in Type II areas.
I have asked for a refreshed tabanalys. As soon as I receive it, I will have more up-to-date info.
The fastest way to dump the table is probably a binary dump with index 0.
If you can get that done reasonably close to your down time window then you might be able to do a more or less "normal" d&l.
A lot will depend on what you have available for HW resources.
Do you have a horizontal table partitioning license available?
[quote user="ChUIMonster"]
The fastest way to dump the table is probably a binary dump with index 0.
If you can get that done reasonably close to your down time window then you might be able to do a more or less "normal" d&l.
A lot will depend on what you have available for HW resources.
[/quote]
1. Run binary dump in parallel with multiple tables.
2. Copy the dump files to another server.
3. Load these files to a new database without build indexes.
4. Run idxbuild.
I think it will still take more time than I will have.
What do you mean by "available HW resources"? HDD, CPU, RAM?
[quote user="ChUIMonster"]
Do you have a horizontal table partitioning license available?
[/quote]
Unfortunately no.
But it's interesting: how would this help with migrating to another server?
The effectiveness of multi-threading the dump varies. But you should try it.
Horizontal Table Partitioning would give you a simple way to do it over time - sort of like pro d&l only better:
224 - Nirvana v3.pptx: pugchallenge.org/.../224 - Nirvana v3.pptx
(You'd have to write some 4gl code to sweep stuff as it moves from one partition to the other.)
You could also probably do it with CDC.
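The "sweep" could be as simple as batch-updating the partition key so records migrate over time during normal production. A minimal ABL sketch, where the table name ("order"), the partition-key field ("part-flag"), its values, and the batch size are all assumptions:

```abl
/* Sketch only: batch-sweep records so they migrate between partitions.
   With horizontal table partitioning, updating the partition key field
   physically moves the record to the target partition. */
DEFINE VARIABLE i AS INTEGER NO-UNDO.

OUTER:
REPEAT:
    i = 0.
    DO TRANSACTION:
        FOR EACH order EXCLUSIVE-LOCK
            WHERE order.part-flag = "old":
            ASSIGN order.part-flag = "new".  /* record moves partitions */
            i = i + 1.
            IF i >= 500 THEN LEAVE.          /* keep transactions small */
        END.
    END.
    IF i = 0 THEN LEAVE OUTER.               /* nothing left to sweep */
END.
```

Small transactions keep the BI file and lock table under control while the sweep runs alongside production.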
[quote user="ChUIMonster"]
(You'd have to write some 4gl code to sweep stuff as it moves from one partition to the other.)
You could also probably do it with CDC.
[/quote]
[quote user="ChUIMonster"]
The effectiveness of multi-threading the dump varies. But you should try it.
[/quote]
I will try it.
But does -index 0 allow a multi-threaded dump?
[quote user="mfurgal"]
I have been known to be wrong and I haven't tried it this week so my memory is a bit dim but, as I recall, index 0 will not multi-thread.
[/quote]
We (WSS) took the CDC approach for the DBA Challenge in Prague. It worked quite well.
Basically we enabled CDC and grabbed online backup. Restored the backup and dumped from that backup and loaded into the new structure. While that was happening we had CDC capturing changes. We loaded the changes and when everything was all caught up switched database.
Total down time was miniscule.
There was some 4gl coding required to dump the captured changes and then load them into the target.
We did mess up slightly and forgot to grab sequence values.
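The replay step might look roughly like this in ABL. This is a sketch only: the change-table name ("cdc_customer"), the key field ("cust-num"), and the metadata field names and operation codes are assumptions to be verified against the CDC documentation for your release:

```abl
/* Sketch: replay captured changes into the target table, in capture order.
   Assumes the session is connected to both databases and that the change
   table carries _Change-Sequence, _Operation and a copy of the tracked
   source fields (verify names and codes against the CDC docs). */
FOR EACH cdc_customer NO-LOCK
    BY cdc_customer._Change-Sequence:

    CASE cdc_customer._Operation:
        WHEN 1 THEN DO:   /* create */
            CREATE customer.
            /* BUFFER-COPY copies like-named fields only, so the CDC
               metadata fields are ignored automatically */
            BUFFER-COPY cdc_customer TO customer.
        END.
        WHEN 2 THEN DO:   /* delete */
            FIND customer EXCLUSIVE-LOCK
                WHERE customer.cust-num = cdc_customer.cust-num NO-ERROR.
            IF AVAILABLE customer THEN DELETE customer.
        END.
        WHEN 3 THEN DO:   /* update: apply the after-image */
            FIND customer EXCLUSIVE-LOCK
                WHERE customer.cust-num = cdc_customer.cust-num NO-ERROR.
            IF AVAILABLE customer THEN
                BUFFER-COPY cdc_customer TO customer.
        END.
    END CASE.
END.
```

And, as the post above warns, carrying the current sequence values across is a separate step that is easy to forget.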
The threads of a binary dump each get a subset of the unique index keys stored in the root index block. A binary dump with -index 0 does not use the indexes, so it is not multi-threaded.
> The threads of a binary dump each get a subset of the unique index keys stored in the root index block. A binary dump with -index 0 does not use the indexes, so it is not multi-threaded.
It is good to know this, because "proutil db -C dump table . -threadnum 8 -index 0" won't throw a warning or error. It will just silently do a single-threaded dump.
Also, "-index 0" remains undocumented. :(
This is very similar to my case. Thank you! I'll consider it.
In addition, we have a license for Pro2Oracle; would we need to buy a separate license for Pro2OpenEdge?
It seems to me that Pro D&L does almost the same thing, but costs several times less.
What do you think about Pro D&L? Is it still supported? Are there any nuances with it?
P.S.
Yesterday I sent the same post, but received a message that it had to be checked by the forum moderator. Unfortunately, it seems my post still has not been approved. This is strange...
[quote user="ChUIMonster"]
We (WSS) took the CDC approach for the DBA Challenge in Prague. It worked quite well.
Basically we enabled CDC and grabbed online backup. Restored the backup and dumped from that backup and loaded into the new structure. While that was happening we had CDC capturing changes. We loaded the changes and when everything was all caught up switched database.
Total down time was miniscule.
There was some 4gl coding required to dump the captured changes and then load them into the target.
We did mess up slightly and forgot to grab sequence values.
[/quote]
I don't remember what CDC does about tables without unique keys.
IMHO such tables ought not to exist ;) If they do exist the responsible designer always seems to have fled the scene of the crime...
Apparently you have a table without a unique key? Is it a large table? Or something small and readily dealt with as an exception?
If / when you talk with Progress about Pro D&L and your allowable downtime, you need to state that some of your tables don't have a unique key.
[quote user="ChUIMonster"]
I don't remember what CDC does about tables without unique keys.
IMHO such tables ought not to exist ;) If they do exist the responsible designer always seems to have fled the scene of the crime...
Apparently you have a table without a unique key? Is it a large table? Or something small and readily dealt with as an exception?
[/quote]
Yes, there are such tables in the database. Five of them are small, from 2.5 GB to 3.2 GB. One table is 34 GB, but that is not a problem; I think I can deal with those through a binary dump. But there is one table that is almost 900 GB...
It seems to me that the Pro2 is the preferred choice.
Many thanks to all for your advice!
Hi everyone,
Perhaps I missed something about binary dump/load (dumpspecified or -index 0).
At the end of the load, you must rebuild your indexes.
How long will that take? I think it will be more than one hour.
(The size of this database is ~4 TB, with a downtime of less than one hour.)
If you finish with a binary load (partial or full table), you must rebuild the indexes of the whole table.
With dumpspecified you can only filter on the first field of an index (perhaps not enough; an index can have up to 16 fields).
And if you run a binary load with the "build indexes" option after several dumpspecified loads, you will delete all the index entries you built before, so you must rebuild the indexes of the whole table anyway.
I think this is why both Pro2 and the CDC approach finish by capturing the changes and then loading them into the target with some 4GL code (or something like it).
IMHO, CDC is the next "level" of the Auditing option (no extra license? available from 10.2B08?). Auditing is a little more complex to use than CDC, but you could do the job with it.
IMHO, regarding unique keys / tables without unique keys:
- Be careful: some unique key values can change during the life cycle of a record.
- If you have a table without a unique key but only a few records (fewer than 10, for example) per index entry, you can work with the records in groups: delete all records for that entry in the new DB, then load all records for it from the old DB.
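That last approach (working per index entry rather than per record) might look like this in ABL. A sketch only, with both databases connected and the table ("history"), key field ("order-num"), and logical DB names ("olddb", "newdb") all assumed:

```abl
/* Sketch only: refresh a table with no unique key, one key value at a
   time: delete every copy of the key in the new DB, then reload all
   records for that key from the old DB. */
DEFINE BUFFER old-hist FOR olddb.history.
DEFINE BUFFER new-hist FOR newdb.history.

PROCEDURE refresh-key:
    DEFINE INPUT PARAMETER ip-order AS INTEGER NO-UNDO.

    DO TRANSACTION:
        /* remove every record for this key in the new DB ... */
        FOR EACH new-hist EXCLUSIVE-LOCK
            WHERE new-hist.order-num = ip-order:
            DELETE new-hist.
        END.
        /* ... then reload all records for it from the old DB */
        FOR EACH old-hist NO-LOCK
            WHERE old-hist.order-num = ip-order:
            CREATE new-hist.
            BUFFER-COPY old-hist TO new-hist.
        END.
    END.
END PROCEDURE.
```

Doing delete and reload in one transaction keeps each key value consistent even if the refresh is interrupted.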
True, you will need to rebuild indexes if you do a traditional dump & load.
For what it is worth... I frequently run into people who assume that they cannot dump & load because it will "take too long". But if you start asking questions about how long their window really is and what the circumstances really are I often find that it can actually be done much more reasonably than they think.
In this case we are hearing that there is a one hour window. That may be a rock-solid, no-way-it-can-be-any-longer limit. I don't know. But lots of people say things like that when they think a dump & load requires a 3-day weekend. When you show them that it could actually be done in 4 hours, they often find that maybe they could get a bit more time by asking nicely.
Hi,
You are both right.
I understood perfectly well that 1 hour (a limit defined by the business) would not be enough for a standard migration using binary dump & load. That's why I considered Pro D&L first. But after reading Mike Furgal's presentation, I decided that Pro2 would be better. Especially since, as I found out today, we have a Pro2Enterprise license, not only Pro2Oracle as I thought before, so we do not need to buy anything extra for Pro2Pro.
Now I must learn the features of Pro2 in detail, run some tests, and wait for the project to start, when my bosses ask about it. I think I still have plenty of time to prepare.
Once again thank you all for your participation!
Andriy.
If the table does not have a unique index, you can easily create one. Create a sequence. Create an integer field (or int64 if it is a very big table). Then add a CREATE trigger that assigns that field the next sequence value, and you have a unique index. I did this trick 10 or 12 years ago specifically for BravePoint's Pro D&L. It still works, no problems.
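A minimal sketch of that trick in ABL, assuming a sequence "seq-recseq" and a new field "recseq" have been added for an example table "order-line":

```abl
/* Sketch only: surrogate-key CREATE trigger. Attach this as the CREATE
   trigger of the table, then define a unique index on recseq. */
TRIGGER PROCEDURE FOR CREATE OF order-line.

ASSIGN order-line.recseq = NEXT-VALUE(seq-recseq).
```

Backfill recseq for the existing records with a one-time FOR EACH before activating the unique index; new records then get their value from the trigger.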