DBTOOL width fix - takes ages (never used to)

Posted by Admin on 17-Aug-2011 06:12

I have been using dbtool to fix SQL field width errors for quite a while.

I expect a table with a lot of records in it to take a reasonable amount of time.  I have seen 10 threads processing 4m records from 1 table take 10 minutes.

All of a sudden it takes an age, even for tables with a handful of records (e.g. 27), to the point where I think it's stuck in some sort of loop.  It continues to read from the disk (I can see the IOPS) so it's definitely doing something.  Surely 27 records should be complete within a matter of seconds?

Anyone experienced anything similar/got any tips?

Thanks, Chris

OE10.2b02, and the version hasn't changed recently.

All Replies

Posted by Admin on 17-Aug-2011 07:31

OK, this only appears to happen on our production and test environments (test is a fairly recent copy of production).  Running DBTOOL against the development DB (it has a similar number of records), it whizzes through within a couple of minutes.

Posted by Richard Banville on 17-Aug-2011 08:04

I have not seen or heard of such an issue in the past, however you have piqued my interest.  Any idea what could be different between the 2 databases?  I'd be happy to get a bug logged for this if there is a reproducible case you can share with us.

Posted by Admin on 18-Aug-2011 11:37

I'm really not sure, to be honest.  Nothing in the logs suggests any errors (unless there are other places I can look other than the .lg file).

What I do know is that the disks are in use a lot, and DB reads from disk are high when it's running.  I do know from dtrace that dbtool is reading from the db extents (it's Solaris here).

I'm just getting some iostat data to compare like for like and see if it's related to disks or DB layout.  Dev & Test are on the same server so the only thing that is different is the DB itself.

I'm a bit confused as to why it needs to do all those DB reads for one table with 30 or so records in it.  But then I don't know what dbtool is doing behind the scenes - could you shed any light?!

Everything is in the schema area (bad, I know, and something we need to sort).  The main difference between the actual databases is the size and content of one table (not the one I'm fixing the width on).  One DB is 13GB and the other is 75GB.  A large chunk of that 75GB (let's say 40-50%) is documents (PDFs, images etc.) stored in raw format.

Any help much appreciated.

In the meantime, is it possible to manually update the width behind the scenes?

Thanks,

Chris

Posted by Richard Banville on 18-Aug-2011 11:57

You can manually update the sqlwidth via the ABL or the data dictionary.

I'll shoot you off a code example later this afternoon, but you basically need to update the associated _field._width value in the schema.
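
Roughly, the idea looks like this (just a quick sketch - the table and field names below are only examples, so substitute your own and try it against a copy of the DB first):

/* Set the SQL width of the "name" field in "customer" to 1000. */
FIND _File WHERE _File._File-Name = "customer" NO-ERROR.
IF NOT AVAILABLE _File THEN RETURN.

FIND _Field WHERE _Field._File-recid = RECID(_File)
              AND _Field._Field-Name = "name"
            EXCLUSIVE-LOCK NO-ERROR.
IF AVAILABLE _Field THEN
    ASSIGN _Field._Width = 1000.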

dbtool scans the entire area looking for data associated with a particular table.  It does not scan by index.

If in a TI area, all records of all record blocks are examined due to the organization of record data in a TI area.  If the record is not from the table requested, then after initial examination to determine that, dbtool will skip the analysis of the record and move on to the next record.

If in a TII area, only those records associated with the table are examined.  Blocks are scanned a cluster at a time.  If, upon first examining a cluster, it is noticed that the cluster is not for the table requested, the entire cluster is skipped and the next one examined.  In this way, dbtool will process the same sized area much quicker in a TII area than in a TI area (when there is data from more than one table or index in the area, that is).
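
If you want to check which type each of your areas is, here is a quick sketch against the _Area metaschema table (Type II areas show a cluster size greater than 1):

/* List every area with its cluster size. */
FOR EACH _Area NO-LOCK:
    DISPLAY _Area._Area-number
            _Area._Area-name FORMAT "x(32)"
            _Area._Area-clustersize.
END.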

Posted by Thomas Mercer-Hursh on 18-Aug-2011 12:02

I.e., one would produce a massive performance boost for dbtool on one table by switching to type 2 areas....

Posted by Admin on 18-Aug-2011 12:30

I was leaning towards a whole read of the DB, looking at the stats. 

It must just always have taken this long and I had never noticed.....or at least not taken as long on other production databases.

We definitely need to move away from Type I, and this gives me more reason to spend some time on sorting it out.

A quick script would be very handy for now - can I update this with users connected?

Thanks for your help.

Chris

Posted by Thomas Mercer-Hursh on 18-Aug-2011 12:42

Even if you were to scan all tables, I'll bet type 2 areas would give a big boost ... and they would help your performance all the time, not just for dbtool runs.
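
For what it's worth, the move itself is not a huge job.  Very roughly, and with the area number, extent path and table name below being nothing more than placeholders (it needs exclusive use of the table and a verified backup first): add a Type II area to a structure file, say add_type2.st, with 64 records per block and 512-block clusters:

d "Data Area":8,64;512 /db/mydb_8.d1

then add the area and move the table (and its indexes) into it:

prostrct add mydb add_type2.st
proutil mydb -C tablemove customer "Data Area" "Data Area"

(tablemove takes the table area and, optionally, a separate target area for the indexes.)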

Posted by Richard Banville on 18-Aug-2011 14:42

ID: P26983
Title: "How to use 4GL to adjust the SQL Width Property Setting of a table field?"

I always thought you could do this online but apparently you need exclusive access to the schema.

Anyway, you can try it out.

Another way to do it (this solution is linked from the one above):

ID: P128368
Title: "How can I modify a column's SQL-WIDTH without using the Data-Dictionary tool?"

You can also use SQL directly:

ALTER TABLE <table-name>
ALTER COLUMN <column-name> SET PRO_SQL_WIDTH <value>;

ALTER TABLE pub.customer ALTER COLUMN name SET PRO_SQL_WIDTH 1000;

Again, however, exclusive schema access is required.
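
You'd run that from a SQL client connected through the SQL broker - for example, something along the lines of the following with sqlexp (the host, port, user and password here are just placeholders for whatever your SQL broker uses):

sqlexp -db mydb -H dbhost -S 5162 -user sysprogress -password xxxxx

and then issue the ALTER TABLE statement at the SQL Explorer prompt.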

This thread is closed