Cannot change data due to corrupt index

Posted by nix1016 on 12-Apr-2017 02:19

I'm having some issues changing data in a particular field that is part of an index. The data in this field is 'corrupt' because it was imported as codepage 1252 into a UTF-8 database.

This is what's in the field right now: d’un Lion et d’un Tigre’’ 
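To see why the field looks like this, here is a minimal Python sketch (not part of the OP's ABL environment) of the two encoding mix-ups at play: CP1252 bytes stored verbatim in a UTF-8 database are not valid UTF-8, while UTF-8 bytes viewed through a CP1252 lens produce the "’" pattern seen above.

```python
# CP1252 encoding of "Étude" is a single 0xC9 byte for É.
cp1252_bytes = "Étude".encode("cp1252")          # b'\xc9tude'
try:
    cp1252_bytes.decode("utf-8")
    valid_utf8 = True
except UnicodeDecodeError:
    valid_utf8 = False                            # 0xC9 followed by 't' is not valid UTF-8
print(valid_utf8)  # False -- such bytes can break a UTF-8 index key

# Conversely, the UTF-8 bytes of "d’un" rendered as if they were CP1252:
mojibake = "d’un".encode("utf-8").decode("cp1252")
print(mojibake)  # dâ€™un -- the "â€™" pattern quoted in the corrupt field
```

This is only an illustration of the mechanism; the exact byte sequence in the OP's record may be a mixture of both cases, as discussed further down the thread.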

I tried to run a codepage-convert on it, or just to clear out the field, and I'm getting: SYSTEM ERROR: Index <idx name> in <tablename> for recid 1637212 partition 0 could not be deleted. (17630).

I've tried idxfix as per this KB to no avail; is there anything else I can try, short of deleting this record?

All Replies

Posted by e.schutten on 12-Apr-2017 04:25

Did you use the correct code page when using idxfix?

Did you do option 1 "Scan records for missing index entries" and after that option 2 "Scan indexes for invalid index entries" with the option to fix the index set to Yes?

The reason is that I've sometimes seen option 3 "Both 1 and 2 above" fail to resolve the index corruption when running 1 and 2 separately does; I don't know why.

You could also use option 6 "Delete one record and its index entries" of idxfix.

Posted by e.schutten on 12-Apr-2017 04:32

So I mean:

proutil <dbname> -C idxfix -pf <pf file>

The pf file should have the same code page AND collation as the database.
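For reference, a minimal .pf along these lines might look like the sketch below for a UTF-8 database. The collation line is an assumption; substitute your database's actual collation (visible, for example, in the Data Administration tool):

```
-cpinternal UTF-8
-cpstream UTF-8
-cpcoll basic
```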

Posted by Garry Hall on 12-Apr-2017 08:25

I don't think you need to rebuild the index at all. I think you just need to modify the field value using a client with the same -cpinternal that originally entered it. I'd need more information on how the value got in there in the first place (-cpinternal of the client that inserted the record, codepage used to read the stream). This might get rather involved, in which case I'd suggest contacting Tech Support.

Posted by nix1016 on 12-Apr-2017 19:05

Thanks for your suggestions. We use -cpinternal UTF-8 and -cpstream UTF-8, but we imported the CSV without any conversion, and it would have been in 1252, so the European characters came in incorrectly. Hence we have records where the indexed field begins with a gibberish character, like ȴude instead of Étude.

I have tried idxfix again using the -pf file, running options 1 and 2 separately, but still no go. I've also tried changing the field inside the program with -cpinternal 1252 (it's usually UTF-8), but I get bfx: field too large for data item. Try to increase -s. (42) before I can even reach the screen to change it. I tried increasing -s from 800 to 16000 and still get this error.

Anyway, I've figured out a way to fix this: inactivate the indexes that contain the corrupted field, fix the data, and reactivate them. It seems to be working OK now; just a bit of a bummer it had to be done this way!
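For the "fix the data" step while the indexes are inactive, one possible repair for the "’"-style garbage is to reverse the double encoding. This is a hedged Python sketch of the idea (the function name is hypothetical, and it only applies where the stored text is UTF-8 that was misread as CP1252, not where raw 1252 bytes were stored):

```python
def repair_mojibake(text: str) -> str:
    """Reverse a CP1252/UTF-8 mix-up: re-encode the garbled text as the
    CP1252 bytes it displays as, then decode those bytes as the UTF-8
    they really are."""
    return text.encode("cp1252").decode("utf-8")

print(repair_mojibake("dâ€™un"))  # d’un
print(repair_mojibake("Ã‰tude"))  # Étude
```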

Posted by cverbiest on 13-Apr-2017 01:56

@nix1016, we occasionally see the same issue and use the same fix. Your post should be the accepted answer, bummer included; deactivating and reactivating the index is painful.

I think Progress should replace the error with "WARNING: corrupt index entries for record 1637212 were fixed", or with "Invalid data in <field-name> for record 1637212, fix data", and allow the data to be fixed without deactivating the index.

I found the following related defect:

Defect #: PSC00356359
Defect Type: Defect
Defect Status: Reported to Development
Defect Description: ABL code is triggering 17630 or 1422 error but no index corruption found in database.

Posted by Garry Hall on 13-Apr-2017 13:06

Note that PSC00356359 is a very specific corner case, related to a form header using a UDF and an attempt to display the duplicate index entry error while output is directed to a file. Not all occurrences of (17630) and (1422) are caused by PSC00356359.

I had planned to test with the information provided by @nix1016, to see if I can identify why deleting the row from a UTF-8 client does not resolve the issue. I would prefer a better solution than deactivating and rebuilding indices.

Posted by Garry Hall on 13-Apr-2017 14:26

I could not reproduce the (17630) error with the information provided. I loaded with -cpinternal UTF-8 and with -cpinternal 1252, and once the data was loaded, I was able to delete it from either a -cpinternal UTF-8 or a -cpinternal 1252 client. There are a number of unknowns, but since you have a resolution, it is not worth tracking them down. FWIW, it does seem that at least part of the field is UTF-8 data that was loaded as if it were 1252. I suspect the data is somewhat malformed, as the two U+2019 Right Single Quotation Mark characters at the end are encoded differently (in 1252) from the same characters that should have appeared in d’un but got converted to ’ instead.

Posted by nix1016 on 17-Apr-2017 19:47

Yes, I think the problem occurs when the deformation is right at the start of the field. We were able to change some records with gibberish characters, just not the ones where the gibberish is at the very start, which is where the index is utilised. I think the issue was that the data was imported using -cpstream UTF-8 instead of 1252, which is what the .CSV was in; the -cpinternal is always set correctly to UTF-8. But as you mentioned, it's not really worth pursuing as there's a workaround (albeit a painful one!). Thanks for your help!
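If anyone hits this and wants to find which records still hold double-encoded text before reactivating indexes, a heuristic like this hedged Python sketch could help (function name hypothetical, not ABL; it flags text that successfully round-trips through the CP1252-as-UTF-8 repair and changes in the process, so raw 1252 bytes or characters outside CP1252 are simply reported as not-mojibake):

```python
def looks_like_mojibake(text: str) -> bool:
    """Heuristic: if re-encoding as CP1252 and decoding as UTF-8 succeeds
    AND changes the string, the text is probably double-encoded."""
    try:
        repaired = text.encode("cp1252").decode("utf-8")
    except (UnicodeEncodeError, UnicodeDecodeError):
        return False
    return repaired != text

print(looks_like_mojibake("dâ€™un"))  # True  -- garbled
print(looks_like_mojibake("d’un"))    # False -- already clean
print(looks_like_mojibake("Étude"))   # False
```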

This thread is closed