Double jeopardy when a NO-UNDO temp table violates a unique constraint

Posted by dbeavon on 10-Dec-2019 16:40

I'm looking for help with TT's (of the NO-UNDO variety).  Does anyone have experience with a TT that raises an error twice for the same problem?  I'm struggling with this.

I found a KB article that describes the basics of the problem.  It has workarounds, but they are truly strange, and I'm hoping someone has a better approach that incorporates modern S.E.H. (structured error handling).

https://knowledgebase.progress.com/articles/Article/000037277

In my programs I have S.E.H. in place and I catch the ProError related to the failure (unable to add a duplicate record):

** TT_Item already exists with  "A123"  55951730  50843491  1. (132)

For example, the following catch block will start executing after the unique constraint violation.

      /* ********************************************************************* */
      /* Convert ProError's and AppError's                                     */
      /* ********************************************************************* */
      CATCH v_ProError AS Progress.Lang.ProError:
         
         
         /* Empty dataset */
         /* TEMP-TABLE TT_AllocatedItem:EMPTY-TEMP-TABLE(). */ /* !!!! RAISES ERROR AGAIN !!!! CANNOT PROCEED WITH ERROR HANDLING */
         /* DATASET DS_ItemAllocation:EMPTY-DATASET(). */
         
         /* Return the error */
         p_ErrorAllocation = ParseStandardErrorMessage(v_ProError).
         RETURN.
         
      END CATCH.

After I've caught and started handling the constraint violation, however, it will be raised again - either in the CATCH block or in the calling client program that shares the static TT and DS.

Note that I've even tried to empty the temp table and dataset as part of my error handling (currently that is commented out in my example above).  But those attempts to empty the dataset have the unintended effect of immediately raising the constraint violation again ... and preventing my CATCH block from completing!!!

This is very strange behavior and it is the first time I've ever encountered it.  The workaround in the KB suggests that there is no alternative but to make the TT an UNDO-able one.  This seems extremely unhelpful since I do not want the overhead of the UNDO functionality on the TT.  Is there any other approach?  Shouldn't I be able to clear/reset the static data in the CATCH block without triggering yet another error???

All Replies

Posted by bronco on 10-Dec-2019 17:44

If I delete just the record, I don't get the error anymore.

define temp-table tt no-undo
 field c1 as character
 field c2 as character
 index c1 as primary unique c1
 .

create tt.
assign
 tt.c1 = "1"
 tt.c2 = "1".

create tt.
assign
 tt.c1 = "1"
 tt.c2 = "2".

catch err1 as Progress.Lang.Error :
 MESSAGE 'err...' VIEW-AS ALERT-BOX.
 delete tt.
end.

finally:
   MESSAGE 'done' VIEW-AS ALERT-BOX.
end.

Posted by bronco on 10-Dec-2019 17:47

The second error that is.

Posted by dbeavon on 11-Dec-2019 18:27

Here is an extended variation of your example.   Program "EntryProg.p" calls program "HelperProg.p".

EntryProg

BLOCK-LEVEL ON ERROR UNDO, THROW.


define temp-table tt NO-UNDO 
 field c1 as character 
 field c2 as character 
 index c1 as primary unique c1
 .

/* Buffer */ 
DEFINE BUFFER B1 FOR tt.
 
/* ************************************************************************ */
/* Call the helper                                                          */
/* ************************************************************************ */
DO ON ERROR UNDO, THROW:
   
   RUN HelperProg.p (INPUT TABLE tt BIND).
    
      
   /* ********************************************************************* */
   /* Catch and clean?                                                      */
   /* ********************************************************************* */
   catch err1 as Progress.Lang.Error :
      MESSAGE 'err 2...' VIEW-AS ALERT-BOX.
      
      /*      EMPTY TEMP-TABLE tt.*/
      delete tt.
      
   end.
   finally:
      MESSAGE 'done 2' VIEW-AS ALERT-BOX.    
   end.

END.

/* ************************************************************************ */
/* Success                                                                  */
/* ************************************************************************ */
MESSAGE 'success' VIEW-AS ALERT-BOX.   

 

HelperProg

BLOCK-LEVEL ON ERROR UNDO, THROW.


define temp-table tt_help NO-UNDO REFERENCE-ONLY 
 field c1 as character 
 field c2 as character 
 index c1 as primary unique c1
 .
 
/* ************************************************************************ */
/* Params                                                                   */
/* ************************************************************************ */
DEFINE INPUT PARAMETER TABLE FOR tt_help BIND. 
 
/* ************************************************************************ */
/* The work                                                                 */
/* ************************************************************************ */
DO ON ERROR UNDO, THROW:
       
   DEFINE VARIABLE v_Loop AS INTEGER NO-UNDO.
   
   DO v_Loop = 0 TO 19:
      RELEASE tt_help.
      create tt_help.
      assign 
       tt_help.c1 = STRING(v_Loop MODULO 10)
       tt_help.c2 = "1".
   END.
       
      
      
   /*   catch err1 as Progress.Lang.Error :   */
   /*      MESSAGE 'err...' VIEW-AS ALERT-BOX.*/
   /*      /*      EMPTY TEMP-TABLE tt_help.*/     */
   /*      /*          delete tt_help.*/           */
   /*   end.                                  */
   /*   finally:                              */
   /*      MESSAGE 'done' VIEW-AS ALERT-BOX.  */
   /*   end.                                  */
   
END.


 

This still behaves as yours did.  I really don't like it.  The outer program (EntryProg) is deleting the default tt buffer as part of the catch handler.  It makes LOTS of scary assumptions about the problem that had been encountered in HelperProg.  

Example assumptions:

  • That the particular error which was raised was related to a duplicate item constraint violation.
  • That the default tt buffer is positioned properly
  • etc.

In other words, there is very poor encapsulation of the logic done by "HelperProg", and there is no separation of concerns between the two programs.  At the very least it would be nice if "EntryProg" could use the EMPTY TEMP-TABLE statement, rather than assuming the type of error and the positioning of the default buffer!

Note in the code above that I do NOT wish for HelperProg to handle its own errors.  Otherwise that would have been a much better place for your "delete tt" if it were absolutely necessary to make all the assumptions about the nature of the error. 

Posted by bronco on 11-Dec-2019 19:49

You can always fine-grain the handling by checking the error number of the error object...

Posted by dbeavon on 11-Dec-2019 21:59

>>You can always fine-grain the handling ...

I really don't want to...  I just want to step out of the mess and show the underlying error message to the user (whether the error is related to a unique constraint violation or something totally unrelated).  But I cannot easily escape this portion of the callstack without first micro-managing the data in the temp-table.

Notice that "EntryProg" shouldn't have to know about (or care about) the specific work that is being done within "HelperProg".  It just cares if it succeeded or failed.

Another programmer who reads "EntryProg" will be scratching their heads about why there is a step to "delete tt".  (Granted that emptying the entire temp-table wouldn't be a major improvement, but it wouldn't involve micro-managing records or making an assumption that the current/default buffer is important in some way.)

Are there any methods on temp-table that might reset things so the error doesn't come back to bite us a second time?  I tried the following without luck.

     TEMP-TABLE tt:EMPTY-TEMP-TABLE ().

     EMPTY TEMP-TABLE tt.

     TEMP-TABLE tt:CLEAR().

I had pretty high hopes for the last one but it didn't work either.

Posted by Thomas Mercer-Hursh on 11-Dec-2019 22:08

Just a guess, but have you tried putting the empty TT in a FINALLY block instead of the CATCH?

Posted by dbeavon on 11-Dec-2019 22:30

The "EMPTY TEMP-TABLE" operation always raises its own error again (although I don't see why it should when "delete tt" does not).  

The fact that I can't seem to avoid the raising of the error is the problem.

I'm tempted to open a support case.  It would seem to me that an "EMPTY TEMP-TABLE" operation should supersede and trump all the constraint violations that might otherwise exist in the default buffer.  Those should be a moot point if the data is being cleared.  Alternatively there should be a command to remove or disable the constraints in the current TT or DS.  That is something I was hoping to find in the list of temp-table handle operations...

Posted by Laura Stern on 11-Dec-2019 23:49

I haven’t read the kbase entry.  So this may be redundant. But it sounds like the kbase doesn’t really explain why the error is happening again.  

You have an invalid value in the tt record, which stays there because the modification was not undone.  Any operation that would cause the record to be flushed out (like reading another record into the same buffer or doing a release) will cause the validation, and thus the error, to happen again, ad infinitum.

This bug was reported before, and if my memory serves me correctly, to “fix” it, the AVM was modified to treat delete as a special case that would not complain and make the error stop.  

I suppose you have a case for the EMPTY-TEMP-TABLE method doing the same.  Though it seems to me that making the TT UNDO might be the cheaper option!
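
For example, modifying the catch block in bronco's repro along these lines should show the same failure coming back on every flush attempt until the record is deleted.  This is only a sketch of that mechanic; NO-ERROR is used so the second failure lands in ERROR-STATUS instead of escaping the catch:

   catch err1 as Progress.Lang.Error :
      MESSAGE 'err...' err1:GetMessage(1) VIEW-AS ALERT-BOX.
      RELEASE tt NO-ERROR.          /* flushing re-validates the bad record still in the buffer */
      MESSAGE 'raised again:' ERROR-STATUS:ERROR VIEW-AS ALERT-BOX.   /* expected: yes          */
      delete tt.                    /* the special-cased delete finally makes it stop           */
   end.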

Posted by jamesmc on 12-Dec-2019 08:46

I raised the same question earlier this year and got a pretty decent explanation of what was happening...

https://community.progress.com/community_groups/openedge_general/f/26/t/57569

Posted by dbeavon on 12-Dec-2019 15:34

Thanks jamesmc.  Neither of the workarounds seems that appealing.  Switching to an UNDO-able temp-table will incur unnecessary overhead.  We've gotten to the point where all our TT's are NO-UNDO by convention (similar to our memory variables).

Deleting the default record buffer in a catch block is also a very questionable workaround...  especially if you are doing it in a generic catch block ... in an outer scope that is not supposed to know about the internal implementation details of any methods that are called.

Perhaps the implementation of ABL's "EMPTY-TEMP-TABLE" operation should be responsible for deleting the default buffer as a first step, before doing the rest of the work the normal way.  That seems fairly straightforward, and it would unburden us from micro-managing things at that level.  Or maybe there should be a way to "unbind" the temp-table, essentially flagging it as an unused and uninteresting object for the current session & program (similar to the way a "REFERENCE-ONLY" TT is uninteresting before it is assigned a reference to the instance of another TT).  If a temp-table were "unbound", then ABL wouldn't need to validate it, since it was abandoned anyway.

Posted by ske on 12-Dec-2019 15:53

> The fact that I can't seem to avoid the raising of the error is the problem.

If you assign another unique value to the field with the constraint after the error was first raised, then it will stop raising any more errors...

(Or delete the record, as others have said.)

If you feel uncomfortable deleting or changing the record blindly when handling the error in the calling program, you could guard it with e.g. IF NEW <tablename>.  The record remains NEW longer than usual, since it wasn't inserted into the database due to the error.  (At least it does in other very similar cases that I tried.)  If it is not NEW, some other error must have occurred.
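
Against the EntryProg example from earlier in the thread, that guard might look roughly like this.  It is only a sketch, and it assumes the failed record really is still AVAILABLE and NEW in the caller's tt buffer - one of the assumptions called out above:

   catch err1 as Progress.Lang.Error :
      IF AVAILABLE tt AND NEW tt THEN
         delete tt.            /* discard the record that never made it into the unique index */
      ELSE
         UNDO, THROW err1.     /* some other failure - pass it along untouched                */
   end.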

I don't understand, though, why you would not want to handle the error (delete the record) already inside the called program where the error occurred.  Doing it any other way breaks the separation of concerns between the programs, as you mentioned.  I would take that as a good reason for handling it there.  And afterwards you can rethrow the error, if you still want to signal the caller about it too, or go on to some more general error handling.  Isn't that what rethrowing errors is good for - doing whatever cleanup really needs to be done locally?  I would put the (inner) handling as close as possible to the assignment of the field with the constraint, where this particular error is expected to occur, and then that would also avoid unrelated errors from other statements landing in the same catch.

Or just add a check that your field value is unique before assigning the value to the record, if you don't want to trigger error handling.  Since it is a temp-table, there should be no concurrency issues with doing that.
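
For the HelperProg loop shown earlier, that check could be as small as the following sketch (here the duplicate is simply skipped; throwing a business error instead would work just as well):

   DO v_Loop = 0 TO 19:
      IF NOT CAN-FIND(tt_help WHERE tt_help.c1 = STRING(v_Loop MODULO 10)) THEN
      DO:
         create tt_help.
         assign
          tt_help.c1 = STRING(v_Loop MODULO 10)
          tt_help.c2 = "1".
      END.
   END.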

Or enable UNDO for the temp-table, if you really want the system to get rid of the failing record automatically for you. What is the system supposed to do with the illegal record you created, if it is not allowed to UNDO it? It will stick around until you delete it or change the value.

Letting EMPTY TEMP-TABLE ignore the illegal record in the buffer sounds like a good idea, though.

Or maybe failed records should stop trying to insert themselves into the database again and again? Don't try again until some new assignment is done or an explicit RELEASE statement is encountered, and otherwise just let the record be discarded when it goes out of scope.

(Another option might even be to throw away the failing record immediately when raising the error, even without UNDO, just letting the buffer become empty (NOT AVAILABLE). But I guess that would be too aggressive, given existing ABL programs that would not expect that, or that want to see what was in the buffer when handling the error.)

Posted by frank.meulblok on 12-Dec-2019 15:57

>> Deleting the default record buffer in a catch block is also a very questionable workaround...  especially if you are doing it in a generic catch block ... in an outer scope that is not supposed to know about the internal implementation details of any methods that are called.

Any questionable side effects you risk introducing can be contained by keeping the CREATE in its own DO block with its own catch.  That's pretty much the only way to be certain which buffer you need to delete the record from, and to get rid of the problematic record before any other code can run into it.

Kind of a pain to make sure that's done everywhere though, and your code won't look any cleaner/more readable either.
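
A rough sketch of that pattern, dropped into the HelperProg loop from earlier (v_CreateError is just a local name; the RELEASE is there to force the unique-index check inside the guarded block rather than at some later flush):

   DO v_Loop = 0 TO 19:

      DO ON ERROR UNDO, THROW:
         create tt_help.
         assign
          tt_help.c1 = STRING(v_Loop MODULO 10)
          tt_help.c2 = "1".
         RELEASE tt_help.

         CATCH v_CreateError AS Progress.Lang.Error:
            IF AVAILABLE tt_help THEN
               DELETE tt_help.          /* we know exactly which buffer holds the bad record */
            UNDO, THROW v_CreateError.  /* the caller still sees the original error          */
         END CATCH.
      END.

   END.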

Posted by Brian K. Maher on 12-Dec-2019 16:10

David,
 
Did you ever try using the -nodupttidxerror startup parameter?
 
Brian

Posted by dbeavon on 12-Dec-2019 16:51

@ske

>> I don't understand, though, why you would not like to handle the error (delete the record) already inside the called program where the error occurred. Doing it any other way breaks the separation of concerns between the programs, as you mentioned. I would take that as a good reason for handling it there. And afterwards you can rethrow the error, if you still want to signal the caller about it too or go on to some more general error handling. Isn't that what rethrowing errors is good for?

You are right that if I had to micromanage things, then I should do it in the program that is creating the TT record in the first place.  (I.e. the micromanagement should probably be done by a catch block within the "DO v_Loop" of the HelperProg, which is contextually the part of the code that "knows best" about that particular TT record!)  However, I don't want to micromanage anything, or treat one type of error differently than another.  For that matter, creating new TT records might cause the files in the SESSION:TEMP-DIRECTORY to grow too big and, in an unlikely worst case, run the disk out of free space.  Should I be specifically checking for that particular type of error too, while I'm at it?  Sorry to be extreme, but you get the idea.  I only want to CATCH business-logic-related errors and let the technical cruft take care of itself (or get handled as generically as possible with a simple user message).

What I am really after is some sort of a "generalized" convention or rule whereby I will start remembering to empty all the static TT/DS data in all of my CATCH blocks, especially when they are no longer relevant in light of the error.

@Brian

Based on the KB ( knowledgebase.progress.com/.../P43686 ) that fix appears to be related to entirely duplicate indexes.  I don't think it is related to duplicate records in a single unique index.

Posted by Brian K. Maher on 12-Dec-2019 16:58

Give it a try anyways.  What will it hurt?

Posted by Laura Stern on 12-Dec-2019 17:11

Why aren’t you validating the buffer before the AVM tries to commit it to the TT?  If it fails, you delete it or ask the user for new values, or whatever.  No error will get raised.  Or what else would you do with the bad record?  I don’t really understand your use case.  Why would you have an updatable table with NO-UNDO and not ensure that valid records are stored?
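
One reading of that suggestion, sketched against the HelperProg loop: validate the buffer right after the ASSIGN.  This is a sketch only and assumes the VALIDATE statement performs the unique-index check for a temp-table buffer; NO-ERROR keeps the failure out of structured error handling so it can be dealt with on the spot:

      create tt_help.
      assign
       tt_help.c1 = STRING(v_Loop MODULO 10)
       tt_help.c2 = "1".

      VALIDATE tt_help NO-ERROR.
      IF ERROR-STATUS:ERROR THEN
      DO:
         DELETE tt_help.   /* the duplicate never lingers in the buffer */
         /* ...prompt for new values, skip the row, or throw a business error here... */
      END.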

Posted by dbeavon on 12-Dec-2019 17:12

>> What will it hurt?

I don't really want it to work... then I'd have to consider the option of using an undocumented client session parameter in production.

No, it didn't seem to work, even though Process Explorer confirmed that the parameter was in effect on the command line.

Posted by dbeavon on 12-Dec-2019 17:23

>> what else would you do with the bad record?

I really don't mind doing cleanup work, but I was looking for a higher-level cleanup operation (CLEAR/EMPTY/UNBIND).  It seems odd if the required cleanup involves deleting the default buffer from a separate scope or from an entirely different program.  

What I am really after is some sort of a "generalized" convention or rule whereby I will start remembering to CLEAR/EMPTY/UNBIND all the static TT/DS data in all of my CATCH blocks, especially when they are no longer relevant in light of the error.
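
For what it's worth, that convention could be wrapped in a small helper so a generic catch block only needs the temp-table handle.  This is purely a sketch: the procedure and parameter names are made up, and it assumes the handle-based BUFFER-DELETE() stops the re-validation the same way the DELETE statement does in bronco's example.

   PROCEDURE ResetTempTable:
      /* Hypothetical helper: clear a TT from a catch block even when its    */
      /* default buffer still holds a record that violates a unique index.   */
      DEFINE INPUT PARAMETER p_hTempTable AS HANDLE NO-UNDO.   /* TEMP-TABLE handle */

      DEFINE VARIABLE v_hBuffer AS HANDLE NO-UNDO.

      v_hBuffer = p_hTempTable:DEFAULT-BUFFER-HANDLE.
      IF v_hBuffer:AVAILABLE THEN
         v_hBuffer:BUFFER-DELETE().        /* drop the offending record first...            */
      p_hTempTable:EMPTY-TEMP-TABLE().     /* ...so emptying no longer hits the bad record  */
   END PROCEDURE.

   /* e.g. from a generic catch block:  RUN ResetTempTable (TEMP-TABLE tt:HANDLE). */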

Given the feedback, I think the best option is to validate the buffer explicitly and react to errors with a local CATCH.  Or else I can write the additional logic to check for a duplicate before the record is even added (and throw my own customized error at that point).

This thread is closed