Say you are in a very large transaction and you have a create or write trigger on a table that writes a line such as "Created blah ID:123" to a text file.
Later on in the routine a condition is met that runs UNDO, RETURN and the entire transaction is undone (which is the desired behaviour).
How can you track what work the undo is doing, so you can log "Undo blah ID:123"?
Thanks,
Justin
1. Use a no-undo variable to indicate state. It will not be undone and can be used after the block for logging.
2. Use structured error handling to pass messages back to containing blocks.
3. Use a log utility with no-undo TTs to record messages.
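As a rough sketch of options 2 and 3 (the table name, sequence, and log text below are illustrative, not from the original post), a NO-UNDO temp-table survives the UNDO and a CATCH block on the transaction can report what was backed out:

/* Sketch only: Customer, NextCustNum and the messages are made up. */
DEFINE TEMP-TABLE ttLog NO-UNDO
    FIELD logSeq  AS INTEGER
    FIELD logText AS CHARACTER
    INDEX ixSeq IS PRIMARY logSeq.

DEFINE VARIABLE iSeq AS INTEGER NO-UNDO.

DO TRANSACTION ON ERROR UNDO, THROW:

    CREATE Customer.
    ASSIGN Customer.CustNum = NEXT-VALUE(NextCustNum)
           Customer.Name    = "Blah".

    iSeq = iSeq + 1.
    CREATE ttLog.
    ASSIGN ttLog.logSeq  = iSeq
           ttLog.logText = "Created Customer ID:" + STRING(Customer.CustNum).

    /* ... more creates and writes ... */

    /* Impossible business case found: undo the whole transaction. */
    UNDO, THROW NEW Progress.Lang.AppError("Missing order date", 550).

    CATCH e AS Progress.Lang.AppError:
        /* The database work is gone, but ttLog is NO-UNDO, so it
           still lists everything that was just backed out. */
        FOR EACH ttLog BY ttLog.logSeq:
            MESSAGE "Undone:" ttLog.logText.
        END.
    END CATCH.
END.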
I get the feeling there is no way... Basically I am processing a very large XML request and the program creates records as it goes. It may have modified and/or created hundreds of records across 50 different tables by the time it hits a node with an impossible business case. It then backs out everything and throws the error.
So option 1 is easy and already done: if the error string is not blank, everything must have been undone.
For option 2, there is only a single error. Ditto for option 3.
I guess I am looking for a session-level utility that I can switch on and that tracks all table modifications until I turn it off.
I'm thinking this is going to have to be some sort of global shared trigger class that I write myself. Even if something like this did exist, it would only be able to supply ROWIDs, not unique table keys, and those ROWIDs would be useless since they would no longer exist after the undo.
So, if you have done #1, what's the problem? 1, 2, and 3 are alternatives, not a sequence.
1 only tells me the block was undone. Not what was undone.
Nothing fires during an undo. I create a customer record and the trigger fires, but a missing date on the order causes the customer record to be undone (deleted). Why no delete trigger? I need something to fire saying 'the customer record has been deleted' or something! Anything!
To track detailed status, use a no-undo log table and create messages before and/or after any event you want to track. See my old http://www.oehive.org/ExceptionClass for inspiration.
For a simple solution, set your no-undo variable to "About to update customer" right before you update customer. If the customer update fails and the whole thing undos, then that message will tell you that the failure was in the customer update.
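A bare-bones illustration of that pattern (Customer, CustNum and the messages are just placeholders):

DEFINE VARIABLE cState AS CHARACTER NO-UNDO.

DO TRANSACTION ON ERROR UNDO, LEAVE:
    cState = "About to update customer".
    FIND Customer WHERE Customer.CustNum = 123 EXCLUSIVE-LOCK.
    Customer.Name = "New name".

    cState = "About to update order".
    /* ... */
END.

/* cState keeps its last value even when the transaction is undone. */
MESSAGE "Last step attempted:" cState.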
I understand what you are trying to get at, but the event, or point of failure, isn't what I care about.
I only care about the work that was accomplished before the undo. Take a customer upload that loads customers from a file, where a customer trigger writes to a log file. Halfway through the load file an address is missing, your business layer catches the error, the requirement is to undo the entire transaction, and you need to know which records are being undone. That seems trivial with one table: store the updates in a no-undo temp-table. But let's throw a wrench into the mix and say a single customer record creates or modifies an untold number of other records throughout the system, and all of those tables write to a log file via their own triggers. Now you need to log the undo of all of those records.
The no-undo log table can be used to keep as much information as you want. You could also write it to an external file. That will give you a long list of what you tried to do before the failure occurred.
Of course, you should be sure that you really want this to be one transaction.
And there is something to be said in cases like this for processing all of the data into an intermediary form such as a ProDataSet and then, only after having validated all of the data and produced any logs desired, updating the database from it in a single brief transaction. One of the advantages of this approach is that you could elect to continue processing after an error had occurred and thus potentially detect multiple errors in a single pass.
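A rough sketch of that staged approach, using a plain temp-table in place of a full ProDataSet (ttCust, Customer and the validation rule are invented for illustration):

/* Stage 1: parse the whole XML request into a NO-UNDO temp-table
   (loading code omitted) and validate every row, collecting errors. */
DEFINE TEMP-TABLE ttCust NO-UNDO
    FIELD CustNum  AS INTEGER
    FIELD Name     AS CHARACTER
    FIELD ErrorMsg AS CHARACTER.

FOR EACH ttCust:
    IF ttCust.Name = "" THEN
        ttCust.ErrorMsg = "Name is required".
END.

/* Stage 2: only if everything is valid, touch the database,
   in one short transaction. */
IF CAN-FIND(FIRST ttCust WHERE ttCust.ErrorMsg <> "") THEN
DO:
    FOR EACH ttCust WHERE ttCust.ErrorMsg <> "":
        MESSAGE ttCust.CustNum ttCust.ErrorMsg.
    END.
END.
ELSE
DO TRANSACTION:
    FOR EACH ttCust:
        CREATE Customer.
        BUFFER-COPY ttCust EXCEPT ErrorMsg TO Customer.
    END.
END.

Because the validation runs entirely against the temp-table, every problem in the request can be reported in one pass before a single database record is written.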
Instead of the database trigger writing to a log file, why not make it write to a log table instead, using the DBTASKID as a key? If the transaction undoes, all db activity is automatically reversed out, so there won't be any logs to write.
If the transaction completes, use the log table data to write to the appropriate log files.
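A sketch of what such a trigger might contain (ActivityLog is an invented log table with TaskId, TableName and LogText fields, and "mydb" stands in for the logical database name):

/* e.g. a WRITE trigger on Customer; ActivityLog is an invented table. */
TRIGGER PROCEDURE FOR WRITE OF Customer.

CREATE ActivityLog.
ASSIGN ActivityLog.TaskId    = DBTASKID("mydb")
       ActivityLog.TableName = "Customer"
       ActivityLog.LogText   = "Wrote Customer ID:" + STRING(Customer.CustNum).

Because the ActivityLog rows are created inside the same transaction, an UNDO removes them along with everything else; if the transaction commits, they remain and can be dumped to the external log files afterwards.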
Julian
--
Julian Lyndon-Smith
IT Director, Dot R Limited
"I don’t care if it works on your machine! We are not shipping your machine!”
Follow Dot R on http://twitter.com/DotRlimited
This is close! This would be perfect for a single transaction, but now let's say I have transactions all over the system. For this I need a 'transaction complete' trigger. My goal is to not modify all 500 .p's that could start or end a transaction.
Some sort of .p that fires every time a transaction is completed. It could then process the log table. Does functionality like this exist?
Hmm... or a server process that monitors the log table would work as well, though that is more of a queue solution than an online solution.
I don't see the problem. I have talked about how you can tell whether the transaction completed and, if it did not, where it failed. That alone is plenty to allow you to write any logs or reports you want based on whether the transaction completed. If you want more detail, then use the no-undo TT approach to record detailed log messages and do what you want with those at the end. It is a piece of cake, in fact, to record both successful and unsuccessful activity and then decide at the end whether to post it to an error log or a success log. Put the processing in a FINALLY block if you want to be sure it will be done.
What's missing?
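A hedged sketch of that structure (the file names and log text are illustrative): a NO-UNDO temp-table collects the messages, a CATCH records the failure, and a FINALLY posts everything to the appropriate log regardless of whether the transaction committed.

DEFINE TEMP-TABLE ttLog NO-UNDO
    FIELD logText AS CHARACTER.

DEFINE VARIABLE lFailed  AS LOGICAL   NO-UNDO.
DEFINE VARIABLE cLogFile AS CHARACTER NO-UNDO.

DO TRANSACTION ON ERROR UNDO, THROW:

    /* ... create/update records, writing a ttLog row for each one ... */
    CREATE ttLog.
    ttLog.logText = "Created customer ID:123".

    CATCH e AS Progress.Lang.Error:
        lFailed = TRUE.
    END CATCH.

    FINALLY:
        /* Runs whether the transaction committed or was undone;
           the NO-UNDO temp-table still holds every message. */
        cLogFile = IF lFailed THEN "undo.log" ELSE "success.log".
        OUTPUT TO VALUE(cLogFile) APPEND.
        FOR EACH ttLog:
            PUT UNFORMATTED ttLog.logText SKIP.
        END.
        OUTPUT CLOSE.
    END FINALLY.
END.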
I totally appreciate the input and ideas. The word I keep using is 'trigger', and that word gets ignored, probably because nothing like it exists.
We keep reverting back to how we can programmatically check whether a transaction passed or failed. I have software with transactions all over the place and it is not feasible to add code to the end of all of them. Using a replication table or a no-undo temp-table of sorts that is updated by the common trigger procedure (since all the tables I care about have a create and write trigger and all use a common trigger class) will give me the changes I care about, whether the transaction passes or fails.
Now, when the transaction is done, fire a program, without modifying every program that has a transaction.
ON COMPLETE OF TRANSACTION ANYWHERE
DO:
/* insert god code here */
END.
This would be my dream trigger, does anything close to this exist?
Sorry, these are the kinds of straws one needs to grasp at when dealing with millions of lines of code started in the very early '90s and originally programmatically converted from COBOL.
No, there is no ON TRANSACTION-COMPLETE ANYWHERE.
But, I don't know what magical thing you would expect it to do. If it is anywhere, what is it supposed to know about? And, wasn't your question about ON TRANSACTION-UNDO ANYWHERE?
Even if it existed, I don't think it would do anything for you. I understand perfectly what your real problem is since my business is in trying to help people in your position. There are stepwise approaches where you can make small incremental gains in code that you touch ... but, of course, that isn't going to lead to a dramatic overhaul of the whole application. If there is a budget, then there are a variety of approaches to incremental or wholesale modernization ... but no magic bullets. Check out http://www.cintegrity.com/content/Model-Code-ABL and some of the transformation whitepapers on my website for ideas.
We are in the process of converting our code base to be OERA compliant, but it is a background task.
Regardless, that was the answer I was looking for.
No session trigger on complete of a transaction.
No session trigger on undo of a transaction.
No trigger on undo of a record.
A trigger on any of these would have met my requirement. The 3rd one was my original question.
We are in the process of converting our code base to be OERA compliant, but it is a background task.
Good for you. Let me know if I can provide some guidance or tooling to help you on your way. It is a very big and thus potentially very expensive job, but a clear vision of where one is going and optimum use of tools can reduce that expense dramatically.
A trigger on any of these would have met my requirement. The 3rd one was my original question.
I might suggest that there is no trigger in large part because there are a number of well-established techniques for accomplishing the purpose in a well-structured and controllable way, and it is not clear that having a trigger would actually add to that capability. For example, when exactly does the undo trigger fire? When the variables still have their values, or when the values have been undone? The former would be desirable if you wanted to log any offending values, but would create flow-of-control ambiguities: what happens if you fix the problem in the undo trigger? After the undo would be safer, but would leave one without needed information. And how would one know what problem was causing the undo ... presumably from the error, but then we have the CATCH and FINALLY structures for handling those nicely ... indeed, CATCH comes pretty close to being a trigger on the record undo; it just isn't an ANYWHERE trigger, but needs to apply to a specific block where you have a specific responsibility.
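To illustrate the timing point with a small sketch (variable names invented): by the time a CATCH runs, the block has already been undone, so ordinary undoable variables have lost their values while NO-UNDO ones have not.

DEFINE VARIABLE iUndoable AS INTEGER.          /* undone with the block */
DEFINE VARIABLE iKept     AS INTEGER NO-UNDO.  /* survives the undo     */

DO ON ERROR UNDO, THROW:
    ASSIGN iUndoable = 42
           iKept     = 42.
    UNDO, THROW NEW Progress.Lang.AppError("boom", 1).

    CATCH e AS Progress.Lang.AppError:
        /* The block was undone before this runs:
           iUndoable has reverted to 0, iKept is still 42. */
        MESSAGE iUndoable iKept.
    END CATCH.
END.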
This is getting overly complicated.
What the OP is after is "how can I tell if a tx got backed out?"
The answer is in how a TX behaves - all db updates in a TX are either written or not written to the db.
I'd suggest using session triggers to capture the required db table change information. Part of that capture process would be tracking the db's DBTASKID and writing it to a NO-UNDO TT, as well as to a db table somewhere. If the DBTASKID changes and the db table record for the prior DBTASKID is still there, the prior TX went through, and the 'log' TT can be emptied (or what have you). If the prior DBTASKID isn't in the db table, the TX got backed out, and the code can then do whatever it needs to do with the tracking TT's information.
The one place this falls down is when shutting the application down because there's no "follow-on" TX activity to trigger the code to check if the prior DBTASKID got committed or not. Something would need to be written to force the trigger code to do a final check on the last TX before the application closed.
For a single-db application, this is the course I'd recommend taking.
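An outline of that check, assuming the session trigger code calls into a helper held in a persistent procedure or class; TxMarker (a db table with a TaskId field), ttChange, the "mydb" logical name and the log file are all invented for illustration:

DEFINE TEMP-TABLE ttChange NO-UNDO
    FIELD TaskId    AS INTEGER
    FIELD ChangeTxt AS CHARACTER.

DEFINE VARIABLE iPrevTask AS INTEGER NO-UNDO.

PROCEDURE TrackChange:
    DEFINE INPUT PARAMETER pcText AS CHARACTER NO-UNDO.
    DEFINE VARIABLE iTask AS INTEGER NO-UNDO.

    iTask = DBTASKID("mydb").

    IF iPrevTask <> 0 AND iTask <> iPrevTask THEN
    DO:
        /* A new transaction has started: did the previous one commit? */
        IF CAN-FIND(FIRST TxMarker WHERE TxMarker.TaskId = iPrevTask) THEN
            EMPTY TEMP-TABLE ttChange.   /* committed: discard the tracking rows */
        ELSE
        DO:
            /* Backed out: the NO-UNDO TT still lists the undone work. */
            OUTPUT TO VALUE("undo.log") APPEND.
            FOR EACH ttChange:
                PUT UNFORMATTED "Undone: " ttChange.ChangeTxt SKIP.
            END.
            OUTPUT CLOSE.
            EMPTY TEMP-TABLE ttChange.
        END.
    END.

    IF NOT CAN-FIND(FIRST TxMarker WHERE TxMarker.TaskId = iTask) THEN
    DO:
        CREATE TxMarker.                 /* rolls back with the transaction */
        TxMarker.TaskId = iTask.
    END.

    CREATE ttChange.                     /* NO-UNDO: survives a rollback */
    ASSIGN ttChange.TaskId    = iTask
           ttChange.ChangeTxt = pcText.

    iPrevTask = iTask.
END PROCEDURE.

As noted above, the very last transaction of the session still needs an explicit final check (for example from a destructor or shutdown hook), since no follow-on trigger call will ever examine it.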
(BTW - I'm also a consultant and help people with issues like this for a living.)
Hi Tim,
Yeah, this is the type of approach I have taken. I handled the final transaction by adding a check to the object's destructor.
This will have to do. The only other option is a queue approach with another process monitoring a table. The current solution lets me use only temporary space. One transaction behind, and hopefully no one notices!
Seems more complicated and less general purpose than what we already discussed. In particular, if you encapsulate the no-undo TT logging mechanism, you have something with very broad applicability which can keep track of all kinds of processes, regardless of transaction scope. See the ABL2UML code at OE Hive for an example of it in use.