Hi,
FOR EACH order EXCLUSIVE-LOCK:                    /* 1 */
    UPDATE order.field1.
    FOR EACH order-line OF order EXCLUSIVE-LOCK:  /* 2 */
        UPDATE order-line.field1.
    END.                                          /* end 2 */
END.                                              /* end 1 */
The transaction scope is on the for each order.
The record scope for Order is on the for each order.
The record scope for Order-line is on the for each order-line.
Example: for order 1 we have 300 order-lines.
=> Are we keeping all 300 order-lines exclusive-locked until we reach end 1 (the end of each order iteration)?
Or is each order-line's exclusive lock downgraded back to a share lock at end 2 (each iteration of the for each order-line)?
Example: 4 orders, each with 300 order-lines.
=> Per iteration of the for each order, are the exclusive locks on that order and all its order-lines released (downgraded back to share locks)?
If we put a do transaction around the complete code: are all exclusive locks (on the 4 orders and the 4 x 300 order-lines) kept until the end of the do transaction?
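In other words, the do transaction variant would look something like this (same pseudo field names as above):

```
DO TRANSACTION:
    FOR EACH order EXCLUSIVE-LOCK:
        UPDATE order.field1.
        FOR EACH order-line OF order EXCLUSIVE-LOCK:
            UPDATE order-line.field1.
        END.
    END.
END.
```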
Tkx,
Jac
Yes, the records will be locked from other users for the duration of the transaction.
If you declare a smaller transaction but lock a record before that transaction starts, or keep referencing a locked record after the transaction ends without re-finding it, then the record stays locked for as long as it is in scope.
In other words, the locks will be held for the longer of the transaction scope or the record scope.
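A minimal sketch of that "longer of the two scopes" rule (buffer and field names are illustrative):

```
DEFINE BUFFER bOrder FOR order.

FIND FIRST bOrder EXCLUSIVE-LOCK.  /* locked before the transaction starts */
DO TRANSACTION:
    bOrder.field1 = "x".
END.                               /* transaction ends here...             */
/* ...but bOrder is still in scope, so the lock is only downgraded to a
   SHARE-LOCK; it is not released until the buffer goes out of scope.     */
```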
A while ago I made an enhancement request for a startup parameter that would change the lock-downgrade behavior after a transaction. In my opinion, downgrading locks to no-lock (i.e. removing them) after the transaction, even when the record scope is larger than the transaction, would be much better behavior than downgrading to share-lock. In our application, leaving records share-locked after the transaction causes potential locking conflicts that are often very hard to debug. We have no use for share-locks; we use only exclusive-locks and no-locks in our queries.
That is not an unreasonable enhancement request for use cases like yours, but it would be global behaviour too.
Some other things you can already do: control buffer scope, RELEASE, FIND CURRENT NO-LOCK, and the -rereadnolock startup parameter.
None of those things do what you asked for but they are still handy when you need them.
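For example, RELEASE and FIND CURRENT NO-LOCK look roughly like this (table and field names are illustrative; -rereadnolock is a client startup parameter rather than a statement):

```
/* Downgrade the lingering SHARE-LOCK to NO-LOCK after the transaction: */
DO TRANSACTION:
    FIND FIRST order EXCLUSIVE-LOCK.
    order.field1 = "x".
END.
FIND CURRENT order NO-LOCK.

/* Or clear the record from the buffer entirely, which also drops the lock: */
DO TRANSACTION:
    FIND FIRST order EXCLUSIVE-LOCK.
    order.field1 = "y".
END.
RELEASE order.
```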
My instinct is to think that good programming practice eliminates the issue and that changing the behavior is likely to break a lot of existing code.
You are probably correct. Nevertheless the option may be worth having.
Yes, Thomas is right about good programming practice, and we use the techniques you, Gus, mentioned: control buffer scope, RELEASE, FIND CURRENT NO-LOCK, -rereadnolock. But nobody's perfect, and every now and then somebody forgets to release a lock. That's why a more general option would be welcome. Think garbage collection: you used to have to do it yourself (and sometimes forgot), now it's done automatically.
Like Thomas said, using a general option could break some existing code, but in our case I do not see a big risk and the possible problems would be easy to spot and fix. Much easier than the locking conflicts we have now.
I suspect that the problems one might encounter would be trickier than you expect, but I admit I am saying that without a specific example in mind. To me, the real point is that there is a right way to get the desired behavior and no global option is going to produce that. The best it can do is eliminate one of the symptoms which helps to identify that there is a problem needing fixing.
Chances are, for example, that 90% of the conflicts relate to just a few heavily used tables ... if only because the other tables are not heavily used and thus unlikely to produce simultaneous requests for the same record. Focus on reviewing the access to those tables and you can eliminate 90% of the problem with nothing like an application-wide review. Tools can facilitate this dramatically.