Lock Table Overflow

Posted by ericg on 01-Mar-2010 14:18

Hi there.

I receive an error of "Lock table overflow, increase -L on server (915)" when running an ABL transaction that deletes, to be precise, 2951 records.

Also, for each of those deletes, a new record is created in my custom audit table for each field in the view, which comes to 29510 new audit records.

If I do not include the new audit records in the transaction then I do not get the error.

My -L parameter is the default 8192.

If I include the new audit records in the transaction and set -L to 32768 (8192 times 4) as a test then I do not get the error.

Is enlarging the -L my only option? What is the cost of doing this?

Thanks for your help.

All Replies

Posted by Admin on 01-Mar-2010 14:32

Either enlarge -L or split the whole operation into multiple transactions.
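For reference, -L is a server startup parameter, so enlarging it means restarting the broker. A sketch (the database name here is just a placeholder):

```
proserve mydb -L 32768
```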

For deletions it's usually rather easy to do. Just count to (let's say) 100 and then leave the loop (a FOR EACH?); make sure the loop is a transaction block of its own.
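A minimal sketch of that chunked-delete pattern, assuming a hypothetical Customer table and Obsolete flag (each REPEAT iteration is its own transaction, so at most 100 deletes are in flight at once):

```
/* Delete in batches of 100; each REPEAT iteration is one transaction. */
DEFINE VARIABLE iCount AS INTEGER NO-UNDO.

REPEAT TRANSACTION:
    iCount = 0.
    FOR EACH Customer EXCLUSIVE-LOCK
        WHERE Customer.Obsolete = TRUE:
        DELETE Customer.
        iCount = iCount + 1.
        IF iCount >= 100 THEN LEAVE.  /* end this transaction's chunk */
    END.
    IF iCount = 0 THEN LEAVE.  /* nothing left to delete */
END.
```

Any audit-record creates belonging to a delete would go inside the same iteration, so each chunk stays atomic on its own.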

The cost: each lock table entry takes just a handful of bytes of shared memory, so enlarging -L itself is cheap. On the other hand, large transactions may have a huge impact on the BI file (size and activity), and that may degrade performance. Very large transactions are usually a sign of an accidentally over-scoped transaction (e.g. a transaction opened in the login procedure to update the user's last login date, so that potentially everything happening in the client session runs in a single transaction).

That's why it's often wise to look at -L carefully and not set it to 1,000,000 or more just because you can.

A large lock table also means that a large number of records CAN be locked, which may disturb other users' operations.

On the other hand, when transaction integrity is required, you need to do a transaction.

But if deleting is your only concern (like a cleanup), do it in smaller chunks.

Posted by ericg on 01-Mar-2010 16:52

Thanks. To see the number of lock table entries, do I use PROMON, select Activity, and read the number under Record Locks? Is there a better way when checking for a particular transaction or user?
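One option for per-user figures is the _Lock virtual system table, which can be read from ABL. A sketch, assuming a hypothetical user number of 5 (note that scanning _Lock can be slow when the lock table is large, and the WHILE idiom stops at the first unused entry, where _Lock-Usr is unknown):

```
/* Count current lock table entries held by one user. */
DEFINE VARIABLE iLocks AS INTEGER NO-UNDO.

FOR EACH _Lock NO-LOCK WHILE _Lock._Lock-Usr <> ?:
    IF _Lock._Lock-Usr = 5 THEN   /* hypothetical user number */
        iLocks = iLocks + 1.
END.

DISPLAY iLocks.
```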

Posted by Thomas Mercer-Hursh on 01-Mar-2010 17:08

The real question here is: what are the business rules? If the business rule is that you need to delete everything as a single operation and the audit creation is part of that task, you really have no choice but to bump up the lock table. You have already computed what you logically need and can prove that it works. I've had a similar situation where a customer would receive a check that paid off 7000 invoices. In the context of the application, that really needed to be a single unit of work, and each invoice involved meant three records locked, so we had to set -L to something like 25,000 to provide overhead for other users.

But if that business rule doesn't exist, then you can break the operation down into smaller batches as Mike suggests, or even down to where the transaction scope is one deletion and its associated audit trail. The batches will be a bit faster, but the single-record scope gives you the ability to restart on an individual record in the event of a problem, and for a couple thousand records total the performance difference may not be material. Doing anything other than the whole batch at once implies that it is OK to restart the process to finish. That should be fine if you are doing something like deleting records from a prior period, but may not be fine if the 2900+ records are all tied to a specific base record and there would be relational integrity issues if the process aborted in the middle.

I don't think you need any fancy metering here.  You know already how the counts work, so the question is what rules do you need to follow.

Posted by gus on 03-Mar-2010 12:59

Deletion does not use many locks. When a row is deleted, it is replaced with a "placeholder" that reserves its storage location in case the deleting transaction rolls back. This is done so that if the delete is rolled back and the row restored, it will have the same rowid as before. Because the row has been deleted, we do not need to keep the lock for it so the lock table entry is removed right after the row is deleted.

So deleting 10,000 rows will not require 10,000 locks, only 1.

Creating and updating rows is another matter though. Such operations do require the locks to be in place until the transaction ends.

-gus

Posted by jmls on 09-Sep-2014 16:14

Bugger off. Most times it's bad db design....

[collapse]
On 9 Sep 2014 21:41, "Paul Koufalis" <bounce-pkoufalis@community.progress.com> wrote:
Reply by Paul Koufalis

-L 100K

About 1500 concurrent users

We suspect the offending job was a batch process that will refire so I did exactly what you suggest in hope of catching it the next time around.

Then again, when one of the programmers heard our theory his eye twitched and he went back to his desk.

UPDATE: The twitch was something he saw in the batch scheduler. Someone scheduled a job with no start/end date.

Bad code once again...and the programmers wonder why we always say it's their fault.

Paul


[/collapse]

Posted by ChUIMonster on 09-Sep-2014 16:20

Who let a /coder/ into the room?

Yeesh, I thought they had security around here!



Posted by Tim Kuehn on 09-Sep-2014 16:23

Says the DBA using the product of a legion of CODER's work. 

Talk about lack of appreciation.... sheesh!

::)

Posted by jmls on 09-Sep-2014 16:25

Yah, exactly. You db guys think that you have it all sussed. No, sireee


Posted by ChUIMonster on 09-Sep-2014 16:27

Perhaps we should gang up on the common enemy -- users!


Posted by jmls on 09-Sep-2014 16:30

The enemy of my enemy ....


Posted by TheMadDBA on 09-Sep-2014 16:37

What about us poor souls that are DBAs and coders...

This thread is closed