I'm getting this error (Lock table overflow, increase -L on server (915)) when I try to lock specific records in a specific table.
I'm just using a "find first" with "exclusive-lock no-error no-wait", and the error still comes up, which is odd since I should only be locking one record.
Other records in the same table lock just fine, but certain specific ones won't.
I looked at the _Lock virtual system table and it's empty.
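Here's roughly how I checked it, in case it matters (a quick sketch; I leave at the first unused slot since scanning the whole _Lock table can be slow):

for each _Lock no-lock:
    /* unused slots report the unknown value for the user;
       stop there rather than walking every slot */
    if _Lock._Lock-Usr = ? then leave.
    display _Lock._Lock-Usr _Lock._Lock-Table _Lock._Lock-RecId _Lock._Lock-Flags.
end.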
Any ideas?
Thank you
I figured it out:
I was doing:
find first tablename where string(rowid(tablename)) = 'somerowid' exclusive-lock no-error no-wait.
It is apparently not a good idea to do this. The right way would be:
find first tablename where rowid(tablename) = to-rowid('somerowid') exclusive-lock no-error no-wait.
The issue here is that when you do a find with exclusive-lock that requires a scan, i.e. one that can't be resolved with an index, the DB engine locks every record between the cursor's starting position and the record it eventually finds. That *can* be a *lot* of records! The solution is to do the find no-lock and then re-find with exclusive-lock, having already identified the record.
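A minimal sketch of that pattern (tablename and cSomeRowid are placeholders):

find first tablename where rowid(tablename) = to-rowid(cSomeRowid) no-lock no-error.
if available tablename then do:
    /* re-find the already-identified record to upgrade the lock */
    find current tablename exclusive-lock no-wait no-error.
    if locked tablename then
        message "record is locked by another user" view-as alert-box.
end.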
That said, as you have discovered, your source problem is doing a string comparison. It is that which forces the sequential search. Why save the rowid as a string?
Any "find" or "For each" statement which has a function on the table being looked up in will result in a table scan. For code which has an "exclusive-lock" spec, the engine will lock each record as it looks it up, and won't let the locks go until the TX ends.
So the OP's single "FIND" statement was doing a table scan and locking every record along the way, thus blowing through the "-L" limit.
Thanks for the info; it helps a lot.
As for saving the rowid as a string, I think it's just a bad habit in our group.
I suppose using the ROWID datatype would be more efficient.
Indeed. Why do two unnecessary type conversions, and why use a datatype that inherently takes more processing to handle?
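For example, keeping the rowid in its native type end to end (names are placeholders):

define variable rSaved as rowid no-undo.

find first customer no-lock no-error.
if available customer then
    rSaved = rowid(customer).  /* no string() conversion on save */

/* later: a direct rowid lookup, no to-rowid() conversion needed */
find customer where rowid(customer) = rSaved exclusive-lock no-wait no-error.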