How Progress maintains the activity counters like record reads

Posted by George Potemkin on 20-Nov-2019 20:08

It’s a pure curiosity question, not a real issue!

How does Progress maintain the activity counters like, for example, record reads? Obviously there are fields in database shared memory. Any process can increase the values of these fields without using locks/latches, unlike, for example, when it increments a sequence’s value. Does the operating system use some trick to queue the update requests? Or is the likelihood of simultaneous updates negligibly small, so it can be ignored for the properties where we don’t need 100% accurate values?

Posted by Richard Banville on 20-Nov-2019 20:41

The majority of the stats collection is not performed under latch control.

In all our modern systems, an update to a 64-bit field is consistent; that is, the value will not be corrupted (unlike on 32-bit systems, where such an update is performed in several instructions by the OS: a low part and then a high part).

However, concurrent increments are not performed atomically, meaning that an increment could be lost.
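To illustrate the lost-increment case, here is a standalone C demonstration (not Progress code): two threads doing a plain, non-atomic increment on a shared 64-bit counter usually end up below the expected total, even though the stored value itself is never torn.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    static volatile uint64_t counter;       /* plays the role of a shared statistic */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                       /* load/add/store: not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 2000000; when increments overlap, some are lost,
           but the stored 64-bit value itself is never corrupted. */
        printf("counter = %llu\n", (unsigned long long)counter);
        return 0;
    }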

It was decided long ago that the performance cost of making every statistic update atomic outweighs the benefit of having the counters be 100% accurate.

We could implement atomic statistics without latching; however, the performance of this is still relatively poor, and there is no visibility into the cost. (It is not as costly as a latch, but with a latch you at least have insight into the latch activity.)

An OS atomic increment internally implements a spin loop for the case where the value being incremented changes out from under the currently executing thread.

The high-level operation is as follows (a minimal sketch follows the steps below):

1. Attempt to increment a variable using the current value just retrieved.

2. The return code from the increment indicates whether the increment occurred successfully or whether the value of the variable changed and the increment therefore failed.

3. If the increment was not successful, go back to step 1 (spin).
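A minimal C11 sketch of such a compare-and-swap retry loop (illustrative only, not the actual Progress implementation; the counter name is invented):

    #include <stdatomic.h>
    #include <stdint.h>

    /* Hypothetical shared-memory statistics counter. */
    static _Atomic uint64_t record_reads;

    static void stat_increment(_Atomic uint64_t *ctr)
    {
        uint64_t current = atomic_load(ctr);           /* step 1: fetch current value */
        /* step 2: try to install current + 1; on failure the CAS reloads
           'current' with the value that changed under us and we retry (step 3). */
        while (!atomic_compare_exchange_weak(ctr, &current, current + 1))
            ;                                          /* spin */
    }

    /* usage: stat_increment(&record_reads); */

On x86 an atomic fetch-and-add compiles down to a single locked instruction, while on load-linked/store-conditional architectures it is typically implemented as a retry loop much like the one above.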

All Replies

Posted by kirchner on 20-Nov-2019 20:40

I don't know the answer to your question, never tried to figure it out, but I guess they use atomic operations offered by the hardware, probably abstracted by the OS, something like the Interlocked API on Windows: docs.microsoft.com/.../interlocked-variable-access
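For reference, a minimal C sketch of what that looks like on Windows (InterlockedIncrement64 is the documented call; the counter name is just an invented stand-in):

    #include <windows.h>

    /* Invented stand-in for a 64-bit statistics counter in shared memory. */
    static volatile LONG64 record_reads;

    void count_record_read(void)
    {
        /* Atomically adds 1 and returns the new value; the hardware
           guarantees that no concurrent increment is lost. */
        InterlockedIncrement64(&record_reads);
    }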



Posted by gus bjorklund on 08-Dec-2019 23:48

george:

As richb has said,

> "The majority of the stats collection is not performed under latch control. . . . However, concurrent increments are not performed atomically, meaning that an increment could be lost."

but there is a bit more to the story. many (but not all) of the counters /are/ incremented under protection of a latch. this happens because often a latch is being used to access a data structure and some counters are incremented while the latch is held. some of the counters are part of the data structure protected by the latch.

so, even though there is no attempt at atomicity or isolation when incrementing or adding to the counters, they are covered by latches being used for another purpose. a simple example: when the before-image log activity counters are being updated for the number of notes written and the number of bytes written, the BIB latch is being held to lock the current bi buffer header. there are many similar examples, as well as many where there is no cover provided.
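A rough sketch of that pattern (invented names, not actual Progress source), where the counter updates simply ride along under a latch that is already held to protect the data structure:

    #include <pthread.h>
    #include <stdint.h>

    /* Invented stand-ins for the current bi buffer header and the BIB latch. */
    struct bi_buffer {
        pthread_mutex_t bib_latch;   /* latch protecting the buffer header */
        uint64_t        bi_notes;    /* activity counter: notes written    */
        uint64_t        bi_bytes;    /* activity counter: bytes written    */
        /* ... real buffer header fields would go here ... */
    };

    void bi_write_note(struct bi_buffer *bi, uint64_t note_len)
    {
        pthread_mutex_lock(&bi->bib_latch);   /* held to update the buffer header */
        /* ... the note itself would be copied into the buffer here ... */
        bi->bi_notes += 1;                    /* plain, non-atomic increments,    */
        bi->bi_bytes += note_len;             /* but safe while the latch is held */
        pthread_mutex_unlock(&bi->bib_latch);
    }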


This thread is closed