What measure can be used to show whether the database is hitting a bottleneck during periods of high create-record activity? During this event all users are impacted with slow database response time. What I am trying to get is visibility into the bottleneck. Is the issue at the DBMS or hardware level? I am trying to identify what DBMS adjustments can be made to help minimize the impact on other users during these periods of high load.
In this case, Resource Monitor shows that the most active disk I/O write session is prowin32.exe writing to 3 extents of the database. One is an index extent and another is a data extent.
OpenEdge Enterprise DB, 10.2B, Windows 2008 R2
Check:
promon/R&D/1/4/2. Status: Blocked Clients
promon/R&D/debghb/6/15. Status: Buffer Lock Queue
Take a few snapshots and check whether the same block(s) is mentioned on both screens.
Most likely the bottleneck is a root block of some index.
In any case, check whether promon reports any dbkeys from the list created by ObjectInfo.p:
ftp.progress-tech.ru/.../ObjectInfo.p
Regards,
George
Thank you for your replies. I did not check on blocked clients but I will the next time this case comes up.
I will verify the other settings asked about. I expect the page writers are active, and I expect the extents have pre-allocated space to avoid real-time extension.
If AI is enabled for your database, you can use the AI scans to find out which blocks were a bottleneck in the past. AiScanStat.p will create a few reports. Check *.AiScanStat.Dbkey.txt and sort it by the "Upd/sec" column (descending).
The BI file can be a bottleneck if it is not tuned for your workload, especially if the settings are at the defaults. What are your settings for:
- BI block size
- BI cluster size
- -bibufs
- BI empty buffer waits/sec (shown in promon R&D 2 5)
Also:
- AI block size
- -aibufs
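If you are not sure of the current values, proutil's describe option will report the BI block size, BI cluster size, and related settings; something like the following (the database name is a placeholder for your own):
    proutil mydb -C describe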
You haven't mentioned yet whether you're running page writers; that's important. Are you running OE Replication? If so, the target's configuration may also affect you.
"resource Monitor is showing the most active disk I/O write session is prowin32.exe to 3 extents of the database. 1 is the index extent and the other is a data extent."
What is the third?
Regarding the sequence block access being a bottleneck... I used to suggest that application developers implement an "advanced" usage of sequences: set a sequence's increment to, say, 100. Then when a session gets NEXT-VALUE(sequence), it can use the 100 values up to the new sequence value for the records it is going to create. This technique eliminates the bottleneck on the sequence block.
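A minimal ABL sketch of that approach, assuming a sequence named NextOrderNum whose increment has already been raised to 100 in the Data Dictionary (the sequence, variable, and function names here are made up for illustration):

    DEFINE VARIABLE iBlockTop AS INTEGER NO-UNDO INITIAL ?. /* highest value reserved so far */
    DEFINE VARIABLE iNextId   AS INTEGER NO-UNDO INITIAL ?. /* next value to hand out        */

    FUNCTION getNextId RETURNS INTEGER ():
        IF iNextId = ? OR iNextId > iBlockTop THEN DO:
            /* One hit on the sequence block reserves a batch of 100 values. */
            iBlockTop = NEXT-VALUE(NextOrderNum).
            iNextId   = iBlockTop - 99.
        END.
        iNextId = iNextId + 1.
        RETURN iNextId - 1.
    END FUNCTION.

Each session then touches the sequence block only once per 100 records it creates. Values a session reserves but never uses are simply skipped, so expect gaps in the numbering.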
Rob, thank you for asking. The third extent was another extent of the index area.
Thank you all for the input. Here are a few answers:
Replication is not in use.
3 APWs
1 BIW
1 AIW
Before-Image Block Size: 8192.
Before-Image Cluster Size: 524288.
(-bibufs): 25.
After-Image Block Size: 8192.
(-aibufs): 25.
The index extent is not growing on demand.
The data extent involved is the variable length extent.
Empty buffer waits (currently, when there is no issue): total 1961, per min 0, per sec 0.01.
I expect I will need to get the promon values the next time the issue is current.
So the BI/AI block size and BI cluster size are set to the defaults, and the buffer counts are just above the defaults. When the concern is a bottleneck in transaction throughput, those are the first things I'd change. Note this does require a DB restart.
I'd suggest the following as a starting point; it may not be perfect for your workload but it's an educated guess and better than the defaults:
- BI block size 16 KB
- AI block size 16 KB
- BI cluster size 8 MB
- -bibufs 100
- -aibufs 100
Then monitor as George has suggested during periods of high transaction activity and see how your commit rate and other metrics change. Hope this helps.
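For what it's worth, applying those values would look roughly like this, with the database shut down (the database name is a placeholder, after-imaging has to be dealt with per the documentation before truncating the AI files, and the exact syntax should be checked against the 10.2B docs):

    proutil mydb -C truncate bi -biblocksize 16 -bi 8192
    rfutil mydb -C aimage truncate -aiblocksize 16

Then add -bibufs 100 and -aibufs 100 to the broker's startup parameters, for example in its .pf file.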
> - -bibufs 100
promon/R&D/1/9. Status: BI Log -> Full buffers
I have never seen values higher than a single digit, even in very transaction-active databases.
I don't know if it's outdated advice, but at one point there was a guideline to always set the -aibufs to at least twice the -bibufs.
There was a rule: -aibufs = 1.5 * -bibufs.
It was a wrong rule. ;-)
E.g. http://knowledgebase.progress.com/articles/Article/19905
"Progress recommends -aibufs are set to a value of 1.5 to 2 times the value of the BI (-bibufs) parameter"
Is there a right rule?
> I have never seen values higher than a single digit, even in very transaction-active databases.
Maybe 100 is higher than the OP needs. But I would say 25 is less than he needs, at peak. And BI buffers are an inexpensive way to spend RAM to eliminate bottlenecks, relative to -B. 100 maximum-size BI buffers = 1.6 MB of RAM; basically nothing.
Better to have a few extra and not need them than to have slightly less than are needed. :)
Right rule: -aibufs = -bibufs. ;-)
Thank you for the input, all of you.
I have a service window opportunity, so I will go with Rob Fitzpatrick's settings above. The change in cluster and block size aligns with other research I have done.
I am glad to have the details of the promon values to check on too.