I'm fairly new to running ABL code over client/server, but this seems to be the direction that Progress is taking, given some of the database enhancements they are focusing on in OE 12 (and given the way PASOE seems so much better suited to running in its own independent software tier).
I was trying to figure out how to troubleshoot slow remote servers, and came across the _ActServer VST. I had a couple of quick questions about this documentation: https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/dmadm%2Fserver-activity-(-actserver).html
First of all, that link has a really unusual footnote that talks about temp-tables. It seems totally off-topic and is probably just a documentation bug, right?
Secondly, I noticed a field called _Server-TimeSlice, described as "Number of query time slice switches".
This is probably equivalent to the field in the OEE management console called "interrupts". Can someone please confirm?
It strikes me that these "remote servers" can place an additional and somewhat artificial bottleneck between a remote client application and its data. They force arbitrary remote clients to permanently cohabit with each other, up to the number of clients per server that the configuration allows (via -Ma). This living arrangement might work out very unfavorably for one of the clients (e.g. if an unresponsive and sluggish PASOE client, PASN, must share a server with a greedy batch processor that posts journal entries for the entire system).
I'm trying to understand how this artificial bottleneck could be quantified, and I thought that perhaps _Server-TimeSlice would be a good indicator. My thought is that for any server hosting multiple remote clients, I would take the number of queries (_Server-QryRec) and divide by _Server-TimeSlice to see how frequently clients are experiencing blocking because of the other co-habitants of the same server. Does that sound like a reasonable metric? Are there any factors, other than contention with the other co-habitants of the server, that will impact the results?
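To make the idea concrete, here is a rough sketch of the kind of ABL query I have in mind against the _ActServer VST. (The _Server-Id field and the divide-by-zero guard are my assumptions; also note these counters are cumulative since server startup, so this gives a lifetime average rather than a point-in-time rate.)

```
/* Sketch: queries per time-slice switch, per server, from _ActServer.
   Counters accumulate from server startup, so this is a lifetime
   average, not a current rate. */
FOR EACH _ActServer NO-LOCK
    WHERE _ActServer._Server-TimeSlice > 0:  /* avoid divide-by-zero */
    DISPLAY
        _ActServer._Server-Id        LABEL "Srv"
        _ActServer._Server-QryRec    LABEL "Queries"
        _ActServer._Server-TimeSlice LABEL "Slices"
        (_ActServer._Server-QryRec / _ActServer._Server-TimeSlice)
            FORMAT ">>>9.99"         LABEL "Qry/Slice".
END.
```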
What I was really looking for is some explicit "wait time" measurement. It seems to me that if the server is receiving query requests, it could keep track of the amount of time those requests waited before being granted the necessary "timeslices" to complete. If a "wait time" like this were exposed in a VST, it would be a much better indicator of how much inefficiency is being artificially imposed upon our remote client queries than dividing the number of queries by the number of timeslices. The hardware we use to host the OE database is quite beefy, with more than enough memory, CPU, and SSD disk capacity to service all the clients that are connected. So it really bothers me when remote clients are sluggish and the database server is making poor use of its resources, simply because of artificially imposed bottlenecks. I know that OE 12 will improve the situation a great deal, but until then I'd like to be able to monitor for these bottlenecks.
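In the meantime, one workaround for the lack of a wait-time counter is to sample the cumulative counter twice and take the delta, which at least turns the lifetime total into a per-interval rate. A minimal sketch, assuming a hypothetical server id of 2 and a 60-second sampling window:

```
/* Sketch: delta-sample _Server-TimeSlice to derive a per-interval rate,
   since the activity VSTs only expose counters cumulative from startup. */
DEFINE VARIABLE iBefore AS INT64 NO-UNDO.

FIND FIRST _ActServer NO-LOCK
    WHERE _ActServer._Server-Id = 2.  /* hypothetical server id */
iBefore = _ActServer._Server-TimeSlice.

PAUSE 60 NO-MESSAGE.  /* sampling window */

FIND FIRST _ActServer NO-LOCK  /* re-read the shared-memory counters */
    WHERE _ActServer._Server-Id = 2.
DISPLAY (_ActServer._Server-TimeSlice - iBefore)
    LABEL "Slice switches in last 60s".
```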
Any pointers would be greatly appreciated. I'm hoping for ideas that leverage OEE and VSTs, rather than purchasing third-party DBMS management tools. We are running OE 11.7.4.
To answer the second question: yes, the 'interrupts' value on the UserDetails page maps to the _Server-TimeSlice field.
The server time slice mechanism was implemented to allow more equal server usage by query clients when a server has multiple clients connected. Sadly, there are no configuration parameters to control its behavior, nor any information disclosing its efficacy.
Most likely it switches too frequently, but that is just an unsubstantiated hypothesis on my part.
Will the number of timeslices (for the same set of queries) change depending on whether there are other co-habiting clients that are making concurrent query requests?
In other words, if client A did all its queries from 8 AM to 9 AM and client B did all its queries from 9 AM to 10 AM, how would the number of timeslices compare to a scenario where A and B both started their queries at 8 AM and attempted to run them concurrently?
I just want to know if my metric will be valid. In order to measure the amount of contention on the timeslices of a server (_mprosrv), I was going to use a metric that takes _Server-QryRec and divides by _Server-TimeSlice. This metric assumes that the timeslices would be much higher (relative to the number of queries) in a server that accommodates lots of clients at a time. I'm hoping the assumption is valid.
I suppose I am being influenced by the "interrupt" terminology in the OEE console. If a client isn't being "interrupted" by other clients, then the number of timeslices should be lower (relative to the number of queries that are executed). But that is a very literal interpretation of an "interruption". Perhaps the number of "interrupts" is always the same, and not determined by whether there are any other clients doing the "interrupting".