Dears,
I am having database performance issues when accessing the database from a remote machine. I ran some tests to check the performance and the results are below.
If I start the database on my local machine and connect from that same machine using a remote connection (-H and -S), the query returns its results in about 30 seconds. The database is started with the default Progress configuration and no extra parameters other than -S and -H.
When I connect to the same database running on my machine from another PC (a colleague's system on the same network) and run the same query, it is roughly three times slower and takes around 1 minute 40 seconds. Again, the database is started with the default Progress configuration and no extra parameters other than -S and -H.
And when I run the query on my local machine against the database hosted on the server that is on a different switch, the query is even slower and takes about 2 minutes 50 seconds. That database has been tuned and given extra parameters, such as 24 GB of memory.
I have tried everything but nothing seems to improve the performance. I know it is related to either the virtual server or the network, but our infra team disagrees and needs proof before making any changes.
The database size is around 250 GB and I have set -B to 24 GB.
CPU: Intel Xeon E5-2660 v3 @ 2.60 GHz (2 processors)
Server memory: 64 GB
OS: Windows Server 2008 R2 Enterprise
Progress: 11.6 Enterprise
Thanks in advance.
Hello Ezaz,
Please state the latency between the client and the server.
ping -t -l 1000 <server ip>
If this is a remote network, it's normal for performance to be bad.
That's one of the many reasons for AppServer / PASOE.
Marcelo Pacheco
Is your "query" a SQL query or an ABL client/server program? I'm assuming it is remote ABL since SQL tends to give more consistent behavior.
You haven't posted any network details. Open Task Manager, go to the Ethernet performance tab, right-click and choose "View network details". Then run your entire "query" and track the total number of unicasts sent and received. The total bytes sent and received may be interesting too.
Dear Marcelo,
Here is the ping output:
Pinging 10.20.100.183 with 1000 bytes of data:
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Ping statistics for 10.20.100.183:
Packets: Sent = 14, Received = 14, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
I tried increasing the packet size from 1000 to 1473 bytes with the same result, but at 1474 bytes and above the server's response is "Request timed out".
I don't know if that means something.
Hi Ezaz,
Windows ping should be able to handle packet sizes up to many KB.
I think you have MTU problems: payloads above roughly 1472 bytes exceed a standard 1500-byte Ethernet MTU once the 28 bytes of IP and ICMP headers are added, so they have to be fragmented, and if the fragments are blocked or dropped anywhere along the path the ping times out. This is a networking issue rather than an OpenEdge one, though. Check firewalls / routers / NAT along the way, as well as the settings on the workstation and the server.
Best Regards,
Marcelo Pacheco
> I know it is related to either the virtual server or the network, but our infra team disagrees and needs proof before
> making any changes
What is the VM network adapter type on that Windows 2008 R2 box? If this is VMware, the default adapter for Windows 2008 guests is e1000, which should be changed to vmxnet3.
[mention:44a028c96ca44788b729e5185220e84a:e9ed411860ed4f2ba0265705b8793d05]
We are using vmxnet3, not the default.
Thanks
Dear [mention:789d7167e96d4f458fca9ee1cdc98467:e9ed411860ed4f2ba0265705b8793d05] ,
We connected the two machines directly with a LAN cable and are getting a 2 ms delay with 65,500-byte pings. But the results are still the same; it is still lagging. Do you have any more suggestions that could help?
Regards,
Ezaz
Any chance you can share the code in question? Usually when accessing a database remotely you need to be very aware of the number of database round trips.
Well, in-memory TCP-to-TCP communication is faster than your physical network. Physical networks inherently add extra latency and are always slower than in-memory communication. As for colleague-to-you versus you-to-DB-server: there are probably more switches between you and the DB server than between you and your colleague. There are more (TCP) reasons why the route to your DB server can seem slow; google "bufferbloat", for example: en.wikipedia.org/.../Bufferbloat
One thing you can do is increase the OpenEdge maximum message size (-Mm) used between the server and the client. The default value is 1024 bytes; you should increase it to 16k. This does not help for every client-database interaction, but it does for many queries.
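A minimal sketch of what that looks like on the client side, assuming a hypothetical database name, host and port. Note that the database itself must also be started with -Mm 16384 (for example via proserve or its .pf file); on older releases the client's -Mm has to match the server's, while newer releases negotiate the value automatically.

/* Sketch only: "proddb", "dbhost" and port 20000 are placeholders. */
CONNECT VALUE("-db proddb -H dbhost -S 20000 -Mm 16384") NO-ERROR.

IF NOT CONNECTED("proddb") THEN
    MESSAGE ERROR-STATUS:GET-MESSAGE(1) VIEW-AS ALERT-BOX ERROR.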
I believe I've heard that OE version 12 will have a variety of server-side enhancements to support faster client-server connectivity. It sounds like the goal might be to make the server side "smarter" and perhaps handle more complex work items that require understanding the execution of the ABL client code at a higher level.
In particular, I believe we may start seeing things like a multi-threaded database server, server-side joins, predicate resolution on the server, etc. I guess we won't know for sure until it is finally released.
ABL client-server can be slow and somewhat difficult to troubleshoot and optimize. It starts by investigating the size, quantity and direction of the network packets. If you can measure the number of packets/unicasts involved in the entire query, you are halfway there! I'd recommend that you simply start with the Windows Task Manager; it tracks the number of packets since it was first opened. Typically an ABL client/server performance problem is correlated with large packet counts: you should see them go up into the thousands or tens of thousands. The next step is to open Wireshark and see that they are rapidly alternating back and forth in each direction (i.e. there is lots of chatter but little is actually being accomplished).
Optimizing the code usually involves breaking the program down into lots of FOR EACH NO-LOCK table scans. That is pretty much the only trick in the book that works reliably: it sends large amounts of data in one direction very quickly and allows something to actually be accomplished (whether the packet count is high or not).
Other than FOR EACH NO-LOCK, there are other tricks for client-server that work in certain scenarios, including adjusting your database startup parameters. You can search for them in the KB.
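To make the idea concrete, here is a rough sketch using the sports2000 demo schema as a stand-in; the record counts and field names are illustrative only.

/* Sketch: sports2000 used as a stand-in schema. */
DEF VAR iNum AS INT NO-UNDO.
DEFINE TEMP-TABLE ttCust NO-UNDO LIKE customer.

/* Chatty pattern: one FIND per record means at least one network
   round trip per record over a client/server connection. */
DO iNum = 1 TO 1000:
    FIND customer NO-LOCK WHERE customer.custnum = iNum NO-ERROR.
END.

/* Bulk pattern: a single FOR EACH NO-LOCK scan lets the server prefetch
   many records per network message, streaming the data in one direction
   with far fewer round trips. */
FOR EACH customer NO-LOCK WHERE customer.custnum <= 1000:
    CREATE ttCust.
    BUFFER-COPY customer TO ttCust.
END.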
> In particular, I believe we may start seeing things like a multi-threaded database server, server-side joins, predicate resolution on the server,
> etc. I guess we won't know for sure until it is finally released.
There is a program called the Customer Validation Program (CVP, found at community.progress.com/.../openedge_customer_validation_program) that is a means for us (Progress) to solicit early feedback on features as they are being developed.
Dears
Until now I have only worked with Progress databases where the application itself was hosted on the same machine and AppServers were used.
My question is: is it normal to see this much lag on remote connections? And if so, if the application is migrated to use AppServers rather than remote connections, will it make any difference?
Ezaz,
I have a few points to make and/or questions to ask:
1. What is the test query?
To answer your question regarding C/S vs Apsv, it is imperative to understand the nature of the queries. Very short queries will be slower via AppServer as the AppServer connection cost will outweigh the query cost. In our "4GL Code Performance Workshop", Peter Judge and I talk about combining Apsv calls to avoid doing many small back-and-forths.
2. For long running queries on one table, C/S connections should be using field lists, -Mm and the various -prefetch* parameters.
I have seen very good results from configuring the database and the code to pack many more records into each message from the server to the client (see the sketch after this list). Without these parameters, you may be a) sending unused field data across the network, and b) sending only 16 records per message, regardless of the -Mm network buffer size.
3. For long running queries joining multiple tables, you are probably better off using an Apsv where the agents are shared memory connected to the DB.
Since the database server does not yet support server-side joins, it will likely be faster to run long, complicated, multi-table queries in shared memory via an Apsv agent. In this case, the setup/breakdown cost of the Apsv connection becomes negligible.
4. ProTop monitors and trends all this.
ProTop trends the speed of shared memory queries versus localhost queries (-S port -H localhost) versus client-server queries (-S port -H <other server>). This allows you to tune your network query performance and more importantly, gives you the data you need to act if a network change causes degradation. Any change will be clearly visible in our "User Experience" graph.
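A rough sketch of points 1 and 2 combined. The database name, host and port are placeholders, and the -prefetch* values shown here would need tuning for your own workload:

/* Sketch only: proddb, dbhost and 20000 are hypothetical. */
CONNECT VALUE("-db proddb -H dbhost -S 20000"
            + " -Mm 16384"               /* larger network message size           */
            + " -prefetchNumRecs 1000"   /* allow many records per message        */
            + " -prefetchDelay"          /* fill the first message before sending */
            + " -prefetchFactor 100")    /* fill each message before sending      */
    NO-ERROR.

/* With a field list, only the listed columns cross the network. */
FOR EACH customer FIELDS (custnum name balance) NO-LOCK:
END.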
Regards,
>> My question is: is it normal to see this much lag on remote connections?
YES!
Basically, every database record you work on involves coordinated (blocking) effort between the client and the server, often involving two round trips. (The exception is FOR EACH NO-LOCK, as I mentioned earlier.) You should definitely investigate your network activity so you can at least *quantify* the unicasts involved in your workload. Also, Google is your friend: there are lots of KB articles about this that you might want to read, starting with this one: knowledgebase.progress.com/.../18342 . It answers some of the questions you've been asking here.
>> And if so, if the application is migrated to use AppServers rather than remote connections, will it make any difference?
If the AppServers run with "shared memory" connections to the database, then that will be fast (like you are used to). Even the new PASOE can use "shared memory" connections to the database.
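For illustration only, with hypothetical paths, names and ports: whether the parameters come from the agent startup parameters or an ABL CONNECT, the difference is simply the absence or presence of -H/-S (one or the other, not both).

/* Shared memory: no -H/-S, the agent runs on the database host itself. */
CONNECT VALUE("-db C:\db\proddb").

/* Client/server: every record request has to cross the network. */
CONNECT VALUE("-db proddb -H dbhost -S 20000").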
Hope this helps. Remember that most challenges with client/server development can be overcome, but they require some planning and effort. As I mentioned before, Progress knows that they need to do some work to improve client/server development and they have planned changes for OE 12. Client/server database connectivity is never going away.
These days (with docker, load-balanced appservers, etc) it is critical that application code can execute in a *separate* tier from the database. The concept of "sharing memory" with a database server is one that has worn out its welcome!
Thank you all for your answers. I think I should first concentrate on query optimisation. If you could point me to a white paper on query optimisation, that would be really helpful. I will also investigate the unicast messages sent and received, but I wanted to know how I can use the information about these unicast messages to investigate the remote connection.
Hi Ezaz,
Far more important than the number of packets sent back and forth is the number of network round trips.
I work full time from a place 7500km (~5000 miles) from the Bedford office. My ping to Bedford is 140-170ms.
In my extreme case, OE client/server connections over VPN can be painfully slow.
The following benchmark demonstrates it (with -Mm 16384):
DEF VAR i   AS INT  NO-UNDO.
DEF VAR msg AS CHAR NO-UNDO FORMAT 'x(20)'.

i = ETIME(YES).   /* reset the timer */

/* 1. default prefetch: many records per network message */
FOR EACH orderline NO-LOCK:
END.
DISPLAY 'prefetch' @ msg ETIME(YES) @ i.
DOWN.

/* 2. prefetch with a field list: only two integer fields are sent */
FOR EACH orderline FIELDS(ordernum linenum) NO-LOCK:
END.
DISPLAY 'prefetch fields' @ msg ETIME(YES) @ i.
DOWN.

/* 3. no prefetch: the client waits for one record per round trip */
FOR EACH orderline NO-LOCK NO-PREFETCH:
END.
DISPLAY 'no-prefetch' @ msg ETIME(YES) @ i WITH DOWN.
DOWN.

/* 4. no prefetch with a field list */
FOR EACH orderline FIELDS(ordernum linenum) NO-LOCK NO-PREFETCH:
END.
DISPLAY 'no-prefetch fields' @ msg ETIME(YES) @ i WITH DOWN.
DOWN.
prefetch 7786
prefetch fields 4614
no-prefetch 2073000
no-prefetch fields 2038000
The first is a simple FOR EACH orderline NO-LOCK, taking 7.8 seconds.
The second is a FOR EACH orderline FIELDS(...) NO-LOCK, taking 4.6 seconds, because only the two integer fields are sent.
NO-PREFETCH tells OE to send only one row at a time, waiting for the client to ask for the next one each time.
Notice that using FIELDS with NO-PREFETCH barely helps at all, because what dominates is latency rather than the volume of information sent.
Notice that in my extreme case it takes over half an hour to read the orderline table without prefetch, and a still non-trivial 7.8 seconds to read it with prefetch.
In your case you have sub-1 ms latency, but let's assume your network latency is exactly 1 ms: if your processing is complex enough that it needs to wait for one million round trips, it will take a minimum of 1,000 seconds to execute, because that time is spent waiting on the network layer.
There are plenty of things you can do in ABL that will wait for a round trip per record; the most common one is locking. Any SHARE-LOCK or EXCLUSIVE-LOCK access always implies NO-PREFETCH.
FIND statements always execute one row at a time.
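For example (sketch only, with sports2000 field names as stand-ins), each of these patterns waits on the network for every record it touches:

/* Locking disables prefetch, so records arrive one per round trip. */
FOR EACH orderline EXCLUSIVE-LOCK WHERE orderline.ordernum = 1:
    orderline.qty = orderline.qty.   /* placeholder update */
END.

/* A FIND always retrieves exactly one record per statement. */
FIND FIRST orderline NO-LOCK WHERE orderline.ordernum = 1 NO-ERROR.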
Use the OpenEdge profiler to analyze execution times and find out which statement(s) are taking the longest, then try to optimize those.
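A minimal way to capture profiler data from a test run, assuming a hypothetical output file name and test program:

ASSIGN
    PROFILER:ENABLED     = TRUE
    PROFILER:DESCRIPTION = "remote query test"
    PROFILER:FILE-NAME   = "cs-query-test.prof"   /* hypothetical file name */
    PROFILER:PROFILING   = TRUE.

RUN query-test.p.   /* hypothetical program containing the slow query */

ASSIGN
    PROFILER:PROFILING = FALSE
    PROFILER:ENABLED   = FALSE.
PROFILER:WRITE-DATA().   /* writes the timing data to FILE-NAME */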
The -Mm parameter set to 16384 means OE can send up to 16 KB at a time.
Sending a full 16 KB message generates roughly a dozen unicast packets on the wire, but it still counts as a single network round trip.
I suggest running this simple benchmark on your existing client/server machine pair and seeing how much difference PREFETCH (the default) versus NO-PREFETCH makes in your environment.
Best Regards,
Marcelo Pacheco