.r code: better on application server or database server?

Posted by Admin on 20-Oct-2006 14:35

I'm looking at a two-tier environment where we have an app server to host Tomcat/Apache web servers, and a database server that would host the database itself plus the WebSpeed brokers/agents. Would it not be better to store the source code (.r code) for the application on the application server, and then have it connect to the database server? Which one would run faster and have better performance?

All Replies

Posted by Tim Kuehn on 20-Oct-2006 14:45

That depends on how your application is being used.

Self-serve clients will always be faster than client-server clients for database work, but if your clients are doing high-CPU, heavy non-db work, then it makes sense to move them to a separate box (or boxes) and have them connect to the db via a fat network pipe.

Posted by Admin on 20-Oct-2006 15:18

What is your interpretation of a self-serve client versus a client-server connection? How does this affect remote connections over a WAN, and connections via TCP/IP for reporting (i.e., using -N and -S in the database server startup script)? We also have an application that queries the database using -S and -N to make the connection. In my database logs, I see users connected as "Usr" and batch user connections as "SRV"... what kind of resources does each kind use?

Posted by Tim Kuehn on 20-Oct-2006 15:30

"What is your interpretation of a self-serve client versus a client-server connection?"

mpro -db db-name

vs.

mpro -db db-name -H hostname -S servicename

"How does this affect remote connections over a WAN, and connections via TCP/IP for reporting (i.e., using -N and -S in the database server startup script)?"

Self-service connections don't go through the TCP/IP stack, so they don't have that overhead. Hence, they're always faster for db work.

"We also have an application that queries the database using -S and -N to make the connection. In my database logs, I see users connected as "Usr" and batch user connections as "SRV"... what kind of resources does each kind use?"

That's more a question for a db consultant.

Keep in mind that telnet sessions across a WAN aren't the same as a client-server connection over a WAN.

Posted by ChUIMonster on 21-Oct-2006 16:11

What Tim Said.

Plus -N is obsolete, you shouldn't be using it.

Remote connections (-S users) generally use fewer server resources because their connections are "pooled" by the SRV user. That sounds good but the price is that they have the communications protocol overhead and they are time slicing between the other users pooled in the SRV process.
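To make the pooling concrete, here is a hedged sketch of a database server startup script; the database name, port, and parameter values are placeholders, not recommendations for any real system:

```shell
# Hypothetical startup script for a Progress database server.
# -S  : TCP service/port that remote (client-server) clients connect to
# -Mn : maximum number of remote servers (the SRV processes in the log)
# -Ma : maximum remote clients multiplexed into each SRV process
proserve mydb -S 20000 -Mn 4 -Ma 5
```

With settings like these, SRV processes are started on demand up to the -Mn limit, and each one time-slices among the remote clients pooled into it; self-service clients bypass this machinery entirely.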

It all really depends on what you need, what your load is, what resources are available and how you are configured. It's a complex topic.

Posted by Admin on 23-Oct-2006 12:18

Tim, you said,

"Keep in mind, that telnet sessions across a WAN aren't the same as a client-server connection over a WAN"

Can you elaborate on this? Why are they different?

What kind of overhead is used when connections are made as SRV versus USR in the db log? I only see SRV in my log when we use an application called Cyberquery (which is a reporting tool). We have two kinds: full version and runtime. When I use the full version (which allows me to create reports) I don't see anything in the log file, not even my connection, but it does query the db. When I use "runtime" queries (queries made by users that cannot be modified, only run), I see in the log: "SRV Login usernum 123, userid username, on servername batch (742)"

So if I have a database server with these kinds of settings, should the .r code reside on the db server to be most effective?

Posted by Tim Kuehn on 23-Oct-2006 12:32

Tim, you said,

"Keep in mind that telnet sessions across a WAN aren't the same as a client-server connection over a WAN"

Can you elaborate on this? Why are they different?

With a telnet session, all the db work is done on the remote box, and only the screen updates are sent across the wire to the remote terminal.

Client-server sessions do a lot of the db work on the local machine, which means a lot more traffic gets sent across the wire to that machine.

"What kind of overhead is used when connections are made as SRV versus USR in the db log? I only see SRV in my log when we use the application called Cyberquery (which is a reporting tool)."

I'm guessing Cyberquery is connecting using ODBC, which is a client-server connection.

"So if I have a database server with these kinds of settings, should the .r code reside on the db server to be most effective?"

If you're using telnet to the db machine which is running the programs, then the r-code should be there as well.

Posted by Tim Kuehn on 23-Oct-2006 12:36

duplicate message deleted

Posted by Admin on 23-Oct-2006 15:10

The problem I have is that I cannot start _mprosrv without the -S servicename, because then my Cyberquery application wouldn't work for users. BTW, CQ does not use ODBC; it uses the name server, port number, and hostname to make the connection to the database. We all know ODBC with Progress is a lost cause and very slow, which is why we don't use it.

Is there any way to make my database clients self-service and still allow the CQ application to make a connection? I will have to take this up with them, I suppose.

Posted by Thomas Mercer-Hursh on 23-Oct-2006 15:29

There are two ends to this. You can start the server with -S but start the clients without -S; then your reporting tool can still connect, but the clients will be self-service. But then the clients have to be on the same box as the database.
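A sketch of that mixed setup (database name, hostname, and port are placeholders):

```shell
# Server: started with -S so the reporting tool can connect over TCP/IP.
proserve mydb -S 20001

# Local clients on the same box: connect WITHOUT -H/-S, so they attach
# via shared memory as self-service clients.
mpro -db mydb

# Reporting tool (or any client on another box): client-server connection.
mpro -db mydb -H dbhost -S 20001
```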

Posted by Admin on 23-Oct-2006 17:04

Correct, all my remote clients are self-service, meaning they all run _progres without the -S parameter. I guess the runtime software of CQ must be using some -S parameter internally when it runs; I cannot see this from an application point of view.

Posted by ChUIMonster on 24-Oct-2006 08:10

To attempt to answer the original question... (now that we know more)

1) You want the r-code on the box which is executing it.

2) You generally want to execute r-code as close to the database as you can (in other words you usually want to have self-service clients if you can).

So you would usually want the r-code to be on the database server and to run the _progres sessions that are being started by telnet users on that box.

There are, however, exceptions (of course).

a) You might have an under-powered database server. It may, sometimes, be beneficial to move to client/server in such a case (although given the rate of advance of Moore's law it really shouldn't be all that hard to upgrade the server in most cases...)

b) You may have a workload that is compute intensive and data access light. In such a case the importance of being close to the database is lower. (MFG/PRO is not usually viewed as being such a product.)

c) You may have mixed workloads, or you may have scaled your system by placing some components (such as WebSpeed) on other servers. In that case you can, and should, put r-code on multiple servers. You'll need to manage synchronization if you do so.

d) You may have chosen client/server telnet sessions for reliability reasons. While I disagree with the usual reasoning behind this decision, some people do it. If so, then deploy the r-code where the _progres sessions are running.

e) You may be "horizontally scaling" and find that deploying twenty-some instances of r-code is a pain in the butt. If so, it isn't the end of the world to put the r-code on an exported filesystem, especially if you're using a shared r-code library and -q is active (as it should be). R-code gets cached fairly effectively anyway (it uses the -mmax buffer), and point #1 above is more of a "nice to have" than an absolute requirement.
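As a sketch of that shared-library approach (library, program, and database names are hypothetical, and exact PROPATH handling will vary by setup):

```shell
# Build a shared r-code library from compiled .r files.
prolib app.pl -create
prolib app.pl -add order-entry.r
prolib app.pl -add invoicing.r

# Start a client with the library on the PROPATH and -q (Quick Request),
# so PROPATH is searched only on the first reference to each program;
# later RUNs reuse the cached r-code (held in -mmax memory).
PROPATH="app.pl,${PROPATH}" mpro -db mydb -q
```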

This thread is closed