pasoe: transactions 736% faster than with classic appserver

Posted by agent_008_nl on 14-Jan-2015 02:15

http://media.progress.com/exchange/2014/slides/track1_pacific-application-server-for-openedge.pdf 

Pacific Application Server for OpenEdge - Performance (slide 26)

                        Classic AppServer   PAS for OE   Difference
Scalability
  Client connections    221                 1312         493%
Server Resources
  CPU                   10 CPUs             5.2 CPUs     192%
  Memory                2.1 GB              670 MB       313%
Transactions            203 tps             1698 tps     736%
Client performance
  OpenEdge              472 ms              340 ms       138%
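
(Reading the Transactions row: 1698 tps / 203 tps ≈ 8.4, i.e. roughly 736% more transactions per second on PAS for OE than on the classic AppServer.)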

Transactions 736% faster than with the classic AppServer, that's almost unbelievable! How was this made possible?

--
Kind regards,

Stefan Houtzager

Houtzager ICT consultancy & development

www.linkedin.com/in/stefanhoutzager

All Replies

Posted by Michael Jacobs on 14-Jan-2015 05:05

The secret is in understanding the test conditions that produced that number.

The test program was OpenEdge's Automated Teller Machine (ATM) benchmark, which has been used for many years to measure database performance from an ABL client (measured in transactions). The test is database intensive: a large number of clients execute a few remote requests that result in large amounts of database interaction. The test's focus is database performance from an ABL session, not application server response times.

The test was executed two different ways: with bound clients (equivalent to state-aware) and with unbound clients (equivalent to stateless/state-free). The results above show the faster of the two: bound clients, where the application server requests did not have to pass client context on each request, thus lowering the network traffic overhead.

The test environment was a fast multi-CPU (24) Linux server with 16 GB of memory, driven by ABL clients running on Windows 7 over a fast LAN segment.

The test compared equivalent configurations in which ABL clients connected to the classic AppServer and to PAS for OE using tunneled HTTP connections:

   client --> AIA(Tomcat) --> AppServer

   client --> PAS for OE (using APSV transport)

The same number of ABL sessions was used in both configurations. The classic AppServer uses one ABL session per OS process, each with a private shared memory connection. PAS for OE uses one OS process holding many ABL sessions, where all the ABL sessions share a single shared memory connection. The ABL test code and OpenEdge database are the same for both configurations.
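
To make the two configurations concrete, below is a minimal ABL sketch of the client-side connections. It is not taken from the benchmark itself; the host names, ports, AppService name, and procedure name are placeholders chosen for illustration only.

   /* Classic AppServer, tunneled over HTTP through the AIA servlet in Tomcat */
   DEFINE VARIABLE hClassic AS HANDLE NO-UNDO.
   CREATE SERVER hClassic.
   IF NOT hClassic:CONNECT("-AppService asbroker1 -URL http://classic-host:8080/aia/Aia") THEN
       MESSAGE "Classic AppServer connect failed" VIEW-AS ALERT-BOX.

   /* PAS for OE, APSV transport (HTTP handled by the PASOE instance itself) */
   DEFINE VARIABLE hPasoe AS HANDLE NO-UNDO.
   CREATE SERVER hPasoe.
   IF NOT hPasoe:CONNECT("-URL http://pasoe-host:8810/apsv") THEN
       MESSAGE "PAS for OE connect failed" VIEW-AS ALERT-BOX.

   /* Same remote test code in both cases, e.g. (and likewise ON SERVER hClassic) */
   RUN atm-test.p ON SERVER hPasoe.

   hClassic:DISCONNECT().
   hPasoe:DISCONNECT().
   DELETE OBJECT hClassic.
   DELETE OBJECT hPasoe.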

OpenEdge engineering has spent a good amount of time looking for a definitive answer, which has so far eluded us. Profile and test results are very consistent across different Linux servers (adjusted for hardware speeds), so the increase in ABL database access is real. The best consensus is that it comes from a combination of faster thread switching using a single shared memory connection in the PAS for OE agent, plus faster client-to-application-server communications than the equivalent AppServer configuration.

That is how the database performance number was attained.

Posted by bronco on 14-Jan-2015 06:38

Thanks Michael for shedding some light on the test method. The first thing that comes to mind: what if you leave the AIA (in the classic setup) out of the equation?

Posted by Michael Jacobs on 14-Jan-2015 07:15

Our testing shows that when you run a classic AppServer with direct TCP/IP network connections versus PAS for OE (still using HTTP tunneling), the test results show equal TPS performance. Generalization: the improvements in PAS for OE's multi-session agent and ABL database connection are cancelled out by its HTTP tunneling.
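
For reference, a direct (non-tunneled) classic AppServer connection looks like the sketch below; the host, port (5162 is the default NameServer port), and AppService name are placeholders, not the values used in our tests.

   /* Classic AppServer over direct TCP/IP via the NameServer - no AIA/HTTP in the path */
   DEFINE VARIABLE hDirect AS HANDLE NO-UNDO.
   CREATE SERVER hDirect.
   IF NOT hDirect:CONNECT("-H classic-host -S 5162 -AppService asbroker1") THEN
       MESSAGE "Direct AppServer connect failed" VIEW-AS ALERT-BOX.

   /* ... remote requests ... */

   hDirect:DISCONNECT().
   DELETE OBJECT hDirect.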

Posted by Marian Edu on 14-Jan-2015 11:50

Thanks Michael, that makes sense, but it also begs the question... why were the other connection models dropped and only the HTTP(S) one kept?

I understand the focus is on Pacific/cloud, and HTTP makes sense there, but there are still plenty of cases where a Progress client is used with an AppServer; those could benefit from the new improvements without having to bother with the HTTP tunneling overhead.

Posted by Mike Fechner on 14-Jan-2015 11:58

Who says it's overhead? PASOE is Tomcat. Anything other than HTTP would add an extra layer there.

Posted by Jeff Ledbetter on 14-Jan-2015 12:00

The other connection models were not dropped; they are still there.
 
Jeff Ledbetter
skype: jeff.ledbetter
 
Posted by Marian Edu on 14-Jan-2015 13:35

Jeff, my understanding was that the new PASOE will only work with HTTP connections - the AIA tunneling available in the previous AppServer.

Mike, Tomcat is just an application server, and since everything ends up being simply moving data over sockets, the actual protocol used there doesn't necessarily need to be HTTP... the broker might well speak several languages/dialects ;)

Posted by Michael Jacobs on 14-Jan-2015 16:09

Hi Marian,

A very reasonable question. What we found out by prototyping is that Tomcat's 'connector' technology (i.e. the network socket layer) is very tightly bound to serving HTTP requests: scheduling threads to execute them and calling web application contexts to handle them. We looked at a number of ways to leverage the connector interfaces and have them function as a generic TCP/IP connection server for direct OpenEdge client connections. Alas, no joy. We also looked at bypassing all of Tomcat's connector technology and writing a whole new multi-client socket server module, but ran into issues with consistency in thread pool management, valves, and container security. Thus the story ended... for now.

Not to say we are content with the first release's HTTP tunneling speed.   The story will continue...

Posted by Marian Edu on 15-Jan-2015 03:06

Michael, kudos for the honest and detailed answer... using what is already available in Tomcat for the first iteration makes a lot of sense, and I'm sure you guys will find a way to nail that down and bring 'native' connection modes back in PASOE.

I think it's safe to say that those not using the AIA (HTTP tunneling) in their AppServer solution would rather keep using the 'old' AppServer for the moment, since there is no real overall performance improvement in switching to PASOE, and do the upgrade when direct connections become available in PASOE... wait for the story follow-up ;)

Great job nevertheless; we'll see how a node.js appserver behaves, but that's another story :)

Posted by Darren Parr on 15-Jan-2015 05:03

I disagree. There may not be any significant performance improvement on a resource-rich server, but doesn't the model still consume fewer resources (memory, CPU, etc.) and hence allow for better scalability as a result?

Posted by Tim Kuehn on 15-Jan-2015 07:06

The fact that you can have "1 appserver session per client" and lock the two together is the big win, all other things being equal.

This thread is closed