memory consumption appserver w/ shared mem connections vs ne

Posted by bronco on 28-Jul-2015 03:59

OE10.2B08 on Linux, still 32 bit

At a customer's site the AppServer is utilized with shared memory connections to multiple (5) databases. The problem we are facing is that the AppServer grows so big (2GB) that the AppServer (broker) no longer serves any requests. Part of the reason the AppServer agents grow so big is a quite-hard-to-find memory leak, which obviously needs to be fixed. In our testing, however, we tried connecting our agents to the databases via -S/-H as well, and much to my surprise we saw hardly any memory leakage; memory consumption is almost flat. Before proceeding, my question is: what could be an explanation for this?

All Replies

Posted by James Palmer on 28-Jul-2015 04:24

It's not a permanent fix for memory leaks, but it should make the problem go away whilst you're tracking them down: create an unnamed widget pool at the top of every .p that is run on the AppServer, and any handle-based objects scoped to that procedure will get cleaned up.

Posted by ChUIMonster on 28-Jul-2015 04:38

How are you measuring memory consumption?

Most tools provide very misleading results when shared memory is involved.

--
Tom Bascom


Posted by bronco on 28-Jul-2015 04:46

I'm using the "resident memory" in the AppServer status page in OpenEdge Explorer. It rises to about 2GB, and then we run into issues where the broker is no longer available to handle any further requests.

Posted by bronco on 28-Jul-2015 04:48

I'm afraid my challenge lies with objects contained in objects which are kept alive between calls (via a factory). I have already started adding USE-WIDGET-POOL to the class statements, to no avail yet. Working on it.

Posted by Rob Fitzpatrick on 28-Jul-2015 06:12

> It rises to about 2GB, and then we run into issues where the broker is no longer available to handle any further requests.

Is the problem with the AppServer broker (java process) or with the AppServer agent (ABL client)?

Posted by bronco on 28-Jul-2015 06:28

Well, it seems that the agent runs out of memory and, as a consequence, the broker no longer accepts new requests. So the ABL client runs out of memory - when connected via shared memory, that is.

Posted by Libor Laubacher on 28-Jul-2015 06:34

How do you know that it runs out of memory? And what does "broker no longer accepts new requests" mean? That all the agents are busy, up to the maxAgents setting? If so, what are the remaining agents doing? What you described (especially that there is no "leak" when connecting client/server) sounds to me like the _proapsv in shared memory mode simply attaches to the databases' buffer pools, i.e. shared memory segments - hence the big number there; there is nothing abnormal about that. If there was a leak, you would see it no matter the type of connection.

Posted by Frank Meulblok on 28-Jul-2015 06:35

"So the ABL client runs out of memory, when connected via shared memory that is."

Part of that is expected: Any shared memory segments used by a process count towards the virtual address space limits - which for 32-bit processes is a 2gb limit.

If you're using shared memory database connections, that includes the databases' buffer pools which can add up quickly.
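To put rough numbers on that (a back-of-envelope sketch only; the -B value and block size below are assumed for illustration and are not taken from this thread):

```shell
#!/bin/bash
# Budget for a 32-bit agent's 2 GB address space when attached to
# five databases via shared memory.
BLOCK_SIZE=8192     # database block size in bytes (assumed 8 kB)
B_PER_DB=25000      # -B buffer pool blocks per database (assumed)
N_DBS=5             # number of connected databases

SHARED=$((N_DBS * B_PER_DB * BLOCK_SIZE))
HEADROOM=$((2 * 1024 * 1024 * 1024 - SHARED))
echo "shared segments mapped: $((SHARED / 1024 / 1024)) MiB"
echo "headroom for the rest:  $((HEADROOM / 1024 / 1024)) MiB"
```

With these assumed values, the mapped buffer pools alone consume close to 1 GB, leaving barely 1 GB of address space for the ABL session's own heap, libraries, and local buffers.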

Posted by bronco on 28-Jul-2015 08:27

[quote user="Frank Meulblok"]

"So the ABL client runs out of memory, when connected via shared memory that is."

Part of that is expected: Any shared memory segments used by a process count towards the virtual address space limits - which for 32-bit processes is a 2gb limit.

If you're using shared memory database connections, that includes the databases' buffer pools which can add up quickly.

[/quote]

I know, but this was an answer to one of the questions. The entire case is stated in the OP. My question was whether there's a reason (or explanation) why the networked agents stay pretty much flat whereas the shared memory agents keep rising after each call.

Posted by ChUIMonster on 28-Jul-2015 08:58

I believe that is just the RSS from "ps". That is pretty much useless as a measure of memory consumption.

For possible memory leaks you need to look at private memory, not resident size.

This will be much more helpful:

stackoverflow.com/.../in-linux-how-to-tell-how-much-memory-processes-are-using

Adapting one of the scripts shown to take the target process PID:

#!/bin/bash
#
# Sum the memory figures (in kB) for one process from /proc/<pid>/smaps.
# Usage: <script> <pid>
MYPID=$1
echo "================="
echo "PID: $MYPID"
echo "-----------------"
# Anchor each pattern at the start of the line so that, for example,
# "Pss" does not also match "SwapPss" on newer kernels.
Rss=$(grep '^Rss:' /proc/$MYPID/smaps | awk '{sum += $2} END {print sum}')
Shared=$(grep '^Shared' /proc/$MYPID/smaps | awk '{sum += $2} END {print sum}')
Private=$(grep '^Private' /proc/$MYPID/smaps | awk '{sum += $2} END {print sum}')
Swap=$(grep '^Swap:' /proc/$MYPID/smaps | awk '{sum += $2} END {print sum}')
Pss=$(grep '^Pss:' /proc/$MYPID/smaps | awk '{sum += $2} END {print sum}')
Mem=$((Rss + Shared + Private + Swap + Pss))
echo "Rss     " $Rss
echo "Shared  " $Shared
echo "Private " $Private
echo "Swap    " $Swap
echo "Pss     " $Pss
echo "================="
echo "Mem     " $Mem
echo "================="
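For a quick sanity check of the private figure alone, without the full script, the same grep/awk pattern works as a one-liner (the PID below is hypothetical; the anchored pattern sums both Private_Clean and Private_Dirty):

```shell
# Sum the private (unshared) memory of one process, in kB.
# 12345 is a hypothetical agent PID.
grep '^Private' /proc/12345/smaps | awk '{sum += $2} END {print sum}'
```

If that number climbs steadily call after call while the shared figure stays flat, you are looking at a real leak rather than attached database segments.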

Posted by TheMadDBA on 28-Jul-2015 09:29

To clarify what others are saying and to add my two cents... There is no difference in local process memory allocation between a client/server connection and a shared memory connection.

I don't know of any bugs that cause local memory allocation issues with shared memory connections. If you are running the same R-code in the same way the client will react the same way.

If you were truly hitting a 32-bit limit with the AppServer process, it would die and generate a core file. Use the following kill command to generate a protrace file (it does not actually kill the process):

kill -SIGUSR1 <PID>
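For a whole pool of agents, the same signal can be sent in a loop (a sketch; it assumes the agent executable is named _proapsv, as in a default install):

```shell
#!/bin/bash
# Ask every running AppServer agent for a protrace dump.
# SIGUSR1 is trapped by the agent; it writes protrace.<pid> and keeps running.
for pid in $(pgrep _proapsv); do
    kill -USR1 "$pid"
done
# The protrace.<pid> files appear in each agent's working directory.
```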

Use these KBs to help you determine if/where a leak is actually happening...

knowledgebase.progress.com/.../P133306

knowledgebase.progress.com/.../P124514

This thread is closed