Hi, can anyone help me find a document about the memory architecture of the Progress database?
I want to understand memory management for the Progress OpenEdge database.
Are you looking for info on how the data is laid out on disk, or for what happens in memory? Have you looked at the documentation? Is there something specific you are trying to understand or fix, or is this just general curiosity?
Hi
This is just general curiosity, but I didn't find any clear document that explains how memory is managed in the Progress database (e.g. -B, -L, AI writes, BI).
The reason I am looking for details is that I am having a memory issue in an OE database. I allocated -B equivalent to 600 MB for a DB and started 10 servers, but each server process appears to be consuming 600 MB of memory.
That's why I was looking for documents that clearly explain how to allocate memory for the DB.
I'm not an expert with the DB (you should try to get Gus or Tom Bascom involved in this thread), but you haven't said which OS you are using.
The majority of the DB memory is shared memory, so multiple processes (your 10 servers plus the one broker) access the same memory. On some OSes that memory is then shown as if it were allocated separately per process.
In November Tom Bascom will be giving a presentation about OS tools for monitoring Progress databases at www.pugchallenge.eu - maybe you can attend the conference and learn more about it.
Sorry, I missed the OS details.
It's Red Hat Enterprise Linux 5.3.
Is there an architecture diagram for the Progress database that explains how memory is used? I can find plenty for Oracle but not for OE.
I will attend the presentation and try to get more details.
From my experience:
top utility:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
841 root 16 0 1733m 120m 116m S 0.3 0.8 187:16.15 _progres
13904 root 16 0 945m 858m 854m S 0.3 5.4 62:01.22 _progres
The RES column shows the total memory used by the process.
The SHR column shows the shared memory used by the process (that is the DB cache).
RES minus SHR = private memory used by the process.
In the example above it is only 4 MB, not 858 MB.
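If you want to capture this non-interactively, a batch run of top works as well (a generic Linux command, nothing Progress-specific):
    top -b -n 1 | grep _progres
Then subtract the SHR column from the RES column for each process to see its private memory.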
Maxim.
Cpu(s): 6.1% us, 11.2% sy, 0.0% ni, 78.9% id, 3.8% wa, 0.0% hi, 0.0% si
Mem: 8309192k total, 8286612k used, 22580k free, 17896k buffers
Swap: 16386292k total, 106592k used, 16279700k free, 7576704k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
23467 root 16 0 682m 668m 667m S 61 8.2 8981:02 _mprosrv
29750 mfgeb 16 0 17228 9544 3700 R 60 0.1 8176:56 _progres
25178 akharas 16 0 20168 12m 4264 S 7 0.2 1:02.82 _progres
17908 root 17 0 682m 668m 667m S 5 8.2 464:23.35 _mprosrv
31027 akumargo 16 0 20448 13m 4300 S 1 0.2 0:03.45 _progres
23456 root 16 0 682m 662m 661m S 0 8.2 639:36.73 _mprosrv
Above is the output of the top command. I can see three _mprosrv processes, each showing around 682 MB VIRT and 662-668 MB RES.
Does each process really reserve that much memory? Is it possible to reduce the amount? All the _mprosrv processes belong to a single database, and the server is now running with very little free memory.
Server processes are attached to the DB cache, so they all map the shared memory defined by -B.
But the real consumption is 668 - 667 = 1 MB for each process, plus the 667 MB that all the _mprosrv processes use together as shared memory.
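If you want to confirm this at the OS level, you can list the shared memory segments with ipcs (a standard Linux command):
    ipcs -m
The bytes column shows the size of each segment and the nattch column shows how many processes are attached to it; for the database segments nattch should roughly match the number of brokers, servers and self-service clients.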
Thank you, that really helps me.
One more thing: in the top output in my previous post, free memory is only 22 MB. How do I find which processes are actually taking up the most memory?
Ordinary client processes use a lot of temp-tables and can consume a lot of memory.
Check RES - SHR for each client process and identify which ones use more than the others, e.g. with the commands below.
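One rough way to get an ordered list (again just generic Linux commands):
    ps -eo pid,user,rss,comm --sort=-rss | grep _progres | head
RSS here still includes any attached shared memory, so for self-service clients compare it with the SHR column in top, as above, to get the private part.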
Maxim.
All modern operating systems are designed to deliberately maintain as little "free" memory as possible. Free memory is memory that is being wasted so the operating system tries to find a use for it. In the absence of any other need, it gets used for filesystem caches. Properly functioning operating systems give priority to programs and shrink the filesystem cache when programs need memory.
Most operating systems have a way to override this behaviour by letting you configure an upper limit on the size of the filesystem cache.
Correctly analysing the memory used by a process is tricky. Most of the tools, like top, either lie or make too many simplifying assumptions.
As long as there is not an overly large amount of paging going on, you don't need to worry about it. And analysing paging is tricky too because most operating systems use the pager for file I/O and reading and writing files can show up as paging in the tools.
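For a quick sanity check on Linux you can watch the si/so (swap in/out) columns in vmstat:
    vmstat 5
If si and so stay at or near zero most of the time, the box is not under real memory pressure, even if "free" looks very low.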
For multi-user mode, when you start a database, one or more shared memory segments are created. The data structures (buffers, transaction table, connection table, lock table, transaction log buffers, server table, etc) that are housed in shared memory are shared by all processes that have direct database connections (i.e. not clients accessing the database over a network). There is one copy of shared memory, used by all the processes that need it.
The size of the shared memory area is determined by the values of a lot of the database startup configuration parameters and the block sizes of the database. The largest contributor is usually the buffer pool, whose size is determined by the value of -B. You can use promon to see the number, size, amount used, amount free, of shared memory segments.
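As a rough back-of-the-envelope example (assuming an 8 KB database block size; yours may differ): -B 76800 buffers x 8 KB per buffer is about 600 MB of buffer pool, and that 600 MB exists exactly once in shared memory, no matter how many servers or self-service clients attach to it.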
Take a look at this document: http://communities.progress.com/pcom/docs/DOC-14078
If you search for database performance tuning on Communities, you will find quite a lot of stuff.