Noticed a curious thing.
1.) Set -B of the database server to a very high (in 32bit times) value, like 115000 with blocksize 8192.
2.) Connect with _progres (mpro)
3.) Run a self-service .p which connects to the same db with ABL code as self-service again (CONNECT)
Result: Shared memory error
In my experience it's impossible to address more than 700/750 MB for shared memory from the client.
32-bit executables have a total address space of 4 GB, and that only under ideal conditions. That is the absolute maximum space for /all/ code and data accessed or used by the process. Depending on the operating system, the amount available to an application may be considerably less. Windows takes half (2 GB) for itself.
Of whatever space is available to an application, the data portion has to include stacks, heap (dynamically allocated process-private memory), shared memory (e.g. database segments), memory-mapped files (e.g. shared procedure libraries), file handles, file buffers, r-code, sort space, temp-table buffers, etc., etc., etc.
The code portion has to cover all the C code of the 4GL runtime, all .dll's (C runtime library, math library, networking, .NET), etc., etc., etc.
Address space is a finite resource and is quite limited in 32-bit executables.
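To put a number on it, here is a back-of-the-envelope sketch (my own arithmetic, counting only the -B buffer pool and ignoring the lock table and other shared structures that come on top of it):

```python
# Rough footprint of the -B buffer pool alone.
BLOCK_SIZE = 8192      # database blocksize in bytes
BUFFERS = 115000       # the -B setting from the original post

pool_bytes = BUFFERS * BLOCK_SIZE
pool_mib = pool_bytes / (1024 * 1024)

print(f"-B {BUFFERS} x {BLOCK_SIZE} = {pool_mib:.0f} MiB")
# roughly 900 MiB of shared memory that every self-service
# client must map into its own 32-bit address space
```

Nearly 900 MiB for the buffer pool alone already exceeds the 700-750 MB that experience says a 32-bit client can map.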
In theory, there is no difference between theory and practice.
In practice, there is.
Still no hint why this fails:
_progres -db sports (o.k.)
CONNECT sports -ld sports1 (fails) <- I connect the same database a second time
Why does an internal CONNECT get a shared memory error when connecting a database that is already connected?
Each connection must map the database shared memory into the client's memory space.
But when the database shared memory is more than the remaining addressable space in the client, this mapping cannot be done and the connect fails.
The client does not know that this is a second connection to the same db, otherwise it would tell you that it is already connected. The only way this can happen is if you give the second connection an alternative logical db name. Since the client now sees this as a "different" db, it will need to map the shared memory again.
Note that when you do this, you open yourself up for other horrors, like dead-locking yourself, as the db engine will also see the two connections as two different clients.
You can prove that with the following code.
Connect to sports2000 with -ld SportsA. When you run start.p, it will:
1. [Start.p] Create the Alias SportsB. (You will see only one connection in promon)
2. [Start.p] Run lockMySelf.p.
2.a [lockMySelf.p] Lock the customer in one buffer. Promon will show one share lock.
2.b [lockMySelf.p] Run the internal procedure.
2.c [lockMySelf.p] Read the same customer in another buffer. Promon still shows only 1 lock.
2.d [lockMySelf.p] Upgrade the lock and update the record.
2.e [lockMySelf.p] The main block displays the updated value using the first buffer.
3. [Start.p] Delete the Alias SportsB.
4. [Start.p] Connect to sports2000 with -ld SportsB (You will now see two connections in promon)
5. [Start.p] Run lockMySelf.p.
5.a [lockMySelf.p] Lock the customer in one buffer. Promon will show one share lock from the first connection.
5.b [lockMySelf.p] Run the internal procedure.
5.c [lockMySelf.p] Read the same customer in another buffer. Promon shows 2 share locks on the same record, one from each connection.
5.d [lockMySelf.p] Upgrade the lock and update the record. Promon shows that the second connection queues for an exclusive lock.
At this stage the session locks itself up indefinitely.
/* lockMySelf.p */
FIND FIRST SportsA.customer. /* Take a share lock */
RUN UpdateCust.
DISP SportsA.customer.name.

PROCEDURE UpdateCust:
  FIND FIRST SportsB.Customer.
  ASSIGN SportsB.Customer.Name = SportsB.Customer.Name + " Test".
END PROCEDURE.
/* Start.p */
CREATE ALIAS SportsB FOR DATABASE SportsA.
RUN lockMySelf.p.
DELETE ALIAS SportsB.
CONNECT -db Sports2000 -ld SportsB.
RUN lockMySelf.p.
The intention of that is generic code which checks the last backup time of the database in the VST tables.
And one of the configured databases is the current database itself.
This failed after increasing -B from 20000 to 115000.
Thanks for raising the lock problem.
So shared memory isn't really shared when the client wants to have another one for the same db.
Since you map the shared memory twice into the client's address space, the client sees it as different.
I can however not foresee why you ever need to connect the same database twice. If you need different records, in the same scope, you use buffers. If you need specific database names, you use database aliases.
The db engine will see the two connections as two different clients and it will apply the same concurrency rules as with any two clients trying to act on the same record.
"A distributed system is one in which the failure of a computer you didn't
even know existed can render your own computer unusable."
-- Leslie Lamport
> On Mar 31, 2016, at 4:59 AM, Stefan Marquardt wrote:
> So shared memory isn't really shared when the client wants to have another one for the same db.
the part of the client's address space into which the memory is mapped is not shared.
this case could have been optimized. but it has not been worth it till now. since the time we first did multiple database connections in v6.2A, this has not come up more than one or two times.
O.k. connecting the same database isn't the point.
Running 2 databases with OE 11.5 32 bit, both with -B 115000 and blocksize 8192.
As a result I can't connect both database because -B from each one is too high to connect both?
prowin32 -db db1 -db db2:
unable to attach shared memory
I need to use the slower -S connection as a solution?
You are correct.
The shared memory of all shared memory based db connections is mapped into the client's memory space, so the total of what the client needs for itself plus all the shared memory must be within the limits of what the process can address.
Using -S will solve this, but slow down the connection. However, you can mix the two.
So if the traffic between the client and the two databases differs, I would make a shared memory connection to the one where most of the traffic is expected and use -S for the other one.
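As a sketch of such a mixed connection (database names, host, and port here are placeholders, not from the thread): connection parameters apply to the -db that precedes them, so the first database is attached via shared memory and the second over TCP:

```shell
# busydb: self-service / shared-memory connection (local, no -S)
# otherdb: client/server connection via the broker on dbhost, port 2501
prowin32 -db busydb -db otherdb -H dbhost -S 2501
```

Only busydb's shared memory then has to fit into the client's address space; otherdb costs the client only a socket and some buffers.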
> On Apr 11, 2016, at 2:48 AM, Stefan Marquardt wrote:
> Running 2 databases with OE 11.5 32 bit, both with -B 115000 and blocksize 8192.
you have a total of 230000 database buffers of size 8192, plus the other data structures in shared memory, and then the application, the client runtime, and its stuff.
the shared memory is about 2 GB all by itself. If you want to know the exact size, you can look with promon at the shared memory info.
Your total address space for a 32-bit process is 2 GB on Windows but you cannot use all of it for shared memory. No way it is going to fit.
You have the following choices:
0) get a 64-bit system and 64-bit OpenEdge executables and more memory
1) use TCP/IP for connecting to one or both databases.
2) reduce the -B setting until everything fits.
3) don't connect to both databases at the same time
Of these, I can't say which would suit you best.