Intro:
Evolve Your Application with OpenEdge 12.1
www.progress.com/.../evolve-your-application-with-openedge-12-1
Limit on the Number of Unique Shared Sequences Increased
The maximum number of unique shared sequences in an OpenEdge database has been increased to 32K regardless of block size. The maximum number of multi-tenant sequences is now 2000.
Each sequence uses 8 bytes in a sequence block. It means that a db can now have up to 256 sequence blocks (in a 1K db): 32,000 sequences × 8 bytes = 256,000 bytes, at roughly 1,000 usable bytes per block. All sequence blocks belong to a chain called NOCHN (bk_frchn 127). The first block on the chain is still reported by mb_seqblk in the Master Block.
At db startup the broker does NOT load the sequence blocks into the buffer pool. The promon Activity: Buffer Cache screen shows two sequence blocks, but the screen lies (so can we trust the values of the other fields added in 11.7?).
The new message numbers in 12.1 range from 19426 to 19512, but only two of them are related to sequences:
Cannot add any more multi-tenant sequences to the database. (19476)
In fact, a session silently crashed after loading 2431 sequences (1K db) or 2555 sequences (8K db).
Loading 2300 sequences was successful, and there were no system records with sizes approaching the 32K limit. Is it really dangerous to exceed the old limit of 2000 sequences? Why did I not get error 19476?
Please increase -B to fix more sequence buffers in memory for enhanced sequence performance. (19493)
What does enhanced sequence performance mean?
Under which conditions can we get error 19493?
Thanks George,
I'm interested in the answers you find to this question ... we are rapidly approaching the 2000 sequence limit in 11.7, and until 12.1 we were looking at redoing how we use sequences.
Now we have a second option to try, but it would be great to learn early what the practical limit of sequences is.
Thanks
Mark.
Sorry you're having problems using this new feature of OpenEdge 12.1.
Not sure what you are running into but more information is needed to better understand it. I also suggest you engage Technical Support as this feature should be working as described in the documentation.
I have had little problem adding 32,000 sequences on 1K, 4K, and 8K databases from a Linux machine running OpenEdge 12.1 via a dictionary .df load.
Because this was all being done in one transaction cached in the client, my first attempt failed with the following error:
-s stack exceeded
Running with -s 100000 solved the problem for me. It could probably be smaller, but I don't have the energy to find the minimum value.
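In case it helps anyone reproduce this, here is a minimal sketch of such a load under a bigger stack. The db name mydb, the wrapper name loadseq.p, and the .df name all_seqs.df are placeholders I made up; prodict/load_df.p taking the .df file as an input parameter is the same call used in the test procedure later in this thread:

/* loadseq.p - load one big sequence-only .df in a single session */
RUN prodict/load_df.p (INPUT "all_seqs.df").

/* run it with an enlarged stack, e.g. a character client in batch mode:
   _progres mydb -b -s 100000 -p loadseq.p
*/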
"At db startup a broker does NOT load the sequence blocks into buffer pool."
The first 2 sequence blocks are loaded into the buffer pool at database startup. The remaining are loaded on demand.
"Promon/Activity: Buffer Cache screen shows two sequence blocks but the screen lies (so can we trust the values of the other fields added in 11.7?)."
I'm not sure how to address this statement. The values listed there should be correct at the time the buffer pool was scanned. They will change based on other activity occurring on the database. I believe they are correct and you can trust the fields added in 11.7. In OpenEdge 12.1, if you were to access a sequence out of each sequence block after database startup, you should see that the expected # of sequence blocks is reported.
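A rough sketch of that check, assuming sequence names like SEQ1 ... SEQ32000 (as generated by the test procedure later in this thread) and roughly 125 sequences per sequence block in a 1K db; DYNAMIC-CURRENT-VALUE reads a sequence without incrementing it:

DEFINE VARIABLE iSeq AS INTEGER NO-UNDO.
DEFINE VARIABLE iVal AS INT64   NO-UNDO.

/* read one sequence out of (roughly) every sequence block so that
   all of them get paged into the buffer pool */
DO iSeq = 1 TO 32000 BY 125:
   iVal = DYNAMIC-CURRENT-VALUE("SEQ" + STRING(iSeq), LDBNAME(1)).
END.

After that, the promon screen should report the expected number of sequence blocks, as described above.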
"The load of 2300 sequences was successful and there were no system records with the sizes approaching 32K limit."
I don't see a connection between max record size and the # of sequences.
"Is it really dangerous to exceed the limit in 2000 sequences?"
No it is not. As mentioned earlier in this post, it is expected that this new feature is working.
"Why I did not get the error 19476?"
Presumably because your attempt failed for some other reason than exceeding the maximum # of sequences. Maybe you encountered the "-s" client error as I did.
"Please increase -B to fix more sequence buffers in memory for enhanced sequence performance. (19493)
"What the enhanced sequence performance means?"
"Under which conditions we can get the error 19493?"
This enhancement is expected to be released with OpenEdge 12.2. In order to improve concurrent access to sequences, an in-memory db technique will be used to access the data for sequence read requests. In support of that, the sequence blocks must be "anchored" in the buffer pool, that is, prevented from being paged out of the buffer pool. In the case of ridiculously small buffer pools, or deployments with many tenants and tenant-specific sequences and relatively small buffer pools, this message will appear, stating that the performance enhancement will not be used to access sequence data.
One other clarification, with 32,000 sequences:
1K db: Sequence Blocks 256 (why do we still support 1K dbs?)
2K db: Sequence Blocks 128
4K db: Sequence Blocks 64
8K db: Sequence Blocks 32
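Those counts line up with the 8-bytes-per-sequence figure from the first post if one assumes roughly 125 sequence slots per 1 KB of block size (an inference from the numbers above, not a documented figure). A quick back-of-the-envelope check in ABL:

DEFINE VARIABLE iBlockSize AS INTEGER NO-UNDO EXTENT 4 INITIAL [1024, 2048, 4096, 8192].
DEFINE VARIABLE iSeqPerBlk AS INTEGER NO-UNDO.
DEFINE VARIABLE i          AS INTEGER NO-UNDO.

DO i = 1 TO 4:
   iSeqPerBlk = 125 * (iBlockSize[i] / 1024).
   /* 1K -> 256 blocks, 2K -> 128, 4K -> 64, 8K -> 32 */
   DISPLAY iBlockSize[i]      LABEL "Block size"
           iSeqPerBlk         LABEL "Seq per block"
           32000 / iSeqPerBlk LABEL "Blocks for 32,000"
       WITH FRAME fBlk DOWN.
   DOWN WITH FRAME fBlk.
END.

If the 12.2 anchoring described above is done per sequence block, even the worst case here (256 blocks in a 1K db) is a tiny fraction of any realistic -B.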
To add to Rich's response:
Are you trying to add multi-tenant sequences or shared sequences? The maximum of 32,000 sequences is supported only for shared sequences in 12.1. This will be supported for multi-tenant sequences in 12.2.
Multi-tenant sequences can exist only in the first two sequence blocks in 12.1. This means there can be a maximum of 2000 MT sequences for 8K blocks. This is similar to the behaviour in 12.0 and previous versions. You will get message 19476 if you try to add MT sequences once the first two sequence blocks are full.
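(A sanity check based on the 8-bytes-per-sequence figure earlier in the thread, not on the documentation: two 8K sequence blocks hold about 2 × 1,000 = 2,000 entries, which matches the 2,000 MT limit.)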
Message 19493 is for sequence performance work in 12.2. You will not encounter this message in 12.1.
Please increase -B to fix more sequence buffers in memory for enhanced sequence performance. (19493)
Tests confirmed that in V12.1 the maximum number of sequences is 32000.
In V12.0 or earlier it’s 250 * (Block Size / 1024).
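For example, that formula gives 250 sequences for a 1K db and 2,000 for an 8K db, which corresponds to the first two sequence blocks mentioned above.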
An attempt to load an additional sequence causes prodict/_lodsddl.p to issue the message:
"Maximum number of sequences has been reached."
A direct attempt in ABL (CREATE _Sequence. ASSIGN _Sequence...) issues error # 9335:
Cannot add any more sequences to the database. (9335)
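For completeness, a minimal sketch of that direct-create test; the sequence name and values are placeholders, the _Sequence fields shown are the standard metaschema ones, and exactly which statement raises 9335 once the limit is hit is an assumption:

DO TRANSACTION:
   CREATE _Sequence.
   ASSIGN _Sequence._Seq-Name = "seqOverLimit"
          _Sequence._Seq-Init = 1
          _Sequence._Seq-Incr = 1
          _Sequence._Seq-Min  = 1
          _Sequence._Cycle-Ok = NO.
   /* expected once the maximum is reached:
      Cannot add any more sequences to the database. (9335) */
END.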
> Running with -s 100000 solved the problem for me. Probably could be smaller but I don't have the energy to find the minimum value.
Loading only one sequence vs. 2000 sequences at once with the -y option:
Memory usage summary:    Current    Max Used    Limit (Bytes)
Stack usage (-s):            112       16064           131072    (1 sequence)
Stack usage (-s):            112       68120           131072    (2000 sequences)
The Max Used size increased by 52,056 bytes, or about 26 bytes per sequence.
Loading 4000 sequences at once did result in the error:
WARNING: -s stack exceeded. Raising STOP condition and attempting to write stack trace to procore file. Consider increasing -s startup parameter.
Memory usage summary:    Current    Max Used    Limit (Bytes)
Stack usage (-s):            112      132896           131072
The Max Used size increased by another 64,776 bytes (32+ bytes per sequence).
Despite the error, all 4000 sequences were successfully loaded.
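If the growth stays roughly linear (an extrapolation, not a measured value), loading all 32,000 sequences in one transaction would need on the order of 32,000 × 32 ≈ 1 MB of stack on top of the baseline, so an -s of a few thousand (the value is in KB) should already be comfortable; the -s 100000 used above is a generous upper bound.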
In my tests yesterday there were no errors; the session just crashed, leaving a procore file:
*** ABL Call Stack ***
Last action: BLOCK HEADER (2)
1111: prodict/dump/_load_df.p (prodict/dump/_load_df.r)
  57: prodict/load_df.p (prodict/load_df.r)
If I load the sequences in chunks of 2000, the time to load a chunk grows with the number of previously loaded sequences:
Total SeqNum   Time to load 2000 seq (ms)
------------   --------------------------
       2,000                        4,262
       4,000                       10,371
       6,000                       15,358
       8,000                       20,400
      10,000                       25,057
      12,000                       29,664
      14,000                       34,207
      16,000                       38,693
      18,000                       43,399
      20,000                       47,910
      22,000                       52,554
      24,000                       56,796
      26,000                       61,376
      28,000                       66,216
      30,000                       70,517
      32,000                       75,371
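Reading those numbers: each additional 2,000 already-loaded sequences adds roughly 4.5-5 seconds to the next chunk, so the total load time grows roughly quadratically; summing the column gives about 650 seconds (~11 minutes) for all 32,000 sequences.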
Test procedure:
/* Generate sequence definitions in chunks and load each chunk
   with prodict/load_df.p, logging the load time per chunk. */
DEFINE VARIABLE vDfFile   AS CHARACTER NO-UNDO INITIAL "_seqdefs.df".
DEFINE VARIABLE vMyLog    AS CHARACTER NO-UNDO INITIAL "seqload.log".
DEFINE VARIABLE vMaxSeq   AS INTEGER   NO-UNDO INITIAL 34000.
DEFINE VARIABLE vSeqChunk AS INTEGER   NO-UNDO INITIAL 2000.
DEFINE VARIABLE vSeqNum   AS INTEGER   NO-UNDO.
DEFINE VARIABLE vTime1    AS INTEGER   NO-UNDO.
DEFINE VARIABLE vTime2    AS INTEGER   NO-UNDO.

REPEAT vSeqNum = 1 TO vMaxSeq:

  /* Start a new .df file at the beginning of each chunk. */
  IF vSeqNum MOD vSeqChunk EQ 1 THEN
    OUTPUT TO VALUE(vDfFile).

  PUT UNFORMATTED
    "ADD SEQUENCE ~"SEQ" vSeqNum "~"" SKIP
    " INITIAL " vSeqNum SKIP
    " INCREMENT 5" SKIP
    " CYCLE-ON-LIMIT no" SKIP
    " MIN-VAL 1" SKIP(1).

  IF vSeqNum MOD vSeqChunk NE 0 THEN
    NEXT.

  /* Write the .df trailer, then load the chunk and log the elapsed time. */
  PUT UNFORMATTED
    "." SKIP
    "PSC" SKIP
    "cpstream=undefined" SKIP
    "." SKIP
    "0000184682" SKIP.
  OUTPUT CLOSE.

  ASSIGN vTime1 = ETIME.
  RUN prodict/load_df.p(INPUT vDfFile).
  ASSIGN vTime2 = ETIME.

  OUTPUT TO VALUE(vMyLog) APPEND.
  PUT vSeqNum (vTime2 - vTime1) SKIP.
  OUTPUT CLOSE.
END.
Dbanalys:
256 sequence block(s) found in the database.
> why do we still support 1K dbs?
For the tests! Tests should be extreme. :-)
“A kilogram of green is greener than half a kilo.”
“One square centimeter of blue is not as blue as a square meter of blue."
> I don't see a connection between max record size and the # of sequences.
I expected the limit for sequences to be similar to the one for the number of fields per table, which is limited by the size of _File._Field-map. I was wrong: 32,000 sequences looks like an artificial limit.
> I'm not sure how to address this statement. The values listed there should be correct at the time the buffer pool was scanned.
Mea culpa. In my tests yesterday I filled a small buffer pool with the sequence blocks and then checked the "Activity: Buffer Cache" screen in promon. But I forgot to refresh the screen with the 'L' or 'U' action, so I saw the values as they were when I entered the R&D level. The "* Blocks" fields are status-style fields, but they follow rules other than those of the fields on the Status screens.
> The first 2 sequence blocks are loaded into the buffer pool at database startup. The remaining are loaded on demand.
Indeed:
proutil -C dprpr:
0000  bk_dbkey: 0x00000060         96
      bk_type:  0x06                6  (Sequence Block)
      bk_frchn: 0x7f              127  (NOCHN)
      bk_incr:  0x0001              1
      bk_nextf: 0x00000f40       3904
promon/Status: Cache Entries
 Num  DBKEY  Area  Hash  T  S  Usect  Flags  Updctr  Lsn  Chkpnt  Lru  Skips
 137     96     6    21  S        0      F     126    0       0    0     99
 138   3904     6    18  S        0      F     131    0       0    0     99
So the blocks referenced by mb_seqblk (dbkey 96) and its bk_nextf (dbkey 3904) are loaded into the buffer pool at startup.
Once a block becomes a sequence block, it will be a sequence block forever, even if we delete all the sequences. The chain will not change. I guess that is the reason why the sequence blocks did not get their own chain number: the likelihood of chain corruption is next to zero.