SYSTEM ERROR: Attempt to define too many indexes for area 6

Posted by MBeynon on 08-Jan-2015 03:00


We're getting this error in our Appserver Log file.

According to the Knowledgebase, one cause can be:

"Static TEMP-TABLEs are defined inside non-persistent procedures that are called synchronously from within a loop that is contained inside an active TRANSACTION.  Until the transaction expires, these temp-tables cannot be removed from the DBI file."

However, we are using STATIC temp-table (tt) definitions in class files with STATIC methods, where the MODIFIER is STATIC in the definition, like so:


CLASS MyClass:

    DEFINE STATIC TEMP-TABLE ttComponent    /* other fields omitted */
        FIELD ComponentCode AS CHARACTER
        INDEX ComponentCode IS PRIMARY ComponentCode.

    /* ... STATIC methods that use ttComponent ... */

END CLASS.


I cannot find anything on the web relating to how we are using STATIC tt's within STATIC methods of a .cls file.

N.B. Obviously we are using transactions within our app.

Does anyone have any thoughts? Does the above cause stated by Progress apply here?



P.S. I should probably add that Progress also say:

"- Static TEMP-TABLEs are defined inside persistent procedures or classes that are instanced but not cleaned up properly. As with the previous case, over time there can be hundreds of persistent procedures/class instances in memory, with thousands of TEMP-TABLE definitions."

In our case, as I said, we are not instantiating classes but rather calling static methods on them.

All Replies

Posted by James Palmer on 08-Jan-2015 03:36

Hi Mark, which Progress version are you guys on these days?

Posted by MBeynon on 08-Jan-2015 03:38

We're on 10.2A.02 for this particular product James.

Posted by James Palmer on 08-Jan-2015 03:48

I know this isn't much help but the garbage collection has been significantly enhanced, even in 10.2B, but I also recognise you guys have some pretty significant barriers to upgrading. Hopefully someone else can chip in with a workaround.

Posted by Torben on 08-Jan-2015 05:05


We have seen the same error with dynamic temp-tables or datasets as OUTPUT parameters from AppServer persistent procedures, after thousands of calls.

We hit the maximum of 32,000 indexes per database for the session.

Solved by making sure memory is cleaned up in a FINALLY block.
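For anyone hitting the same thing, the pattern Torben describes looks roughly like this (a sketch only; the handle, field and table names are illustrative, not taken from any actual code in this thread):

```abl
/* Sketch: release a dynamic temp-table even when processing fails.
   All names here are illustrative. */
DEFINE VARIABLE httOutput AS HANDLE NO-UNDO.

DO ON ERROR UNDO, THROW:

    CREATE TEMP-TABLE httOutput.
    httOutput:ADD-NEW-FIELD("ComponentCode", "CHARACTER").
    httOutput:TEMP-TABLE-PREPARE("ttOutput").

    /* ... populate / return the table ... */

    FINALLY:
        /* Without this DELETE OBJECT, each call leaves one more
           temp-table (and its indexes) behind in the DBI file. */
        IF VALID-HANDLE(httOutput) THEN
            DELETE OBJECT httOutput.
    END FINALLY.
END.
```

The FINALLY block runs whether the enclosing block completes or throws, so the handle is cleaned up on every call path.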



Posted by Frank Meulblok on 09-Jan-2015 03:11

Any chance of testing this under OpenEdge 11.1 or later ?

Then you'd be able to use the Temp-Tables logging type to expose where and when the temp-tables actually get created in the temp-table database (which is the point where the system error would be triggered).
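For reference, on 11.x that logging can be enabled along these lines (a sketch; the log file name and logging level shown are illustrative):

```
# Client session: log temp-table creation/deletion events
prowin32 -p start.p -clientlog ttrace.log -logentrytypes Temp-Tables -logginglevel 2

# AppServer agents: in ubroker.properties, under the broker's section
srvrLogEntryTypes=Temp-Tables
```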

Posted by MBeynon on 09-Jan-2015 03:16

We will be moving to 11.x in the next few months, so there will be a possibility of trying this, but hopefully we'll have a solution before then!


Posted by Fernando Souza on 09-Jan-2015 07:58

To answer your question, the case you stated from the KB does not apply to a temp-table defined in a class with the static modifier. The KB is referring to a statically defined temp-table (i.e. non-dynamic temp-table).

The error indicates that there are too many temp-table instances in memory. For a temp-table defined as static in a class, there is only one instance of that temp-table for the life of the session, so that is not the case.

Now, do you create dynamic temp-tables in your application? Or maybe persistent procedures with temp-tables that are not being deleted? The most common causes of this error are leaking dynamic temp-tables (or table-handle parameters when not passed BY-REFERENCE), persistent procedures or object references with temp-tables defined in them, and lastly transactions that may delay the deletion of undo temp-tables.
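The first leak Fernando mentions typically looks like this on the caller's side (a sketch; procedure and handle names are illustrative): the callee CREATEs a temp-table and hands the caller its handle, and the caller becomes responsible for deleting it.

```abl
/* Sketch: TABLE-HANDLE output parameter that the caller must delete.
   Names are illustrative. */
DEFINE VARIABLE httData AS HANDLE NO-UNDO.

RUN getData.p (OUTPUT TABLE-HANDLE httData).  /* callee CREATEs the temp-table */

/* ... consume the returned table ... */

/* The caller now owns the handle; without this, the temp-table and
   its indexes stay in the DBI file until the session ends. */
IF VALID-HANDLE(httData) THEN
    DELETE OBJECT httData.
```

Repeated over thousands of AppServer calls, each missed DELETE OBJECT adds another set of indexes towards the per-area limit.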

The DynObjects logging provides information on objects that are dynamically created and deleted. It may help identify which dynamic objects (such as dynamic temp-tables, procedures or OOABL objects) are still in memory when the error occurs.

Posted by peggycole on 12-Jan-2015 02:39

A few weeks ago we had the same problem. It was caused by static temp-tables defined without NO-UNDO in a procedure called 1000 and more times within one transaction... Adding NO-UNDO to each DEFINE TEMP-TABLE was our solution.
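In other words, the fix was just the one keyword (a sketch; table and field names are illustrative):

```abl
/* Without NO-UNDO, undo information for records written inside an
   active transaction is kept in the DBI file until the transaction
   ends, which is what the KB article describes. */
DEFINE TEMP-TABLE ttComponent NO-UNDO
    FIELD ComponentCode AS CHARACTER
    INDEX ComponentCode IS PRIMARY ComponentCode.
```

Temp-tables rarely need undo behaviour, so NO-UNDO is a good default on every DEFINE TEMP-TABLE.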



This thread is closed