OpenEdge Hierarchical Queries

Posted by rrajan on 26-Aug-2013 11:25

Hi Team

I have a requirement where we need to query the same table recursively to get all the records with their parents, since one of the columns is the parent id of another row in the same table. It is similar to a tree structure where I might need all the entries above and below a node. In Oracle we have the CONNECT BY and PRIOR options available; do we have built-in options in Progress for such requirements?



All Replies

Posted by rohit.ramak on 26-Aug-2013 23:26

AFAIK, you will need to build your own logic to recursively loop through the table, linking the parent to the child....
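
The hand-rolled recursion suggested above can be sketched language-neutrally (Python here; in ABL this would be a temp-table plus an internal procedure). The table and field names are made up for illustration:

```python
# Walk a node_id / parent_id self-referencing table both downward
# (descendants) and upward (ancestors). Sample data is illustrative.
rows = [
    # (node_id, parent_id); parent_id None = root
    (1, None), (2, 1), (3, 1), (4, 2), (5, 4),
]

children = {}
parent_of = {}
for node_id, parent_id in rows:
    parent_of[node_id] = parent_id
    children.setdefault(parent_id, []).append(node_id)

def descendants(node_id):
    """All nodes below node_id, depth-first."""
    result = []
    for child in children.get(node_id, []):
        result.append(child)
        result.extend(descendants(child))
    return result

def ancestors(node_id):
    """All nodes above node_id, nearest first."""
    result = []
    while parent_of.get(node_id) is not None:
        node_id = parent_of[node_id]
        result.append(node_id)
    return result

print(descendants(2))  # [4, 5]
print(ancestors(5))    # [4, 2, 1]
```

Note that each level of recursion corresponds to one more query against the table, which is where the cost comes from on deep trees.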

Posted by Roland de Pijper on 27-Aug-2013 07:08

Hi Ramalingam,

Not by default, and reading all child records in a tree can have a serious performance hit. Better to look at a different pattern, e.g. Nested Sets.
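
The nested-set idea mentioned above can be sketched briefly: each node is given a (left, right) interval from a depth-first walk, after which "all descendants of X" is a single range scan instead of a recursive read. The sample tree and names are made up for illustration:

```python
from itertools import count

# node -> list of child ids (illustrative sample tree: 1 -> 2,3; 2 -> 4)
children = {1: [2, 3], 2: [4]}
bounds = {}
counter = count(1)

def number(node):
    """Depth-first walk assigning (left, right) bounds to each node."""
    left = next(counter)
    for child in children.get(node, []):
        number(child)
    bounds[node] = (left, next(counter))

number(1)
# bounds == {4: (3, 4), 2: (2, 5), 3: (6, 7), 1: (1, 8)}

def subtree(node):
    """All descendants: nodes whose interval nests inside this one."""
    left, right = bounds[node]
    return sorted(n for n, (l, r) in bounds.items() if left < l and r < right)

print(subtree(1))  # [2, 3, 4]
print(subtree(2))  # [4]
```

The trade-off is on the write side: inserting or moving a node forces the bounds of many other rows to be renumbered.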



Posted by Patrick Tingen on 27-Aug-2013 07:42

Another option is to keep a list of the IDs of all underlying records in the record itself. This may require a CLOB field, plus triggers to automatically update the field on create, delete or movement of nodes.

It all depends on the nature of your data. If you have lots of changes on the table, a pattern like nested sets or my suggestion of saving underlying IDs will quickly become a bottleneck due to the sheer number of updates needed to stay in sync.
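
A small sketch of the stored-descendant-list idea and its update cost: each record keeps the ids of everything underneath it (the CLOB in the suggestion above, shown as a set here), and moving one node means rewriting that list on every old and every new ancestor. All names and data are illustrative:

```python
# parent links and the denormalized descendant lists kept per record
parent = {2: 1, 3: 1, 4: 2}
desc = {1: {2, 3, 4}, 2: {4}, 3: set(), 4: set()}

def ancestors(n):
    """Yield each ancestor of n, nearest first."""
    while n in parent:
        n = parent[n]
        yield n

def move(node, new_parent):
    """Re-parent a node, keeping every ancestor's list in sync."""
    moved = {node} | desc[node]
    for a in ancestors(node):        # old ancestors lose the subtree
        desc[a] -= moved
    parent[node] = new_parent
    for a in ancestors(node):        # new ancestors gain it
        desc[a] |= moved

move(4, 3)
# desc[2] is now empty, desc[3] == {4}, desc[1] unchanged overall
```

One move touched three other records here; on a deep, busy tree that multiplication is exactly the bottleneck described above.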

Posted by Thomas Mercer-Hursh on 27-Aug-2013 13:23

Do you mean in ABL or SQL?

Posted by rrajan on 27-Aug-2013 14:09

My DB has both ABL and SQL clients. It would be helpful if we could do it from both perspectives, but we need to invoke it from an ABL session and get the data.



Posted by Thomas Mercer-Hursh on 27-Aug-2013 14:52

Well, what works for one doesn't work for the other, so I wanted to get that clear.  If you are doing this in ABL, I suggest a temp-table (TT) or ProDataSet (PDS) to hold the collection you want to process, and then an internal procedure (IP, or a method) to get a particular record, where the IP has its own buffer (good practice anyway).  That way you are only getting one buffer load at a time, putting the data in the TT, and then fetching another buffer load based on a value in the latest TT record.
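
The one-buffer-load-at-a-time design described above can be sketched as a worklist loop (Python here; the ABL version would use a temp-table and an internal procedure with its own buffer). `fetch_record` stands in for a single keyed FIND; the data is made up for illustration:

```python
# node -> child ids, standing in for the database table
db = {1: [2, 3], 2: [4], 3: [], 4: []}

def fetch_record(node_id):
    """One 'buffer load': a single keyed read of one record."""
    return db[node_id]

def collect(root):
    """Fetch one record per step, queuing its children for later."""
    temp_table = []          # plays the role of the TT / ProDataSet
    pending = [root]
    while pending:
        node_id = pending.pop(0)
        temp_table.append(node_id)
        pending.extend(fetch_record(node_id))
    return temp_table

print(collect(1))  # [1, 2, 3, 4]
```

Only one record is in flight at any moment; everything already seen lives in the temp-table, which is what keeps the working set bounded.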

I have this vague memory of PDSs maybe supporting a recursive fill, which would be version specific, but it is not something I have ever had reason to use, so all I can suggest is looking it up in the manuals ... unless some helpful person chimes in.

Posted by rrajan on 28-Aug-2013 12:47

Thanks Thomas, the PDS has an option for a recursive fill. I believe it should help, but the volume of data (300K) to get would be the problem now.



Posted by Thomas Mercer-Hursh on 28-Aug-2013 13:19

The volume issue seems to have three parts:

1. The volume is the volume however it is that you fetch it.

2. TTs have an intrinsic spill-to-disk capability which allows you to process very large data sets as if they fit into memory.  But you need to pay attention to the startup parameters which control temp-table block size and number of blocks (-tmpbsize and -Bt) so that you are not going to disk prematurely and your memory usage stays within acceptable bounds.

3. If what you want to do requires all the data to be present, you have the volume problem whether you like it or not.  If it is possible to stream your processing, then you don't need to load it all and you should be using a record-by-record design.

Note that one of the possibilities for #3 is that you need all of the records at the same time, but only a limited amount of the data from each.  You can control that in the FILL(), even if you have to calculate something to obtain the value(s) you want in memory.
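
Point 3 can be sketched as a narrow copy: keep every record, but only the fields (or a value derived from them) that the processing actually needs. Field names and the derived value are made up for illustration:

```python
# Full rows as they would come from the database, including a wide field
source_rows = [
    {"id": 1, "parent": None, "name": "root", "blob": "x" * 1000},
    {"id": 2, "parent": 1, "name": "child", "blob": "y" * 1000},
]

# Narrow copy into the in-memory collection: keys plus a derived value,
# instead of dragging the wide field along for every record.
temp_table = [
    {"id": r["id"], "parent": r["parent"], "blob_len": len(r["blob"])}
    for r in source_rows
]

print(temp_table[0])  # {'id': 1, 'parent': None, 'blob_len': 1000}
```

All 300K records can then be held at once at a fraction of the memory the full rows would take.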

Posted by Roland de Pijper on 29-Aug-2013 05:23

In that case it would be better to implement a Closure Table (see the presentation for an explanation; page 69 has an overview of the different alternatives and their pros and cons).

Recursive reads are great, but they still mean recursively opening queries. So the problem is not the number of records to read, it's the number of queries.
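
The closure-table idea can be sketched as follows: store one row per (ancestor, descendant) pair, so "everything under X" or "everything above X" becomes a single non-recursive scan rather than a chain of queries. The sample tree is made up for illustration:

```python
# child -> parent links for a small illustrative tree: 1 -> 2,3; 2 -> 4
parent = {2: 1, 3: 1, 4: 2}

# Build the closure: one entry per ancestor/descendant pair.
closure = set()
for node in parent:
    a = node
    while a in parent:
        a = parent[a]
        closure.add((a, node))

def descendants(n):
    """Everything under n, from one scan of the closure."""
    return sorted(d for a, d in closure if a == n)

def ancestors(n):
    """Everything above n, from one scan of the closure."""
    return sorted(a for a, d in closure if d == n)

print(descendants(1))  # [2, 3, 4]
print(ancestors(4))    # [1, 2]
```

The closure has to be maintained on insert/move/delete (typically by triggers), but reads become one query regardless of tree depth, which is the point of the reply above.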

Hope this helps.

This thread is closed