Use STOP-AFTER to implement batch size based on fill time

Posted by dlauzon on 21-Jan-2010 08:11

I was reading about the new STOP-AFTER option in 10.2B and was thinking that I could use it to implement a batching mechanism based on time: instead of specifying a batch of a fixed number of records, fill as many records as possible within a given amount of time.

E.g. to keep the UI responsive, the first batch of data should take no more than one second to fetch.

Has anyone tried something like that?  Is there any reason why I wouldn't want to do this?

Any guesses about the feasibility / result of stopping a dataset FILL operation before it has completed?

All Replies

Posted by dlauzon on 14-Aug-2012 09:46

For the record, it seems to work in a test environment.

The cool thing is that the batch size is now relative to the performance of the machine: fill as many records as you can in 1 second.

Posted by rbf on 15-Aug-2012 03:21

This is a very cool idea indeed!!!

Can you share a bit more information about how you did this, especially the implications of interrupting the fill operation?

I am not sure you always need as many records as you can get in a second, because that can make your system slow when it would have been fast in the first place (i.e. returning a million records when 50 is plenty). So you could use the 1 second as a maximum rather than always returning as much as you can.

On the other hand, for particularly slow queries it does not really make sense to return 0, 1 or 2 records when time runs out. Depending on your batching mechanism, this could even result in a long string of subsequent calls to the server until the browser/grid viewport is full (at least, that is what we implemented in our GUI for .NET UI with the Infragistics grid). I am not sure how to solve this issue...

So I am interested in both the technical and functional implications of this approach.

Posted by dlauzon on 17-Aug-2012 14:40

For all the reasons you mention, I wouldn't make this a default for all FILL operations, and it's too bad that STOP-AFTER can't take a value shorter than 1 second (the parameter is in seconds and does not accept a decimal).  You could indeed specify a target number of records to fill; if they are filled within one second, the STOP-AFTER simply never gets triggered.
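
Something along these lines would combine the two; just a minimal sketch (ttCustomer and the batch size of 50 are made-up names/values, and the temp-table is assumed to be defined NO-UNDO so the records already filled are not rolled back when the STOP is handled):

/* stop after 50 records, or after 1 second, whichever comes first */
BUFFER ttCustomer:BATCH-SIZE = 50.

DO STOP-AFTER 1 ON STOP UNDO, LEAVE:
    BUFFER ttCustomer:FILL().
END.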

With ADM2, the batch size was set according to the specific situation of each program (you could turn batching off or specify a particular number of records).  I see the same for this approach: when you know it is a slow query, you just don't use a 1-second maximum fill.  Until we can predict the duration of an ABL query in advance, it will always be a case-by-case decision, just as batching was in the first place.

As for my current implementation, it's just this (I got tired of waiting for a full fill every time I was testing the program, so this is not yet "live code" but "test-mode code"):

/* fill as much as possible within 1 second */
DO STOP-AFTER 1:
    BUFFER ttLanguageLabel:FILL().
END.

I haven't hit any errors yet, but I guess that with a multi-level dataset you might have to check whether the last top-level record to be fetched had all of its descendants filled.

Or simpler: if the STOP-AFTER is triggered, just delete the last top-level record that was filled.
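
As a rough sketch of that idea (the names dsOrder, ttOrder, ttOrderLine and the OrderNum field are made up here, and the temp-tables are assumed to be defined NO-UNDO so the records already filled survive the UNDO of the ON STOP phrase):

DEFINE VARIABLE lStopped AS LOGICAL NO-UNDO.

DO STOP-AFTER 1 ON STOP UNDO, LEAVE:
    lStopped = TRUE.
    DATASET dsOrder:FILL().
    lStopped = FALSE.   /* only reached when the fill ran to completion */
END.

IF lStopped THEN
DO:
    /* the last parent fetched may be missing some of its children,
       so drop it and its children and let the next batch re-fetch it */
    FIND LAST ttOrder NO-ERROR.
    IF AVAILABLE ttOrder THEN
    DO:
        FOR EACH ttOrderLine WHERE ttOrderLine.OrderNum = ttOrder.OrderNum:
            DELETE ttOrderLine.
        END.
        DELETE ttOrder.
    END.
END.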

This thread is closed