Way to store really (really) long text in Rollbase

Posted by anthosbadguy on 28-Mar-2016 17:24

Hello, and thank you in advance for any help you may give me.

Right now I'm trying to find a way to store data I query from Rollbase objects (just plain text) back into Rollbase, so I can retrieve that text later and use it as data for graphs and other things, because running a query over too many records takes quite a while every time it is used.

I've thought about saving it as a document on a file upload field using setBinaryData(), and I've thought about formula/expression fields and document templates, but I just can't figure out a way to store all that text for later use. We're talking about more than 80,000 characters, because there are going to be A LOT of records, so sadly I can't just store it in a text area field.
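For reference, the kind of thing I was attempting with the file upload idea looks roughly like this; the object and field names are made up, and I'm not sure about the exact parameter list of setBinaryData(), so treat it as a sketch and check the server-side API docs:

// Sketch: serialize the query result and store it as a file, since file
// upload fields don't have the 80,000-character limit of text areas.
// "dataCache", "cachedFile", "myObject" and cacheRecordId are placeholders.
var cacheRecordId = 1; // id of an existing cache record (placeholder)
var rows = rbv_api.selectQuery("SELECT id, name FROM myObject", 50000);
var payload = JSON.stringify(rows); // assuming JSON is available in the script engine
// The parameter order below is my assumption; verify against the Rollbase docs.
rbv_api.setBinaryData("dataCache", cacheRecordId, "cachedFile", payload, "cache.json");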

There's also the possibility that I'm not thinking about this the right way and that this would not be the best solution to my problem. I just want to query the records from an object and store the array returned by the query somewhere else, so that I can grab that data and use it whenever I need to, without running another query and spending so much time on it.

Thank you for any help, once again

All Replies

Posted by jquerijero on 28-Mar-2016 17:54

Have you looked into scheduled tasks? You can run a process that gathers your data every so often.

Posted by anthosbadguy on 28-Mar-2016 22:03

Hello! Thank you for your response, time, and help.

Are you talking about batch jobs? I've already looked into them, but if I use them to import data I will be importing it as new (or updated) records for a specific object, and in that case there's no field type that can hold text as long as I need. Then again, I may be wrong, so if there's a way to do it through batch jobs that I'm not seeing, I'd be very thankful if anyone could give me a clue as to how to achieve it.

Posted by jquerijero on 29-Mar-2016 10:01

I was assuming your 80K of text is actually a collection of records. I would more likely just create a Rollbase object to hold those records individually (or maybe another object to hold the aggregations as I process the records through the batch job). If the 80K of text is a single piece of data, then I can only think of using a Rollbase object with one field pointing to a file.
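Roughly something like this in a batch-job script; every object and field name here ("sourceObject", "salesSummary", "category", "amount", "total") is made up for illustration, and you should double-check the rbv_api signatures against the docs:

// Hypothetical scheduled batch job: reduce the raw records into a small
// summary object, so charts read a handful of pre-computed rows instead
// of re-querying everything.

// 1) Clear out the previous summary rows.
var old = rbv_api.selectQuery("SELECT id FROM salesSummary", 10000);
for (var i = 0; i < old.length; i++) {
    rbv_api.deleteRecord("salesSummary", old[i][0]);
}

// 2) Pull the raw data once and reduce it in memory.
var rows = rbv_api.selectQuery("SELECT category, amount FROM sourceObject", 50000);
var totals = {};
for (var j = 0; j < rows.length; j++) {
    var cat = rows[j][0];
    totals[cat] = (totals[cat] || 0) + Number(rows[j][1]);
}

// 3) Write one small record per aggregate; this is what the graphs read.
for (var key in totals) {
    rbv_api.createRecord("salesSummary", { category: key, total: totals[key] });
}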

Posted by Thierry Ciot on 29-Mar-2016 13:36

First of all I would say: verify you have a real performance issue before tackling what you want to do :).
 
Assuming there is a real performance issue, what you are trying to do is a typical dataset-reduction pattern.
It would help if you would describe your use case:

- What do you intend to use the reduced data set for? Displaying a graph, ...

- What are your performance goals?

- How long would the reduce operation take? (under 2 minutes / tens of minutes / hours...)

 
Anyway, let me suggest a few generic solutions:
 
You can use a trigger (on modification of data): documentation.progress.com/.../

1) To reduce your data and store it in one text field (with the limitations you outlined). Honestly, I would be really surprised if you hit the 80k limit; if you do reach it, that may mean you haven't reduced enough :). Again, it depends on your use case, but say you need to display a couple of graphs: you should easily be able to fit into 80k (if not, you may have a usability issue where you simply display too much data to your user in the first place). See the sketch after this list.
For details read documentation.progress.com/.../

2) To reduce your data and create multiple object records in a "Cached Object" list. As an example, for a line graph, you would create one object record per data point, holding its x and y values. That solution won't hit the 80k limit; I'm just not sure it will get you the performance you need.
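To make that concrete, here is a rough sketch of what such a trigger body could look like; every object and field name below ("sourceObject", "chartCache", "reducedJson", "chartPoint") is a placeholder, and you should verify the exact rbv_api signatures in the documentation:

// Hypothetical on-create/on-update trigger body: reduce the full dataset
// and cache the result both ways described above.

var rows = rbv_api.selectQuery("SELECT month, amount FROM sourceObject", 50000);

// Reduce: one total per month, small enough to chart directly.
var totals = {};
for (var i = 0; i < rows.length; i++) {
    totals[rows[i][0]] = (totals[rows[i][0]] || 0) + Number(rows[i][1]);
}

// Option 1: store the reduced set as JSON in one long text field.
// A couple of graphs' worth of points stays far below 80k characters.
var cacheRecordId = 1; // id of the singleton cache record (placeholder)
rbv_api.setFieldValue("chartCache", cacheRecordId, "reducedJson", JSON.stringify(totals));

// Option 2: one small "chartPoint" record per x/y pair, no size limit at all.
// (In practice you would first delete the previous chartPoint records.)
for (var month in totals) {
    rbv_api.createRecord("chartPoint", { x: month, y: totals[month] });
}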

 
The advantage of this solution is that it runs on every record update, so your reduced dataset is always up to date.
 
Now, if your reduce operation takes a long time, you may not want to do it in a trigger; as the previous poster suggested, you could instead use documentation.progress.com/.../
But then you won't have the option of running it on every record update.
 
Finally, if you are doing a big-data kind of thing: we don't have built-in support for map/reduce, but you could easily leverage a map/reduce engine as an external operation and, once the map/reduce operations complete, use the available APIs to store the data back into a Rollbase object as suggested above.
 
Hope this helps, Thierry.
 

Posted by anthosbadguy on 30-Mar-2016 08:19

Thank you very much, both of you, for your time and help with this.

Thierry, thank you. Your post gave me several ideas for working around my problem, and it has a couple of key points that I found extremely interesting and that will serve as a guide while implementing my solution, especially "if you do reach (the 80k limit), that may mean you haven't reduced enough" and the point that I may be offering way too much data for my user to see, however I might display it.

Again, thank you!

Posted by Thierry Ciot on 30-Mar-2016 13:34

Glad it was useful.

Please keep us posted.  I am very interested in finding out more from your experience, in particular, the performance impact.  Also, there may be some generic solution we could put in the product. Who knows :)

Thierry.

Posted by Thierry Ciot on 06-May-2016 13:03

Any update on what you ended up doing?

Just curious :)

This thread is closed