Fill of a common dataset object

Posted by Admin on 24-Apr-2007 20:33

I have a dataset that has a composite and a component. Both can be filled within the composite object, but if I want to keep all the component processing within its own object, then I should fill the composite TT within the composite object and then pass the DS to the component object to populate the components.

I did a simple dataset fill test, as I suspected that the component fill would not occur; the aim was to see if it would work in a modified "auto-edge" framework. But my first test was within the one procedure.

This works ok.

DEF TEMP-TABLE TTadfComposite NO-UNDO
    FIELD procName AS CHAR
    INDEX procName IS PRIMARY UNIQUE procName.

DEF TEMP-TABLE TTadfComponent NO-UNDO
    FIELD procName AS CHAR
    FIELD componentName AS CHAR
    INDEX componentName IS PRIMARY UNIQUE procName componentName.

DEF DATASET dsAdf FOR TTadfComposite, TTadfComponent
    DATA-RELATION adfComponent FOR TTadfComposite, TTadfComponent
        RELATION-FIELDS (procName, procName) REPOSITION.

DEF QUERY qadfComposite FOR adfComposite.
DEF DATA-SOURCE srcadfComposite FOR QUERY qadfComposite adfComposite KEYS (procName).
QUERY qadfComposite:QUERY-PREPARE("FOR EACH adfComposite WHERE procName BEGINS 'f'").
BUFFER ttadfComposite:ATTACH-DATA-SOURCE(DATA-SOURCE srcadfComposite:HANDLE, "").

DEF DATA-SOURCE srcadfComponent FOR adfComponent KEYS (procName, componentName).
BUFFER ttadfComponent:ATTACH-DATA-SOURCE(DATA-SOURCE srcadfComponent:HANDLE, "").

DATASET dsAdf:FILL().

BUFFER ttadfComposite:DETACH-DATA-SOURCE().
BUFFER ttadfComponent:DETACH-DATA-SOURCE().

But, as suspected, changing the fill as below doesn't work.

BUFFER ttadfComponent:FILL-MODE = 'no-fill'.
DATASET dsAdf:FILL().

BUFFER ttadfComposite:FILL-MODE = 'no-fill'.
BUFFER ttadfComponent:FILL-MODE = 'merge'.
DATASET dsAdf:FILL().

I can get it to work as follows; note the component query.

DEF TEMP-TABLE TTadfComposite NO-UNDO
    FIELD procName AS CHAR
    INDEX procName IS PRIMARY UNIQUE procName.

DEF TEMP-TABLE TTadfComponent NO-UNDO
    FIELD procName AS CHAR
    FIELD componentName AS CHAR
    INDEX componentName IS PRIMARY UNIQUE procName componentName.

DEF DATASET dsAdf FOR TTadfComposite, TTadfComponent
    DATA-RELATION adfComponent FOR TTadfComposite, TTadfComponent
        RELATION-FIELDS (procName, procName) REPOSITION.

DEF QUERY qadfComposite FOR adfComposite.
DEF DATA-SOURCE srcadfComposite FOR QUERY qadfComposite adfComposite KEYS (procName).
QUERY qadfComposite:QUERY-PREPARE("FOR EACH adfComposite WHERE procName BEGINS 'f'").
BUFFER ttadfComposite:ATTACH-DATA-SOURCE(DATA-SOURCE srcadfComposite:HANDLE, "").

DEF QUERY qadfComponent FOR ttadfComposite, adfComponent.
DEF DATA-SOURCE srcadfComponent FOR QUERY qadfComponent ttadfComposite, adfComponent KEYS (procName).
QUERY qadfComponent:QUERY-PREPARE("FOR EACH ttadfComposite, EACH adfComponent WHERE adfComponent.procName = ttadfComposite.procName").
BUFFER ttadfComponent:ATTACH-DATA-SOURCE(DATA-SOURCE srcadfComponent:HANDLE, "").

BUFFER ttadfComponent:FILL-MODE = 'no-fill'.
DATASET dsAdf:FILL().

BUFFER ttadfComposite:FILL-MODE = 'no-fill'.
BUFFER ttadfComponent:FILL-MODE = 'merge'.
DATASET dsAdf:FILL().

BUFFER ttadfComposite:DETACH-DATA-SOURCE().
BUFFER ttadfComponent:DETACH-DATA-SOURCE().

Unfortunately the copy of "auto-edge" we have won't support this, though it could be modified to cope. It also means that all of the component object processing would have to be dynamic, as it will be receiving different datasets.

However, my main question is on the fill technique. Is there another way that you can fill one table in one object and fill another in another object, but only populate the children of the parents filled? The fill would usually fill one parent and then the children of that parent.

Miles

All Replies

Posted by Thomas Mercer-Hursh on 25-Apr-2007 11:24

I presume that you are trying to do this in order to adhere to the concept of a data source object as promulgated by AutoEdge?

While I think this concept has some merit, I also think it needs to be applied with some care. In particular, I think the unit of division should be the object, not the table. Thus, if one has an Order object which includes order line data, customer data, allocation data, etc., then this is really one object, even though it is many tables, and the fill should occur in one place. But, if there is, for example, a set of item data, e.g., stock levels, which is not always required in association with the order data, but which is required when allocating stock to the order, then that item data should be collected in a separate object ... and in a separate dataset, if one is holding all this stuff in datasets.

I.e., I'm not sure that I would be inclined to do what you are trying to do here.

Posted by Admin on 25-Apr-2007 16:56

Yes, that's what I was aiming for. I didn't send it as a 4GL question, but nevertheless I would still be interested in how this is best achieved with a dataset.

The example was only an example. The true situation has a bond, a premises, and a tenant. The premises is actually a property and the tenant a party, and both those tables are extensively used in many objects.

OERA was only a prompt; the issue exhibited itself when creating a staff member (staff is another common table) in a registration. The staff table was part of the registration object. But later we had staff maintenance, and we had the issue of code duplication and consistency.

My feeling is that a fill is fairly straightforward, and things such as calculated fields are done in callbacks that can be attached to the appropriate dataset. The real issue seems to be in the "save", where there can be a lot of code complexity. The save seems to be easier to pass from object to object by setting "ignoreBuffers" appropriately.

Posted by Thomas Mercer-Hursh on 25-Apr-2007 17:27

I recognize that there is some issue of possible code duplication, but I still harken back to the principle of clustering by object, whether or not OO is being used. Anything else seems to me to raise problematic issues about transaction scope.

E.g., you use the word "party". I don't know if this is parallel to the party table in Siebel, but let me guess that it is. I.e., there is one table in which all persons are stored and then when the person is an employee the employee specific data is in another table whereas if the person is a prospect, that data is in a prospect table. All people and, indeed, all companies, are in the party table, but only an employee party has data in the employee table and only a prospect party has data in the prospect table.

With a schema like this ... which, btw, strikes me as a very nice example of applying OO ideas of inheritance to a relational schema ... then there is certainly going to be some similar code between the updating of an employee and the updating of a prospect because both involve the party table, but I feel it would be inappropriate to have a separate data source to deal with that common portion. If anything, in fact, this is a place where there should be inheritance in the data source objects so that the common portion is defined in the common ancestor.

Posted by Admin on 25-Apr-2007 17:46

At this customer site a party/person was used extensively in many tables. In the last major upgrade all these persons were moved to a party table. A party here may be an agent, lessor, tenant, other related party, mediator/conciliator, etc.

The staff I was referring to is another table; in particular it is a staff member of an agent who will be logging in. In this case the validation at registration ended up inconsistent with staff maintenance.

The property is more specific, as a party may have some variation, while the property is basically an address.

As it is I'm fighting for a better way in the "auto-edge" environment we have. I'm not going to convince anyone at this stage to go OO, although I'm personally investigating it.

In the "auto-edge" environment I'm reasonally happy with a fill by object but not so on the save.

Posted by Thomas Mercer-Hursh on 25-Apr-2007 18:01

OK, so the table issues are somewhat related, but their own flavor.

And, I get that you aren't going to go OO, but you know it is possible to imitate it ... although whether you can do it with a PDS is not a question that I can answer at this point.

And, I would be a little cautious about tying yourself too strongly to the data source concept as manifest in the released AutoEdge code. You will note that this has changed in the OO OERA stuff, so obviously even PSC is having second thoughts.

If you step back from the specifics of doing it with a PDS for the moment, the usual imitation of inheritance is to use a super. So, the fill of the common portion is done in the super and then multiple procedures might use that same super. But, I don't know how a PDS ties in here. You might have to do something like return a temp-table from the super and then fill the PDS from the temp-table instead of the database.
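A minimal sketch of that temp-table hand-off (partySuper.p, fillPartyTT, and ttPartySource are illustrative names, and it assumes the shared temp-table definition is included in both procedures; a ProDataSet FILL can use a temp-table buffer as its data-source):

/* partySuper.p - common fill logic, shared as a super procedure */
procedure fillPartyTT:
    define output parameter table for ttPartySource.

    empty temp-table ttPartySource.
    for each party no-lock:                 /* "party" is an assumed db table */
        create ttPartySource.
        buffer-copy party to ttPartySource.
    end.
end procedure.

/* caller: fill the dataset member from the temp-table, not the database */
define data-source srcParty for ttPartySource.

run fillPartyTT (output table ttPartySource).   /* resolves to the super */
buffer ttParty:attach-data-source(data-source srcParty:handle, "").
dataset dsParty:fill().
buffer ttParty:detach-data-source().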

Posted by Admin on 25-Apr-2007 18:23

tying yourself too strongly to the data source concept

Again, I'm fighting this issue, and I have successfully demonstrated some problems with current-changed in our version of the auto-edge framework.

You will note that this has changed in the OO OERA stuff, so obviously even PSC is having second thoughts.

In what way was it changed? Is there a published link to OO OERA code examples?

done in the super and return a temp-table

I had thought about a TT on the fill, but I don't like the overhead. Yes, I can use supers, but it would be much simpler to pass it from object to object and have each object do its bit.

Posted by Thomas Mercer-Hursh on 25-Apr-2007 18:50

http://www.psdn.com/library/kbcategory.jspa?categoryID=1212 has the webinar and the white papers with code. I recall it specifically in the webinar. I'm afraid I can't really advise on the best structure using a super because I would structure it differently and not get locked into this approach. How does one handle the transaction scope with it split like that?

Posted by Admin on 25-Apr-2007 19:15

has the webinar and the white papers with code

Thanks, I'll look into that.

How does one handle the transaction scope with it split like that?

Good point, and one I raised earlier on the PEG. At this stage I've got agreement that we will fill by object, and I am looking into the save, which is not an issue during this milestone but will be in the next.

If the save worked as I described, in beBond saveChanges (forgetting any PDS issues) I could, off the top of my head:

run bondValidation     (dataset dsBond by-reference). /* validates ttBond     */
run premisesValidation (dataset dsBond by-reference). /* validates ttPremises */
run tenantValidation   (dataset dsBond by-reference). /* validates ttTenant   */

do transaction:
    run saveChanges in daBond (dataset dsBond by-reference).
    if dataset dsBond:error then undo, return.

    run saveChanges in beProperty (dataset dsBond by-reference).
    if dataset dsBond:error then undo, return.

    /* ... and so on for the remaining objects ... */
end.

This would mean that any code in beProperty or beTenant/beParty is dynamic, as they will be receiving different datasets.

Posted by Thomas Mercer-Hursh on 25-Apr-2007 19:53

Reminds me of that tag line that Tom Bascom had in his .sig for a while, something to the effect of "You start coding, I'll go find out the specs". I know it isn't your fault, but running a project like that is nuts. Given that there are database-level transactions and business-logic transactions, where the former use the built-in DB transaction rollback and the latter use an external mechanism because the transaction may be long-running and distributed, I would object on principle to transaction scope in the business entity unless it is of the latter type and managed that way. Otherwise, the scope should be down in the database layer.

Posted by Admin on 25-Apr-2007 20:52

but running a project like that is nuts.

It's not running like it ought to (understatement). The project manager made a comment that we should build the project (as a learning exercise), scrap it, then write it how it should be done.

I would object on principle to transaction scope in the business entity unless it is of the later type and managed that way.

I argued the same thing earlier on, but you're providing an exception. In this case you could also argue that the transaction scope is a business decision. Can you have a bond without a property and tenant, and do you allow partial saves on an update? Normally that is a business requirement.

Posted by Admin on 25-Apr-2007 21:27

but running a project like that is nuts.

Actually it sounds worse than it is. The project is broken into milestones. At each milestone the user requirements are refined and the development is designed. We are at M4 design and have learnt from the problems in earlier milestones.

The biggest shortfall is that the framework should have been reviewed to determine its appropriateness for the task, then tested, to resolve how we were going to handle these basic concepts, which might ultimately mean rejecting it as inappropriate.

Posted by Thomas Mercer-Hursh on 25-Apr-2007 23:46

Well, maybe you're being polite, but I still think that one needs to figure out fundamentals before one starts spewing out code. With a generator-based environment one gets a certain latitude in this, because one can always change the templates or the rules and regenerate the code, but I don't see how the sort of thing you have described doesn't lead to almost predictable massive rework once the light dawns.

Posted by Thomas Mercer-Hursh on 25-Apr-2007 23:54

Oh well, if you can afford to write the whole thing twice, then why not.

In the contrast I was making, there is not a separation based on business requirement or something else. All transactions are business requirements. The issue is that in the classic monolithic app, one would wrap every business transaction in a database transaction ... nice, neat, and absolute. But in the distributed world this isn't practical. Not only might part of the transaction be remote, engendering all the issues of two-phase commit, but it might not even be attached due to some network outage. Thus the long-running transaction and the different ways of handling those. Those extreme cases illuminate for us that we can't try to shovel everything into a database transaction, even if, for now, everything is on one system. Simply broadening the transaction scope in this fashion is really no different than the historical practice of including UI in transactions. It worked, but it isn't the right way to design applications today.

Posted by Admin on 26-Apr-2007 13:15

However, my main question is on the fill technique. Is there another way that you can fill one table in one object and fill another in another object, but only populate the children of the parents filled? The fill would usually fill one parent and then the children of that parent.

The first question you should ask yourself: do you think you will understand the code when you read it again in three weeks' time? The DATA-SOURCE object is fine for certain scenarios, but more complex things can be handled differently. You can always create a persistent procedure/OOABL class to avoid code duplication. As a bonus, you can let the DATA-SOURCE call back into that procedure as well.
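A sketch of that callback idea (partyCommon.p, hPartyCommon, and partyAfterRowFill are hypothetical names; the shared logic lives in a persistent procedure that the dataset calls back into during FILL):

/* run the shared logic once, persistently */
run partyCommon.p persistent set hPartyCommon.

/* have the dataset call back into it as each row is filled */
buffer ttParty:set-callback-procedure("AFTER-ROW-FILL", "partyAfterRowFill", hPartyCommon).

/* in partyCommon.p */
procedure partyAfterRowFill:
    define input parameter dataset-handle phDataset.
    /* calculated fields, validation, etc. for the row just filled */
end procedure.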

I wouldn't go the route where I would have to enable and disable DATA-SOURCE/BUFFER properties all the time, just to get it to work with a DATA-SOURCE. The solution would be too fragile and very dependent on the Progress version/patch you're working with.

Posted by Admin on 26-Apr-2007 16:58

do you think you will understand the code when you read it again in three weeks' time?

setContext.
run ip in beProperty.
if ds:error then do stuff.
setContext.

That's not too hard to understand, particularly when it's your standard practice. I don't think that the data-source is an issue, as it is only the primary dataset that has multiple tables. In the primary procedure I don't think that it matters if the data-source is attached but never used.

create a persistent procedure/OOABL class

I accept that running standard code is an option, but I still don't think that it's ideal. For example, you may add an after-row save trigger and you'd have to ensure it's added everywhere. Perhaps that would be set in the first IP you run. Perhaps you'd set up two supers and attach one to your BE procedure and one to your DA for each common object. However, I seem to have some vague recollection that a dataset callback won't find a procedure in a super. I'll have to test the whole concept.

Posted by Thomas Mercer-Hursh on 26-Apr-2007 17:36

I think one of the things I am trying to advocate here is that, if you split this into two procedures, then they should be layered, not parallel.

Posted by Admin on 26-Apr-2007 18:04

they should be layered, not parallel

Not sure that I understand the implications of this. Can you give a pseudo-example?

Posted by Tim Kuehn on 26-Apr-2007 18:04

However I seem to have some vague recollection that a dataset callback won't find a procedure in a super.

No, it won't - the code which performs the callback is an in-line C++ expansion which runs a specific program and doesn't search the IP/SP stack.

Apparently the work-around is to have the IP do the call which'll resolve to the SP.

It's one of those "make things easier for the implementor at the expense of the user" situations.
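A sketch of that workaround (illustrative names): register the callback against an IP in the procedure that owns the dataset, and have that IP re-dispatch with an unqualified RUN, which does search the super-procedure stack:

/* in the procedure that owns the dataset */
buffer ttParty:set-callback-procedure("AFTER-ROW-FILL", "partyAfterRowFill", this-procedure).

procedure partyAfterRowFill:
    define input parameter dataset-handle phDataset.
    /* an unqualified RUN resolves through this procedure and its supers */
    run partyAfterRowFillImpl (input dataset-handle phDataset by-reference).
end procedure.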

Posted by Thomas Mercer-Hursh on 26-Apr-2007 18:15

By layered I mean something like either inheritance or having common code in a super.
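For instance, a minimal super-based sketch (bePartyCommon.p and savePartyChanges are hypothetical names):

/* attach the shared party logic as a super of beBond */
run bePartyCommon.p persistent set hCommon.
this-procedure:add-super-procedure(hCommon).

/* an unqualified RUN now resolves through the super stack */
run savePartyChanges (input dataset dsBond by-reference).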

Posted by Admin on 26-Apr-2007 18:39

That's what I was suggesting as well, but you have to invoke it. So when beBond is run I would attach a super that has the code for beProperty and beTenant, and similarly for daBond. I would also run an IP in each to set callbacks and any other initialisation stuff. Then when saveChanges is run I would do bond validation, run beProperty and beTenant validation in the super, etc.

This would have to be done wherever I use a common object, and I was hoping for an easier way.

Posted by Thomas Mercer-Hursh on 26-Apr-2007 19:01

Well, inheritance would be cleaner, but that isn't an option.

Posted by Admin on 28-Apr-2007 00:41

I've had a very quick look at the OO OERA code examples.

I didn't like how the data-source was connected in the auto-edge we were using and liked the OO example better; however, there was no option to exempt fields or to list the fields to be populated.

There were a lot of new concepts that I will have to investigate, but I noticed that in fetchData of the businessEntity the PDS handle was set and then it seemed to be referenced in loadData(). Is that usual or is that just lazy programming? Wouldn't you normally pass the handle as an input? The setting of the handle outside the method seemed to be common.

inheritance would be cleaner

From your comments I'm not sure that you are overly familiar with PDSs, but could you provide an example of inheritance being used to load or save a common table with a callback?

Posted by Admin on 28-Apr-2007 08:44

There were a lot of new concepts that I will have to investigate, but I noticed that in fetchData of the businessEntity the PDS handle was set and then it seemed to be referenced in loadData(). Is that usual or is that just lazy programming? Wouldn't you normally pass the handle as an input? The setting of the handle outside the method seemed to be common.

I think the problem here is that you then have to pass the handle to all methods in the class, even to private members, since everything the class does relies on the proper dataset context. A better way would be to instantiate the business entity with the proper handle in the constructor. Then you would scope the class instance to the dataset. Since the business entity is designed as a singleton, you can't do that in this case.

I don't think it's a good idea to set some context in a class instance, let it do several things, and then set the context to something else. Sooner or later you will shoot yourself in the foot...

Posted by Thomas Mercer-Hursh on 28-Apr-2007 11:06

When I point to inheritance, I'm not necessarily pointing to PDS. To me, pre-10.1A, a PDS was a sort of proto-object. Not a real object because it had only a limited set of behavior which it could incorporate, but one that had some useful default behaviors built-in, as if inherited from a common PDS ancestor object. But, like anything that was designed without the context of OO, it isn't really an object and that gets in the way of it being as cleanly useful as it might be. One of the things which I hope PSC is looking at for 10.1C and 10.2 is how to bring PDS and classes more in line. Wouldn't it be nice, for example, to be able to declare a class with a set of properties and have a way to declare that those properties were all members of a buffer and then to be able to use WRITE-XML(), READ-XML() and TRACKING-CHANGES with that buffer? And, then to be able to define another object which contains a temp-table of those objects and do the same thing with it?

Posted by Thomas Mercer-Hursh on 28-Apr-2007 11:07

I think the root problem here is the Relational-Object Hybrid model being used. If one uses true entity objects, most of this nonsense goes away.

Posted by Admin on 28-Apr-2007 16:48

The handle seems to be defined in the class. The same handle (name) is assigned in the businessEntity. Is this how it recognises the PDS that it is dealing with? Can't you do loadData()?

instantiate the business entity with the proper handle in the constructor

As I'm not familiar with OO can you provide a small example of what you are describing?

Posted by Admin on 28-Apr-2007 16:57

Relational-Object Hybrid model

Is the Progress database able to cope efficiently with true object orientation, or will the Progress version only provide a hybrid? I had a comment from an OO person who didn't think the Progress database was designed for OO, i.e. it is a relational database.

Posted by Thomas Mercer-Hursh on 28-Apr-2007 17:45

99% of all OO systems store their data in a relational database. RDBMS are just too fast, too mature, and too easy to use for reporting for it to be otherwise. Thus, virtually all OO systems deal with an object-relational mapping at the interface to the database. Data is read from one or more tables and marshalled into an object and then when a new object or a modified object returns it is de-marshalled and put into tables.

The difference in the ROH model in the new whitepapers is that this is carried one step further, i.e., data from the database is marshalled, not into an object, but into a PDS and the PDS is passed to the business layer where a so-called business entity object is there to receive it and provide it with logic. I say "so-called" because to the rest of the world a business entity is an entity object which encapsulates both data and logic. The argument in favor of this ROH strategy is that ABL is full of these nice, familiar, powerful verbs for handling relational data so we ought to keep the data in that form. One of the big failings of this approach, however, is that one then ends up with snippets of ABL data access code scattered about instead of having those bundled in with the data.

I will have a whitepaper coming out on this in a couple of days.

Posted by Admin on 30-Apr-2007 03:06

The handle seems to be defined in the class. The same handle (name) is assigned in the businessEntity. Is this how it recognises the PDS that it is dealing with?

That's correct.

Can't you do loadData()?

Sure you could, but there are two things you have to keep in mind:

1) Once you switch from a handle to a static buffer, you need to do an extra internal call. This call is necessary to cast the handle to a static buffer/temp-table/ProDataSet. That's why you often see this pattern:

  • something X passes the dynamic handle to procedure/class Y

  • procedure/class Y does an internal call, passing the handle to a static buffer, so you can do simple programming again like "FOR EACH", "FIND", etc.

2) Once you pass a handle, you should pass the handle consistently. You can't do "FOR EACH" and "FIND" on a dynamic handle unless you do the thing above (see the sketch below).
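A sketch of that pattern (dsOrder and ttOrder are hypothetical names): the entry point accepts the dynamic handle and immediately rebinds it to a static definition through an internal call:

procedure processDataset:
    define input parameter dataset-handle phDs.
    /* rebind the dynamic handle to the static definition below */
    run processDatasetStatic (input dataset-handle phDs by-reference).
end procedure.

procedure processDatasetStatic:
    define input parameter dataset for dsOrder.
    for each ttOrder:
        /* plain static programming again: FOR EACH, FIND, etc. */
    end.
end procedure.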

instantiate the business entity with the proper handle in the constructor

As I'm not familiar with OO can you provide a small example of what you are describing?

It's like starting a persistent procedure with a parameter. The incoming parameter is necessary for the procedure/class to do its normal work. In this case the class can only do something sensible when it has a valid PDS (handle/context). In these scenarios it's common to provide this context when you create the object instead of passing the context on every call. Does that make any sense? Pseudo-code example; compare:
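(The code samples appear to have dropped out of this post; the sketches below are reconstructions from the description that follows, with illustrative names.)

class File:
    define private variable cPath as character no-undo.

    constructor public File (input pcPath as character):
        cPath = pcPath.   /* context is fixed for the object's lifetime */
    end constructor.

    method public void WriteLine (input pcText as character):
        /* writes pcText using cPath */
    end method.
end class.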

To:
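class File:
    /* the path is passed on every call, so the object is reentrant */
    method public void WriteLine (input pcPath as character, input pcText as character):
        /* writes pcText using pcPath */
    end method.
end class.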

In the case of AutoEdge you will see this pattern:
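class File:
    define private variable cPath as character no-undo.

    method public void SetPath (input pcPath as character):
        cPath = pcPath.   /* temporary context: not reentrant */
    end method.

    method public void WriteLine (input pcText as character):
        /* writes pcText using whatever cPath is at call time */
    end method.
end class.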

In version one you can't change the file path, which makes perfect sense for the lifetime of the "File" object. In case two you pass the path argument to all methods, so your object is reentrant (http://en.wikipedia.org/wiki/Reentrant). In case three you set a temporary context, which makes your object not reentrant.

Posted by Admin on 30-Apr-2007 03:15

Wouldn't it be nice, for example, to be able to declare a class with a set of properties and have a way to declare that those properties were all members of a buffer

This is the transparent persistence mechanism we discussed earlier (sometimes related to DataXtend). In your earlier replies you didn't think this was very valuable, since it was very easy to map this by code. Did you change your mind on this one?

Posted by Admin on 30-Apr-2007 03:22

Is the Progress database able to cope efficiently with true object orientation, or will the Progress version only provide a hybrid? I had a comment from an OO person who didn't think the Progress database was designed for OO, i.e. it is a relational database.

What kind of backend is this OO person using: MS SQL Server, Oracle, MySQL? If so, then he's probably doing OO against a relational database as well.

But the issue with OO and relational databases is the so-called "object-relational impedance mismatch": http://en.wikipedia.org/wiki/Object-Relational_impedance_mismatch. You will get into the same issues with plain ABL when layering your application.

Posted by Thomas Mercer-Hursh on 30-Apr-2007 11:02

And the question being whether or not re-entrancy is desirable. To be sure, statelessness is often cited as a virtue in contexts such as AppServer agents, but I don't know that this would apply to, for example, a local logging object, which is what your file example might correspond to. It might, however, apply to some local service object which could be called from multiple other objects ... assuming, of course, that one bothers to make it a singleton; otherwise it is irrelevant.

Posted by Thomas Mercer-Hursh on 30-Apr-2007 11:08

There must be some fuzziness in communication here. Oh, wait a minute ... I do have this memory of your advocating automatic mapping to and from the database, so I now see why you mention DataXtend. I was confused there for a moment because I didn't see how DataXtend had anything to do with what I was talking about since DataXtend is a database to database tool.

No, I'm not talking about anything related to the database here at all. All I am talking about is a mechanism by which we might be able to take advantage in classes of some of the useful properties of temp-tables and ProDataSets. It is my understanding that, under the skin, these variables are organized into a sort of buffer anyway, so this might require nothing more than a way to give us a handle to that buffer. All I am looking for here is a way to achieve tracking changes and read and write XML in an object which consists of a set of properties.

Posted by Admin on 01-May-2007 02:34

tracking changes

Not exactly sure what you mean by tracking changes (as there is a specific function in a PDS to turn on/off the tracking of buffer changes). However, I'd like to be able to easily determine the current-changed fields and values on a save-row-changes.

Now that the save is all hidden in the save-row-changes, to establish the clashes one must have a copy of the buffer or dataset and re-read through the records to get the fields that were in contention. Currently I do this at the client, as I have a copy of the PDS that I applied the collect-changes against. So when the save has a current-changed clash, I compare the client PDS to the returned PDS so that I can advise the user of the rows that had an issue and the fields that caused it, showing what they saw, their changed value, and the current DB value. Otherwise the user has to try to discover what was changed and remember their change.

Possibly part of the problem is due to the dynamic buffer-compare returning a logical rather than the fields that don't compare. If not, that's an issue anyway. If you want a logical you could test the returned value > ''.
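For what it's worth, and if I recall the statement's options correctly, the static BUFFER-COMPARE statement can already report the mismatched fields via its SAVE RESULT IN option, and a logical can be derived from that. A minimal sketch (ttBondClient and ttBondServer are illustrative buffer names):

define variable cMismatched as character no-undo.

/* cMismatched receives the names of the fields that do not match */
buffer-compare ttBondClient to ttBondServer save result in cMismatched.

if cMismatched > "" then
    message "Fields in contention:" cMismatched view-as alert-box.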

Rant, Rant

Anyway that's my complaint for today, and with that I'll be off!

Posted by Thomas Mercer-Hursh on 01-May-2007 11:06

I mean that I would like the same functionality for objects as one has for PDSs. Same functions all the way across the board.

And yes, I tend to agree that there are ways to make it better for both.
