OOOERA

Posted by Thomas Mercer-Hursh on 21-Mar-2007 14:52

While I recognize that there were only 37 other people who attended the seminar this morning on OERA Update: Class-based Implementation and it will be about a week before the recording is posted ... and I don't know how many people will choose to sit through an hour and a half recording ... I would like to throw out some comments in reaction to this presentation for discussion.

Naming standards

We have talked about various options for naming standards before and, of course, there is always going to be a certain amount of personal preference involved here. Yes, I recognize that any naming standard used in OERA whitepapers is just for the purpose of illustration and is in no way a model for a production system, but I see no reason that the naming conventions used can't be some flavor of one that would be "real world". Having things named samples.beCustomer just doesn't cut it when there is more than a handful of files. It really should be something like com.progress.autoedge.ar.customerBE instead, i.e., using the standard tla.domain.application.module structure and postfix type identifiers so that all files related to customer are grouped together visually. One can quibble about the details, but something like this would be a good example to get people thinking in the right direction.

Singletons

In the structure presented, there was a base class called ComponentBase which all components derived from and which included constructor code which used the object walking technique I described here http://www.oehive.org/PseudoSingleton to attach to a session services manager. Why not just NEW the session services manager and include this type of code in the structure I described to ensure that this manager remains a singleton? Then, when the singleton keyword is implemented (bet you didn't know that it is already recognized) one can just change the one service manager class and everything else stays the same.
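To make the idea concrete, here is a minimal sketch of the pseudo-singleton technique in Python (not the actual ABL from the presentation; class and function names are invented for illustration). Components go through one locator instead of constructing the manager themselves, so only this one spot changes if the runtime later gains real singleton support.

```python
class SessionServiceManager:
    """Stand-in for the session services manager; illustrative only."""
    def __init__(self):
        self.services = {}

# Module-level slot plays the role of the session-wide object walk.
_instance = None

def get_session_manager():
    """Return the existing manager, creating it on first use."""
    global _instance
    if _instance is None:
        _instance = SessionServiceManager()
    return _instance

class ComponentBase:
    """Every component attaches to the shared manager in its constructor."""
    def __init__(self):
        self.session = get_session_manager()
```

The point of the pattern is that the first component instantiated creates the manager as a side effect; every later component gets the same instance.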

PDS instead of entity objects

Yes, I know that John is presenting this as an advantage of OOABL, but I think that if we are going to illustrate OO, we should actually do OO. Having include files to share dataset definitions between business entity objects and data access objects is very .p, if you know what I mean. The mis-named business entity object should be renamed into the manager or facade object it really is and we should get real entity objects created from the data access layer so that this complexity is actually encapsulated.
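A rough Python-only sketch of the alternative being argued for here (all names are hypothetical, not from the AutoEdge code): the data access layer hands back real entity objects, so callers never see the underlying dataset or record layout.

```python
class Customer:
    """Entity object: data and behavior encapsulated together."""
    def __init__(self, cust_num, name, balance):
        self.cust_num = cust_num
        self.name = name
        self.balance = balance

    def is_over_limit(self, credit_limit):
        return self.balance > credit_limit

class CustomerDAO:
    """Data access object; encapsulates how stored rows become entities."""
    def __init__(self, rows):
        self._rows = rows  # stand-in for a database result set

    def fetch(self, cust_num):
        row = next(r for r in self._rows if r["cust_num"] == cust_num)
        return Customer(row["cust_num"], row["name"], row["balance"])
```

The caller works only with Customer; how the row was fetched and mapped stays inside the DAO, which is the encapsulation point being made above.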

Include files

I really don't like having numerous include files and preprocessor definitions scattered all over my class files. Like I said, very .p.

validateData()

Why would a validateData() method not have a return type? What is it supposed to do when the data doesn't validate?
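One way the complaint could be addressed, sketched in Python (the field names and rules are invented): have the validation method return its findings instead of being void, so the caller can decide what to do when the data doesn't validate.

```python
def validate_data(record):
    """Return a (possibly empty) list of validation error messages."""
    errors = []
    if not record.get("name"):
        errors.append("name is required")
    if record.get("balance", 0) < 0:
        errors.append("balance may not be negative")
    return errors
```

An empty list means the record is valid; a non-empty list gives the caller something concrete to report or act on.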

Class files as first class citizens

So, how long are we going to have to wait before we can instantiate a class file as the top level component, including on AppServer? No, defining .p facade procedures is not a positive feature. And, no, that doesn't mean that I am advocating locking down an AppServer or exposing inappropriate implementation details.

getSIPath()

OK, so we don't have abstract methods yet, but why define a nonsense method body that is always overridden instead of just putting this in an interface?
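The distinction being drawn can be sketched in Python, where abstract methods do exist (names are illustrative): the interface-style declaration carries no body at all, whereas the workaround in the presentation supplies a dummy body that every subclass must override.

```python
from abc import ABC, abstractmethod

class ServiceInterfaceAware(ABC):
    @abstractmethod
    def get_si_path(self):
        """No implementation here; subclasses must supply one."""

class CustomerBE(ServiceInterfaceAware):
    def get_si_path(self):
        return "si/customer.p"
```

Attempting to instantiate ServiceInterfaceAware directly fails, which is exactly the compile-time guarantee a dummy method body cannot give.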

templates vs generator

I readily agree that there is a lot of repetitive code in an application, but I would much rather see PSC move in the direction of generating this code than using or advocating templates based on include files and preprocessor statements. Someone asked "Are there tools on the market which produce ABL out of UML?". The answer might be no today, but the tools and capabilities are certainly there and we could all benefit by exploiting that capability.
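A minimal sketch of the generation idea in Python (the template text and model fields are invented for illustration): fill a source template from a model description instead of pasting include files into every component.

```python
# One template per kind of component; tokens are filled from the model.
TEMPLATE = """CLASS {package}.{entity}BE:
    METHOD PUBLIC VOID fetch{entity}():
        /* generated data access call */
    END METHOD.
END CLASS.
"""

def generate(model):
    """Render one class file body per entity in the model."""
    return {f"{m['entity']}BE.cls": TEMPLATE.format(**m) for m in model}
```

The repetitive code still exists, but it is produced mechanically from one definition rather than maintained by hand in many places.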

All Replies

Posted by Admin on 22-Mar-2007 03:40

While I recognize that there were only 37 other people who attended the seminar this morning on OERA

A practical issue: some people live in another timezone.

validateData()

Why would a validateData() method not have a return type? What is it supposed to do when the data doesn't validate?

What happens when you don't call it? Or will it always be called implicitly as well, as a double check? How do you avoid expensive operations?

Class files as first class citizens

So, how long are we going to have to wait before we can instantiate a class file as the top level component, including on AppServer?

A stateless AppServer requires a non-persistent procedure call, so you won't get bound to a specific AppServer session. So I'm in favor of exposing the AppServer API as non persistent procedures... It's a facade you create, making you aware that you draw a line between the internal object model and the external one. You can always generate this layer by annotating your internal class methods.
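The annotation idea at the end could look something like this Python sketch (the decorator and generator are hypothetical, not an existing Progress tool): mark which class methods form the external API, then generate the flat facade layer from those marks.

```python
# Registry of entry points the generated facade layer would wrap.
EXPOSED = []

def expose(fn):
    """Decorator marking a method as part of the external API."""
    EXPOSED.append(fn.__name__)
    return fn

class CustomerBE:
    @expose
    def fetch_customer(self, cust_num):
        return {"cust_num": cust_num}

    def internal_helper(self):
        # Not annotated, so it never appears in the external API.
        pass

def facade_names():
    """The generated facade would wrap exactly these entry points."""
    return list(EXPOSED)
```

The internal object model stays rich, while the line between internal and external is drawn explicitly by the annotations.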

OK, so we don't have abstract methods yet, but why define a nonsense method body that is always overridden instead of just putting this in an interface?

An interface means that you will have to implement the entire interface, while an abstract class gives you the luxury of having a base implementation.

"Are there tools on the market which produce ABL out of UML?". The answer might be no today, but the tools and capabilities are certainly there and we could all benefit by exploiting that capability.

You could use the Eclipse template based approach instead of includes.

Posted by Thomas Mercer-Hursh on 22-Mar-2007 11:23

This was an imitation of an abstract method, not an abstract class. E.g.,

In order for this to be of any use whatsoever, it must be overridden in a subclass. My point is that it can be placed in an interface without the implementation details and that makes it cleaner. One can implement multiple interfaces, so in the worst case one could even put this method alone in its own interface.

Posted by Thomas Mercer-Hursh on 22-Mar-2007 11:25

Here is the link to where the recording will be posted, but it isn't there yet.

http://www.psdn.com/library/entry.jspa?entryID=2233

Posted by john on 22-Mar-2007 12:14

... I would like to throw out some comments in reaction to this presentation for discussion.

Thomas, thanks for attending and for your feedback. A few responses...

Naming standards

...

Having things named things like samples.beCustomer just doesn't cut it when there is more than a handful of files. It really should be something like com.progress.autoedge.ar.customerBE instead, i.e., using the standard tla.domain.application.module structure and postfix type identifiers

This is a good point, and although as you point out, it's easy enough to set your own standards, our own examples can certainly move in a direction of a set of standards that will scale better than some that have been used.

Singletons

... Why not just New the session services manager and include this type of code in the structure I described to insure that this manager remains a singleton. Then, when the singleton keyword is implemented (bet you didn't know that it is already recognized) one can just change the one service manager class and everything else stays the same?

Creating an instance of an intermediary class to do the work of locating or starting the "real" singleton class is an alternative to what was shown, but I'm not sure whether the extra overhead of all of these intermediaries is justified. Whether the search is done in a super class, as shown, or delegated in some other way, or done as in your example by NEWing an intermediary, the result is pretty much the same, but as noted, the overhead of all of the intermediaries -- and of later passing all requests through this extra layer -- would concern me. BTW, the product group is of course always working on new features including extensions to our support for classes, but I would not bet at this time that the SINGLETON keyword per se will actually be used in the language; that remains to be seen.

PDS instead of entity objects

Yes, I know that John is presenting this as an advantage of OOABL, but I think that if we are going to illustrate OO, we should actually do OO.

But why? The whole thrust of the support for classes in ABL is to allow a mix-and-match not only of (as time goes on, mostly older) procedural code and (mostly newer) class-based code, but also to allow the same kind of relational definition and access to data while taking advantage of the useful practical features of classes (strong typing, etc.). I don't understand why "doing OO" per se is a goal. My goal was to illustrate the use of classes in ABL, not to illustrate OO for its own sake.

Include files

I really don't like having numerous include files and preprocessor definitions scattered all over my class files. Like I said, very .p.

Well, I would share the goal of removing include files wherever possible. In this sample implementation, there are two principal ones: one is the dataset definition (and as shown, its constituent temp-tables in their own nested include files, if you want to break it up that way). This is used in two separate class hierarchies, one for the BE and one for the DAO; there could be other ways of doing this, so the DataSet definition would be inherited by both sides and the include file wouldn't be needed. The other is the boilerplate code to support the individual Service Interface procedures. Since (in today's world, at least) this has to be a 'simple' .p (not a persistent procedure, not a class), putting the common code into an include file is -- as you say -- "very .p", but that's what the thing is. It could have been delegated out to another running procedure or class, as an alternative, but in the sample it is in fact included in a procedure.
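The inheritance alternative John hints at can be sketched in Python (the schema and class names are invented): put the shared data definition in one place that both hierarchies consume, instead of pasting an include file into each.

```python
# The one shared definition, owned by a single module rather than
# duplicated via include files.
CUSTOMER_SCHEMA = ("cust_num", "name", "balance")

class CustomerRecord:
    """Record shape built from the shared schema; usable by both the
    business entity side and the data access side."""
    def __init__(self, *values):
        for field, value in zip(CUSTOMER_SCHEMA, values):
            setattr(self, field, value)
```

If the schema changes, there is exactly one definition to edit, and both class hierarchies pick it up, which is the maintenance benefit the include file was approximating.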

Class files as first class citizens

So, how long are we going to have to wait before we can instantiate a class file as the top level component, including on AppServer?

I can't answer that, but I would still question whether that lack makes the sample shown an unattractive workaround as opposed to a perfectly reasonable solution. Given the basic facts that (a) you don't want to bind the AppServer session by instantiating either a persistent procedure or a class, and (b) this top level of code on the server side needs to be able to invoke various standard support services, you don't really want the client invoking the real working classes on the server directly anyway. This is not to say that the lack of being able to NEW a class on the server from the client, or run an entry point in an already running procedure or class, is not important to address in a future release of the product, but that doing this would still not become the norm in most Service Request operations of a well-constructed application.

getSIPath()

OK, so we don't have abstract methods yet, but why define a nonsense method body that is always overridden instead of just putting this in an interface?

You still need an empty method body (or one that signals an error if it's actually run) to keep the compiler from complaining about the method reference in the super class with no definition.
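The workaround John describes, sketched in Python (names are illustrative): until true abstract methods exist, the super class carries a body that fails loudly if it is ever run without being overridden.

```python
class ComponentBase:
    def get_si_path(self):
        # Dummy body: satisfies the compiler's need for a definition,
        # but signals an error if it is ever actually invoked.
        raise NotImplementedError("get_si_path must be overridden")

class OrderBE(ComponentBase):
    def get_si_path(self):
        return "si/order.p"
```

The trade-off versus an interface is that the mistake is caught at run time rather than compile time, which is exactly the objection raised above.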

templates vs generator

I readily agree that there is a lot of repetitive code in an application, but I would much rather see PSC move in the direction of generating this code...

In 10.1B it is straightforward to extend OE Architect menus to invoke tools that can be written in ABL. It is also easy to tell the DataSet builder to export the data that results from building temp-tables and DataSets into a database rather than as XMI. Given this, it then becomes easy to write your own tools in ABL to generate all the standard components of an application from template files (using IMPORT, REPLACE, and PUT). A description of this is planned for the near future as a further set of samples and another paper for PSDN and -- time permitting -- as an extension to the presentation for Exchange. We (i.e., OpenEdge) are not likely to standardize on a single set of "official" templates because there are so many variants that people will want to have; given the raw data in a usable and familiar form, filling in templates from ABL is pretty trivial.
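A rough Python analogue of the IMPORT/REPLACE/PUT workflow described (the token syntax and field names are invented): read a template line by line, substitute tokens drawn from the exported definition data, and emit the result.

```python
import io

def fill_template(template_text, values):
    """Line-oriented template fill: read each line, replace %token%
    markers with values, and write the result out."""
    out = io.StringIO()
    for line in io.StringIO(template_text):
        for token, value in values.items():
            line = line.replace("%" + token + "%", value)
        out.write(line)
    return out.getvalue()
```

This is deliberately unsophisticated; the point in the post is that once the definitions are in a usable form, this level of tooling is trivial to write.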

Once again, thanks for your comments.

Posted by Thomas Mercer-Hursh on 22-Mar-2007 12:47

That would be appreciated ... it helps to set good examples at every opportunity.

As to the singleton issue ... I suppose that if one is never going to implement the keyword which is already there, then it might be a moot point, but the attraction I see in the technique I used is that all components except the intended singleton can remain the same when the keyword is implemented and, in fact, however it is implemented as long as it is a property of the class. Yes, in your approach one just changes the base component class and recompiles, and that is certainly better than an include or something, but it does mean that any impact analysis or whatever that one has is changed application wide.

John, I think this is an area where we have very different opinions ... so I'm trying to advocate mine. It is obviously a good thing to allow people to mix and match, but you make it sound like a goal, i.e., the preferred target. In fact, your summary in the webinar seemed to be making that point explicitly that using this approach avoided the traditional OO coupling issues with the RDBMS. Obviously, you have a point and one can take that approach, but I think it is not as good an approach as using real entity objects because that is the way that one can encapsulate data and behavior in a single class. This also gets one away from this messy business of having to have include file references scattered around the application so that a common dataset definition can be shared. There is just no way I see this as good encapsulation. I am more than happy for you to write a whitepaper showing people who might like that sort of thing how they can mix and match ... it might be more comfortable for some people and it might help some to interface with the rest of their application. But, if we are looking to create a model for a reference architecture, it shouldn't be that way. And, for the include files, it is really the same issue. It is poor encapsulation and there are better, more OO ways of accomplishing the purpose.

On the issue of running classes instead of procedures, see my remarks above in response to Theo. I just see no reason why I can't start an application with a class and why I can't have facade classes on the AppServer. It is only a part of the problem, but it should be a trivial part of the problem. It should have been in 10.1B.

Why can't you just define the method in an interface?

I can't get very excited about writing code generators in ABL ... it is not one of the kinds of things that ABL is good at. I agree that one has to be cautious about having PSC come up with too much of the mechanism because it is unlikely that I'm going to like what you build, e.g., T4BL, BUT there is one big possible exception to this and that is if the mechanism is open and based on industry standards. In fact, this is a perfect opportunity for an open source project with PSC involvement, not POSSE like, but something which actually belongs to the community. Frankly, I can't see why the technology should be anything other than MDA because that would allow us to leverage off of existing tools like EA. PSC could help get this started and then let the community take it where the community wanted. It might well result in multiple flavors and branches, but what's wrong with that?

Posted by Admin on 23-Mar-2007 03:19

On the issue of running classes instead of procedures, see my remarks above in response to Theo. I just see no reason why I can't start an application with a class and why I can't have facade classes on the AppServer. It is only a part of the problem, but it should be a trivial part of the problem. It should have been in 10.1B.

Being able to execute a static class method on the AppServer might be a reasonable option, but is this really any different from executing a non-persistent .p? Remotely instantiating a class instance and then invoking a method on it in the next roundtrip, however, doesn't sound like a good idea, since the AppServer has to manage the lifetime of the instantiated object. It's similar to launching a persistent procedure and invoking internal methods on it. It means you will be bound to the AppServer.

But maybe you're concerned about the fact that OOABL'ers have to learn about OOABL and procedural programming when they want to do something with the AppServer. Here I would say that tooling is the answer: the proxygen tool could generate the .p wrapper (stubs) for you as well as the client side proxies.

Posted by Admin on 23-Mar-2007 03:22

Creating an instance of an intermediary class to do the work of locating or starting the "real" singleton class is an alternative to what was shown, but I'm not sure whether the extra overhead of all of these intermediaries is justified.

That was my response to Thomas as well in "the war against shared variables" discussion http://www.psdn.com/library/thread.jspa?messageID=7250. Especially since there is no garbage collection in OOABL, you have to construct and destruct the classes yourself, which makes it very easy to leak memory.

Posted by Admin on 23-Mar-2007 03:54

Include files

I really don't like having numerous include files and preprocessor definitions scattered all over my class files. Like I said, very .p.

Well, I would share the goal of removing include files wherever possible. In this sample implementation, there are two principal ones: one is the dataset definition (and as shown, its constituent temp-tables in their own nested include files, if you want to break it up that way).

The issue here basically is: do you create entity classes that encapsulate data (buffers and temp-tables) and expose them as properties, or do you pass temp-tables around explicitly (a data driven architecture)? You see that .Net developers are struggling with the same architectural design decision: use (typed) datasets or use simple objects in the internal object model. For user interface frameworks and data binding it's often easier to work with generic containers that have a known interface, to reduce dynamic inspection of objects.

There are different aspects for each physical tier:

- usage of database data in the server

- transformation of deserialized client data on the server

- usage of deserialized server data on the client

In the server tier I don't think OOABL is suited for wrapping each customer row instance in a customer entity, thereby creating lots of OOABL class instances. Wrapping a resultset (temp-table) might be a more feasible approach in this environment, since it will reduce the number of OOABL class instances compared to the entity approach. But what have you achieved with this approach? A temp-table is already a typed definition. And putting behavior in a resultset wrapper might not be that meaningful. Sure, you don't have to distribute the temp-table definition in an include file, but you still have to distribute the class declaration in an include file. The class declares a property for most of the temp-table fields, so what's the gain here?

Statement: "OOABL works best (read: most efficient) in a data driven architecture, where data is decoupled from behavior and data is passed through the tiers in temp-table format."

Before we can continue with OO...OERA we should have a consensus on this statement.

Posted by Thomas Mercer-Hursh on 23-Mar-2007 11:16

What is the functional difference between a non-persistent .p and a facade class that does its work in the constructor?

Posted by Thomas Mercer-Hursh on 23-Mar-2007 11:23

Whether the search code is in the base component class or in the facade, the same work is being done. The only "extra" overhead in my approach ... overhead which only exists until we are provided with true singletons ... is the extremely small class instance for the facade, but note that the search code in it then no longer has to be in the base component and so that code is removed from every component. Sure seems pretty close to a wash to me, unless there is some huge overhead for having a class instance that doesn't apply to a .p. Given that we don't have multi-threaded sessions, just how many components do you expect to have instantiated in any case?

Also note, with the pseudo-singleton approach, the first component instantiated in the session will automatically create the session manager instance. With the base component approach, the session manager has to be created externally before the first component is instantiated.

Posted by Thomas Mercer-Hursh on 23-Mar-2007 11:42

Apparently, we don't have a consensus on this principle. In fact, I would say that the whole notion of decoupling data and behavior is antithetical to the principles of OO. You have every right to advocate some hybrid architecture like this if you want, but we really ought to invent a new name for it to distinguish it from the principles that have long been associated with OO. Maybe Pseudo-OO (POO) or Hybrid-OO (HOO)?

Posted by Thomas Mercer-Hursh on 23-Mar-2007 12:34

I know .... Data-Object Hybrid ... DOH!

Posted by Admin on 23-Mar-2007 13:33

What is the functional difference between a non-persistent .p and a facade class that does its work in the constructor?

Two remarks:

1) even when you put all the logic in the constructor, you still instantiate a class instance. So after initialization, you have a loaded class instance and you need a new server call to delete the instance. A non-persistent procedure doesn't suffer from this issue, since you call the server, let the .p do its work, the result is serialized, and the server is released for new requests.

2) you should use the constructor to initialize a class instance and leave it in a valid state. Real work should be done in a method called from the outside. When a class is complex to setup, you can refactor that code using an assembler pattern. A constructor with lots of logic is mostly an indication of a badly designed class.
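The design rule in remark 2, sketched in Python (the class and its fields are invented for illustration): the constructor only puts the object into a valid state, while the expensive work lives in a method the caller invokes explicitly.

```python
class OrderFetcher:
    def __init__(self, order_num):
        # Cheap: just record what to fetch; the object is now in a
        # valid state without having done any real work.
        self.order_num = order_num
        self.result = None

    def fetch(self):
        # The real (potentially expensive) work happens here, on
        # explicit request from the caller.
        self.result = {"order_num": self.order_num, "lines": []}
        return self.result
```

Keeping the constructor light also makes the class easier to test and to assemble from factory or assembler code, as the remark suggests.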

Posted by Thomas Mercer-Hursh on 23-Mar-2007 13:43

One remark ... other OO environments get the job done without having to step outside the OO paradigm. We should be able to do so as well.

Posted by Admin on 23-Mar-2007 13:59

Statement: "OOABL works best (read: most efficient) in a data driven architecture, where data is decoupled from behavior and data is passed through the tiers in temp-table format."

Apparently, we don't have a consensus on this principle. In fact, I would say that the whole notion of decoupling data and behavior was antithetical to the principles of OO.

This is not my opinion nor my preference. It's my question to John (PSC). What's the correct usage of OOABL, since ABL is a data-centric and transactional language with its buffers, buffer scoping, temp-tables, etc.? Is OOABL suitable for creating (lots of) entity classes, given the internal implementation of OOABL and the lack of a garbage collector?

Posted by Thomas Mercer-Hursh on 23-Mar-2007 16:32

I don't know that I think it necessarily should be PSC's job in general or John's in specific to answer the question. Their track record on real world best practices isn't exactly stellar. I don't see any reason why we can't be the ones to determine how it is that we want to use the language and to ask them to correct any deficiencies which get in the way of our using it the way we think it should be used. If there is some real performance issue with using entity classes, let's experiment and see what it is. As a stupid little exercise I created a class with four properties, which, of course, includes the implicit getter and setter methods for those properties. That's about 2.6K compiled. If I define a .p with a four column, one row temp-table and procedures to set and get the values, that is about 6.9K ... so far we are ahead with the class approach.

Posted by Thomas Mercer-Hursh on 26-Mar-2007 12:43

The webinar recording and the PPT of the slides have now been posted

http://www.psdn.com/library/entry.jspa?entryID=2233

Posted by Admin on 26-Mar-2007 14:31

The webinar recording and the PPT of the slides have now been posted http://www.psdn.com/library/entry.jspa?entryID=2233

Thanks Thomas, this presentation answers most of my questions.

"...

+ ABL does not specifically support true objects as data representations

- No collections

- Relational joins between related data elements

- Cannot pass an object as a parameter

+ But – mapping relational physical data to objects and back is avoidable overhead

..."

Hehe... no real support for using entity classes, but who cares, "mapping... is avoidable overhead"

It really feels like ABL didn't advance over time when I look at constructs like this:

{&PREFIX-NAME}{&COMPONENT-NAME}{&PACKAGE-NAME}{&PREFIX-NAME}{&COMPONENT-NAME}{&PREFIX-NAME}{&PACKAGE-NAME}{&PREFIX-NAME}{&COMPONENT-NAME}{&PREFIX-NAME}{&COMPONENT-NAME}{&METHOD-NAME}{&PARAM-LIST}

What on earth are you going to save with this include? It looks like the 3rd algorithm in launching the space shuttle. Do you understand what's going on when you don't have the include and see this in your business logic:

{templates/fetchdata_params.i}{templates/apicode.i}

This presentation tells me OOABL will never be a serious step forward in productivity. It has its place, sure, but this is not the breakthrough I was hoping for...

Posted by Thomas Mercer-Hursh on 26-Mar-2007 14:38

The collection piece I think is irrelevant since I have already demonstrated that one can implement them in ABL code.

The cannot pass as parameter ... is really only across the wire and doesn't actually bother me much since it appears that marshalling and demarshalling via XML is possibly faster anyway.

The real core issue here is the OO vs DOH approach. I guess it is pretty clear where I stand on not seeing the hybrid approach as a virtue.

Posted by Admin on 26-Mar-2007 14:41

It feels like ADM with new wrappers.....

Posted by Thomas Mercer-Hursh on 26-Mar-2007 14:48

But, I don't think the model presented in this presentation is the only one possible with OO ABL. I think that a better implementation can be done; this just isn't it.

Posted by Admin on 26-Mar-2007 14:54

But, I don't think the model presented in this presentation is the only one possible with OO ABL. I think that a better implementation can be done; this just isn't it.

Maybe. But in that case you will probably end up fighting the language, something we learned the hard way with the ProDataSet for instance.

This is for instance illustrative:

    IF NOT VALID-OBJECT(sessionObject) THEN
        FatalError("Session management service not found!").
    ELSE
        servicemgr = CAST(sessionObject, service.servicemgr).
    END CONSTRUCTOR.

    METHOD PUBLIC VOID FatalError (pcMessage AS CHAR):
        MESSAGE "Fatal error!" SKIP pcMessage VIEW-AS ALERT-BOX ERROR.
    END.

What do you want with a message box on the server? I want to throw an exception here. I can't, of course; I can do a "RETURN ERROR", but that hasn't proven to be a very robust construct.
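What the poster is asking for, sketched in Python (the exception class and function are invented for illustration): raise a typed exception the caller can catch, instead of popping a message box or relying on RETURN ERROR.

```python
class ServiceNotFoundError(Exception):
    """Typed error the caller can catch and handle appropriately."""

def attach_session_manager(session_object):
    # Instead of MESSAGE ... VIEW-AS ALERT-BOX on a server, fail with
    # an exception that propagates to whoever can actually handle it.
    if session_object is None:
        raise ServiceNotFoundError("Session management service not found!")
    return session_object
```

The key difference from a message box is that the error travels up the call chain as data, so server-side code can log it, map it to a client response, or rethrow it.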

Posted by john on 26-Mar-2007 15:29

John, I think this is an area where we have very different opinions ... so I'm trying to advocate mine. It is obviously a good thing to allow people to mix and match, but you make it sound like a goal, i.e., the preferred target. In fact, your summary in the webinar seemed to be making that point explicitly that using this approach avoided the traditional OO coupling issues with the RDBMS.

"Mix and match" can mean a couple of things here. Where the mixing of classes and procedures as types of ABL source files is concerned, I would suggest that in new development at least, classes have pretty much supplanted at least persistent procedures as a programming mechanism (this is not necessarily a view that is universally held -- yet). There are a few specific feature areas such as PUB/SUB that do not have their equivalent in classes yet, but those are being addressed. But of course, an unavoidable part of just about any real situation is migration of at least a part of an existing application to a changing world, and the fact that you can do development in classes and intermix that flexibly with existing procedures certainly helps with the migration strategy.

But more fundamentally, the debate seems to be over classes as a programming tool mixed with a relational view of the data they manage. While we can and no doubt will continue to have divergent views of whether the mix is really advantageous, I would restate the argument that treating your data primarily as a relational object managed within a class is legitimate and in most business situations advantageous over having to bridge the object-relational-mapping gap between the way the data is in most cases stored and how your programming logic deals with it. And designating a specific object such as a Business Entity to be responsible for all access to a specific set of data couples the data to the logic, though it remains true that the data can be extracted as a unit from the class that manages it. And much of the power of the language as a tool for manipulating data is well worth keeping and taking full advantage of, which is directly tied to viewing it as rows and fields within tables and not just as properties with getters and setters.

Posted by john on 26-Mar-2007 15:31

Why can't you just define the method in an interface?

If there's no implementation of the method in the super class that implements the interface, the compiler will complain; so lacking true abstract methods, you currently have no choice but to put a 'dummy' implementation of the method into the super class and then put the meaningful implementation into the subclass.

Posted by john on 26-Mar-2007 15:36

I can't get very excited about writing code generators in ABL ... it is not one of the kinds of things that ABL is good at.

I would agree, but there are at least two levels of tools we could be talking about here. What I was more immediately talking about is a fairly simple tool that can do substitution for basic information like or , as a simple replacement in a template source file. This is simple enough to do in ABL and can help you create a large part of the support skeleton for data definitions that come out of something like the OEA DataSet builder. Doing true application logic generation using an MDA is a whole different level of generation; heaven knows that effective tools have been built and are being built in ABL, but I would not argue that it's ideal for that purpose. There are initiatives going on within the community in the application generation area that we can all hope to be made more widely available -- that will not be for me to say or to undertake.

Posted by Thomas Mercer-Hursh on 26-Mar-2007 15:38

I don't think that one ends up needing to fight the language ... just decide what parts of it to use and how to use them. To be sure, there are some additional language elements which would be useful, but I think we have enough to do real work.

In that code sample, I expect that the response will be that this is just preliminary stuff to illustrate the principles and that is why it has a message in it, but I agree with you that it would be a more acceptable example without stuff like that.

An exception mechanism would certainly be nice ... and I mean one more like this http://www.oehive.org/ExceptionClass than what we have now.

But, that said, I think one of the ironies of putting this stuff in the base component class instead of in the session manager per my technique is that it relies on the session manager getting started before any components, or one gets an error that needs to be handled. If handled as a pseudo-singleton, it just starts the session manager if it isn't there already.

Posted by john on 26-Mar-2007 15:45

In terms of context, I think we can distinguish three general cases of usage pattern:

1) Use of individual entities, one at a time;

2) Use of closely related entities, but which are often processed individually; and

3) Use of groups of entities treated primarily as a set.

The first corresponds to something like maintaining a customer record; the second to something like updating the order lines associated with an order; and the third to reporting, particularly on aggregate data.

...The third case might be argued in favor of wrapping a temp-table or PDS, but only if one can work out reasonable methods for accessing the properties of the elements.

Maybe so, but I think the third case is the one that most often represents the work to be done in a business application. Given a relational structure (coming out of the database in any case) that represents -- whatever -- a customer and its multiple addresses and its contact information and its credit history -- the whole thing typically needs to be dealt with as a unit, not just as a collection of entities. This is what the PDS is all about as a complex relational data object. The first case, expressing what amounts to a single row as an object with properties and getters and setters, is more manageable, but perhaps not the most typical use case. And of course the language traditionally provides a very effective way to access the properties of the elements: "IF table.fieldname = ..." True, the language does not enforce a restriction against cheating by accessing a table or field outside of the class that has been defined to manage the data, but that has to be a part of the development discipline that underlies any implementation of an application architecture.
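The first case, a single row expressed as an object whose properties are the only way in or out, can be sketched like this (Python for illustration; the Customer fields are invented):

```python
# A "case 1" entity: one logical row wrapped as an object. Access
# goes through properties, so validation and encapsulation live in
# one place. Field names are invented for illustration.

class Customer:
    def __init__(self, cust_id: int, name: str, balance: float = 0.0):
        self._cust_id = cust_id
        self._name = name
        self._balance = balance

    @property
    def name(self) -> str:
        return self._name

    @name.setter
    def name(self, value: str) -> None:
        # The setter is the single choke point for validation.
        if not value:
            raise ValueError("name must not be blank")
        self._name = value

    @property
    def balance(self) -> float:
        return self._balance

c = Customer(1, "Lift Line Skiing")
c.name = "Lift Tours"
```

The trade-off debated in this thread is exactly here: the object gives enforced encapsulation, while the temp-table gives the familiar "IF table.fieldname" idiom with encapsulation only by discipline.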

Posted by john on 26-Mar-2007 15:48

What is the functional difference between a non-persistent .p and a facade class that does its work in the constructor?

Two remarks:

1) even when you put all the logic in the constructor, you still instantiate a class instance. So after initialization, you have loaded a class instance and you need a new server call to delete the instance. A non-persistent procedure doesn't suffer from this issue, since you call the server, let the .p do its work, the result is serialized and the server is released for new requests.

Well, of course, you can end the constructor in the procedure-turned-class with the statement DELETE OBJECT THIS-OBJECT, which makes the class pretty much identical in its behavior to a 'traditional' .p (with no internal entry points) -- but what's the advantage of expressing it as a class?

2) you should use the constructor to initialize a class instance and leave it in a valid state. Real work should be done in a method called from the outside. When a class is complex to set up, you can refactor that code using an assembler pattern. A constructor with lots of logic is mostly an indication of a badly designed class.

Agreed, so why can't things like this remain procedures?

Posted by john on 26-Mar-2007 15:54

This is not my opinion nor my preference. It's my question to John (PSC). What's the correct usage of OOABL, given that ABL is a data-centric and transactional language with its buffers, buffer scoping, temp-tables, etc.? Is OOABL suitable for creating (lots of) entity classes, knowing the internal implementation of OOABL and the missing garbage collector?

I hope no-one thinks it is up to me to supply the one true official answer to the question. I will only say that when I suggested in the presentation that the combination of procedural and class-based capabilities in the language -- even as it stands today, with some desirable features still lacking -- is an advantage rather than a workaround for serious defects, I was quite serious (Theo's derision posted later in this thread notwithstanding). Other languages are moving in the direction of supporting better relational data access in otherwise object-oriented languages, and we think they've got some catching up to do. I don't think that being wholly or purely object-oriented is going to be a primary concern to most developers of business applications.

Posted by john on 26-Mar-2007 16:00

It really feels like ABL didn't advance over time when I look at constructs like this:

servicemgr:sessionContextService:setContextId(pcContextId).
authenticationService = servicemgr:authenticationService.
authenticationService:loadPrincipal().
= CAST (servicemgr:startService ('.', ""), .).
:().

OK, maybe I made the mistake of exposing a very short-term approach too explicitly, but it is easy enough to do simple string substitution in this kind of template (far short of anything in the way of complex code generation) to make this much more readable, instead of using preprocessors. And mostly, this type of boilerplate code is not what you're inspecting when you're looking at the finished application, precisely because it is the part that provides standard set-up behavior in a consistent and mechanical way.

Posted by Thomas Mercer-Hursh on 26-Mar-2007 16:01

There are a few specific feature areas such as PUB/SUB that do not have their equivalent in classes yet, but those are being addressed. Yes, this is a very different topic than the one above because it is about what good OOABL programming is like in its best form. In terms of the OO vs DOH, or ROH if you prefer, I think it is important to think in terms of the "levels" I cited earlier. When one is dealing with what amounts to a single entity -- one customer, one item -- I don't see what advantage you can cite for treating this as a row in a table. Oh, I suppose that creating a one-row temp-table means that you can use the WRITE-XML method to serialize the object, but that seems like a lot of overhead for what gets accomplished. It seems to me that at this level one has on the one hand a very clean encapsulation of data and behavior in the OO entity class, whereas in the model you are proposing we end up with poor encapsulation, necessitating putting temp-table or PDS definitions as includes to multiple files.

Moreover, about this great burden you talk about for doing the OO to relational mapping ... not only is this a part of the code which is highly susceptible to generation, but think about how hard it really is. If one has control over the data structure so that one row in one table is equal to one entity object, then the mapping is trivial, no matter how you do it. If on the other hand an entity object corresponds to a combination of information from a dozen different tables ... as might be the case with an order, for example, then there is work to do whether one represents things relationally or as an object. Why is building an object, possibly even one containing temp-tables when that is appropriate, all that more difficult than building a 12-table PDS? To me, real encapsulation is one of the key elements of the advantage of OO and you don't have that when you are sticking include files all over the place.

Posted by Thomas Mercer-Hursh on 26-Mar-2007 16:04

Well, unless you put the "implements interface" on the class level where it is going to be actually implemented and don't put it on the superclass.

Posted by Thomas Mercer-Hursh on 26-Mar-2007 16:06

Both of which really already exist. The simple one is represented by things like the JET technology in Eclipse and the complex one is represented by MDA. We don't need to write the tools, we just need to come up with the templates and transforms.

Posted by Thomas Mercer-Hursh on 26-Mar-2007 16:21

It seems to me highly typical, especially if you consider that it is common practice in database design to define a lot of code tables, so that a record like an item contains a whole bunch of short codes that indicate properties of the item. These codes are both short and controlled by making them a reference to another table, and one has the option of putting additional information in that code table. In use, however, what do you really want? Do you want a PDS with one item record and 20 other tables for the codes, each of which also has only one record in it? Or do you simply want to de-normalize into a single flat record, at which point we have a perfect setup for a very simple entity class with no complex internal data structures?

Posted by Thomas Mercer-Hursh on 26-Mar-2007 16:24

Well, for one it means that we have more awkward coupling to OO modeling tools. The real question though is why should they have to remain procedures? Why not simply enable instantiating a class as the top-level action and then let people make the choice? This can't really be difficult.

Posted by Thomas Mercer-Hursh on 26-Mar-2007 16:26

Well, based on the discussion so far ... I hope not! Perhaps it should be.

Posted by Admin on 27-Mar-2007 01:22

>> 3) Use of groups of entities treated primarily as a set.
>> ...

Maybe so, but I think the third case is the one that most often represents the work to be done in a business application.

I really doubt that this is the typical use case in a business application. We typically poke around in different tables, we don't pull in entire hierarchies. And this explains why it's hard to scope ProDataSets: you probably need access to all those tables, but you need access conditionally.

Given a relational structure (coming out of the database in any case) that represents -- whatever -- a customer and its multiple addresses and its contact information and its credit history -- the whole thing typically needs to be dealt with as a unit, not just as a collection of entities.

In case of "customer maintenance" you're right, but in all other cases you're wrong. During order entry you might want to know the default delivery address of the customer only. Finding the default delivery address is a business rule. At a later stage you might want a list of all the addresses of the customer, so the sales entry clerk can pick one.

This is what the PDS is all about as a complex relational data object.

As long as everything is connected to each other, so that the consumer of the data (the client) doesn't have to join this locally, it would be OK. But what you typically see is that the database tables are stored 1:1 in temp-tables, ProDataSets are defined, and the client ends up joining these tables at the user interface level. This means it needs to have knowledge of the physical database schema, since it needs to know which tables to join to get the relevant denormalized data.

The first case of expressing what amounts to a single row as an object with properties and getters and setters is more manageable, but perhaps not the more typical use case.

It can also offer "lazy loading", since you're not merely talking to a data structure; you're talking to something "smart" that is capable of asking for additional data on demand.
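Lazy loading in an entity object can be sketched like this (Python for illustration; the loader and field names are invented stand-ins for a real data access call):

```python
# Lazy loading: the entity holds only its key at first and fetches
# the expensive detail (here, credit history) the first time it is
# asked for. The fetch function is a hypothetical stand-in for a
# real data access layer call.

class Customer:
    def __init__(self, cust_id, fetch_history):
        self.cust_id = cust_id
        self._fetch_history = fetch_history
        self._history = None          # not loaded yet

    @property
    def credit_history(self):
        if self._history is None:     # load on first access only
            self._history = self._fetch_history(self.cust_id)
        return self._history

calls = []
def fetch(cust_id):
    calls.append(cust_id)             # record each back-end hit
    return ["2006: paid net 30", "2007: paid net 45"]

c = Customer(42, fetch)
c.credit_history      # triggers the single fetch
c.credit_history      # served from cache; no second call
```

A bare temp-table or PDS is just data, so it has no natural place to hang this kind of on-demand behavior; the property is what makes the object "smart".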

Posted by Admin on 27-Mar-2007 01:28

Two remarks:

1) even when you put all the logic in the constructor, you still instantiate a class instance.

...

Well, of course, you can end the constructor in the procedure-turned-class with the statement DELETE OBJECT THIS-OBJECT, which makes the class pretty much identical in its behavior to a 'traditional' .p (with no internal entry points) -- but what's the advantage of expressing it as a class?

Exactly! And destructing yourself during construction is a really bad habit ... I wonder why the feature exists at all.

2) you should use the constructor to initialize a class instance and leave it in a valid state.

...

Agreed, so why can't things like this remain procedures?

Thanks, that's what I tried to explain.

Posted by Admin on 27-Mar-2007 01:47

But more fundamentally, the debate seems to be over classes as a programming tool mixed with a relational view of the data they manage.

I think the real argument here is single point of definition. How can you achieve a substantial level of well-defined software aspects being used consistently?

And designating a specific object such as a Business Entity to be responsible for all access to a specific set of data couples the data to the logic, though it remains true that the data can be extracted as a unit from the class that manages it.

Sure, you can query the database directly as well. It means you will have to replicate the knowledge of your data model in your reports, which has proven to be a complex issue for end users...

And much of the power of the language as a tool for manipulating data is well worth keeping and taking full advantage of, which is directly tied to viewing it as rows and fields within tables and not just as properties with getters and setters.

Here you have a point: this used to be true! But most 4GL graphical user interfaces don't do a simple "DISPLAY customer" anymore; no, they're using things like "ASSIGN custId:SCREEN-VALUE IN FRAME = customer.custId". And a lot of 4GL business logic code uses the dynamic aspects of the data access language, like creating buffers/queries dynamically, etc. And doing ABL database triggers doesn't seem to be the right way anymore either. That's why I plead for putting some productivity enhancements into the ABL language....

Posted by Admin on 27-Mar-2007 06:43

Maybe so, but I think the third case is the one that most often represents the work to be done in a business application.

One of the problems I see with the separation of data from logic is the way data requests are handled, for instance. Most temp-tables defined in sample applications are a 1:1 copy of the database table. This means that it's no problem for clients to pass in a filter string, which will be appended on the server by the datasource components. But there is something fundamentally wrong with this approach: it ties the client code to the database schema. And you can only express a filter when you know which tables are joined in the end.

It would be nice if the OOOERA sample would use real parameters all the way. A string-based filter could be allowed if there were a parser that validates and converts a temp-table WHERE to a database WHERE. The latter might not be that straightforward in real applications...

Posted by Thomas Mercer-Hursh on 27-Mar-2007 11:10

Lazy load is a pattern that one can certainly implement in an entity object, but I wouldn't want to try it in a PDS, I don't think.

Posted by Thomas Mercer-Hursh on 27-Mar-2007 11:13

Because some objects are transient. What else would you do if you were allowed to use a -cls parameter to instantiate a class, just as one can run a procedure with -p? That root object needs to get constructed, but there is no one around to destruct it.

Posted by Tim Kuehn on 27-Mar-2007 12:34

Because some objects are transient. What else would you do if you were allowed to use a -cls parameter to instantiate a class just like one can run a procedure like one does with -p. That root object needs to get constructed, but there is no one around to destruct it.

Supposedly if the session is shutting down - which leaving the root object implies - then deleting the root object isn't necessarily an issue, is it?

Posted by Thomas Mercer-Hursh on 27-Mar-2007 12:41

But, what shuts the session down? Yes, shutting the session down will take care of the cleanup, but something needs to get to the end or a stop or a quit in order to signal the session to terminate. Seems to me that an object saying, "my work is done and now I'm going away" is a perfectly reasonable way to do that.

Posted by Tim Kuehn on 27-Mar-2007 12:50

How about:

QUIT.

in the object?

DELETE OBJECT THIS-OBJECT.

isn't exactly an intuitive equivalent.

Posted by Thomas Mercer-Hursh on 27-Mar-2007 12:54

Matter of preference, I suppose. I would go with the delete because it wouldn't matter whether it was run by -cls, run by an AppServer call, or newed by some other object ... they would all go off, do some specified work, and then cease to exist and return to whence they came.

Posted by Admin on 27-Mar-2007 13:38

Because some objects are transient. What else would you do if you were allowed to use a -cls parameter to instantiate a class just like one can run a procedure like one does with -p. That root object needs to get constructed, but there is no one around to destruct it.

Huh? You want to create a class instance with purely private methods, instantiate the class, let the constructor invoke some private methods and finally let the constructor destruct itself???? So the lifetime of this class instance is the lifetime of the constructor? This is definitely not the way you want to design classes.

Posted by Thomas Mercer-Hursh on 27-Mar-2007 13:44

In general, no, but what else does one do for the root class from which everything else in the session derives?

Posted by Admin on 27-Mar-2007 13:55

In general, no, but what else does one do for the root class from which everything else in the session derives?

Can you explain what you mean by this? What is this root object? If this is some kind of singleton, then you can rely on the ABL runtime to dispose of the object when the runtime session ends.

Posted by Thomas Mercer-Hursh on 27-Mar-2007 14:06

Just as one launches some procedure such as a menu system with a -p parameter, one presumably would like someday to start a session with a -cls parameter. The -p typically does some setup and then launches a menu and when the user quits from the menu the procedure returns or "falls through the bottom" and the session is over. For -cls, one needs to do something to handle identification, the menu, etc. and then terminate when the user says they are done (or times out or errors out or whatever).

The runtime would clean up the object as a part of the session terminating, but how does the runtime get to the point of knowing that it is time to terminate the session unless that object exits? Probably, it could simply complete the constructor and the session would still exit, but then if you newed that same object from another object, one would have to remember to explicitly delete it when the constructor returned. I.e., in that code one would delete immediately following the new, which would look at least as peculiar as the delete at the bottom of the constructor.

Posted by Tim Kuehn on 27-Mar-2007 14:17

The runtime would clean up the object as a part of the session terminating, but how does the runtime get to the point of knowing that it is time to terminate the session unless that object exits? Probably, it could simply complete the constructor and the session would still exit, but then if you newed that same object from another object, one would have to remember to explicitly delete it when the constructor returned. I.e., in that code one would delete immediately following the new, which would look at least as peculiar as the delete at the bottom of the constructor.

At least it would be consistent with how other objects have to be handled.

Posted by Mike Ormerod on 27-Mar-2007 15:27

Maybe so, but I think the third case is the one that most often represents the work to be done in a business application.

How so? Reporting, maybe, but do you commonly create 10 customers at the same time or enter 10 orders at the same time? Is FOR EACH really more common than FIND?

Maybe, maybe not. What about order lines, or any Master-Detail construct? The point is that there will be multiple scenarios within an application that need to be dealt with. OK, what has been discussed and shown may only cover one of these scenarios, but nothing has been put forward to state the contrary either; the samples and examples are simply that, not supposed to be a panacea for all.

But if they have no other perceived value than to start a discussion, they've certainly done that! Just make sure we keep the tone positive and inclusive!

Posted by Thomas Mercer-Hursh on 27-Mar-2007 15:42

Master-Detail is my case #2.

And, just as I think there is a strong argument for creating entity objects for case #1, I think this also applies to case #2. Now, that class might have a temp-table of lines internal to it, but it is still nicely encapsulated and has properties for all the global pieces. It is also trivial for it to have a method that delivers order line objects from the values in the temp-table, if one decides against using a temp-table of objects internally for the lines.

One of the points here, though, is that I object to the idea of passing a data set even for the case #3 circumstance. Let it pass a collection object. Then there is no need to pass around data set and temp-table definitions. Internally, the collection object can be a generic one like mine or it could actually be a temp-table or PDS. It is trivial to provide methods that can return rows from that temp-table in the form of entity objects. This approach is cleanly encapsulated and keeps objects relating to each other as objects.
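That collection idea can be sketched as follows (Python for illustration; internally the collection could just as well wrap a temp-table or PDS, and all names here are invented):

```python
# A collection object that hides its internal storage (a plain list
# of dicts, standing in for a temp-table) and hands out entity
# objects on demand, so callers never need the storage definition.

class OrderLine:
    def __init__(self, line_num, item, qty):
        self.line_num = line_num
        self.item = item
        self.qty = qty

class OrderLines:
    def __init__(self):
        self._rows = []               # internal storage, never exposed

    def add(self, line_num, item, qty):
        self._rows.append({"line_num": line_num,
                           "item": item, "qty": qty})

    def __iter__(self):
        # Deliver each internal row as an entity object.
        for r in self._rows:
            yield OrderLine(r["line_num"], r["item"], r["qty"])

    @property
    def total_qty(self):
        # A set-level property, which a bare PDS has no place for.
        return sum(r["qty"] for r in self._rows)

lines = OrderLines()
lines.add(1, "skis", 2)
lines.add(2, "boots", 1)
```

Because consumers only see OrderLine objects and set-level properties like total_qty, the internal representation can be swapped without touching them.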

I have yet to see a positive aspect to the DOH/ROH approach. The idea that it makes marshalling data from the database easier is spurious.

Posted by Thomas Mercer-Hursh on 27-Mar-2007 15:46

Note that sample code corresponding to the webinar is available here

http://www.psdn.com/library/entry.jspa?entryID=2425

Also three new related whitepapers

http://www.psdn.com/library/entry.jspa?externalID=2422&categoryID=1212

http://www.psdn.com/library/entry.jspa?externalID=2423&categoryID=1212

http://www.psdn.com/library/entry.jspa?entryID=2424

I guess we all have some reading to do...

Posted by Mike Ormerod on 27-Mar-2007 15:54

Note that sample code corresponding to the webinar is available here

http://www.psdn.com/library/entry.jspa?entryID=2425

Blimey, you guys are quick, we're only just finishing off the postings going live!!

Posted by Thomas Mercer-Hursh on 27-Mar-2007 15:57

Well, you'll notice it took an edit to get the whitepapers in ...

Never think we aren't paying attention!

Posted by Mike Ormerod on 27-Mar-2007 15:58

Well, you'll notice it took an edit to get the whitepapers in ...

Never think we aren't paying attention!

Likewise

Posted by john on 27-Mar-2007 16:21

One of the problems I see with the separation of data from logic is the way data requests are handled, for instance. Most temp-tables defined in sample applications are a 1:1 copy of the database table. This means that it's no problem for clients to pass in a filter string, which will be appended on the server by the datasource components. But there is something fundamentally wrong with this approach: it ties the client code to the database schema. And you can only express a filter when you know which tables are joined in the end.

It would be nice if the OOOERA sample would use real parameters all the way. A string-based filter could be allowed if there were a parser that validates and converts a temp-table WHERE to a database WHERE. The latter might not be that straightforward in real applications...

Among the many choices, let me respond to at least this message. Yes, samples are typically simplified, but one of the basic principles that we have tried to discuss and emphasize is that the Architecture provides an opportunity that you should take advantage of to restructure data in a way that makes it more appropriate for the logic and UI of the application. One basic way is to denormalize things like coded values to include their meaningful values in the temp-table that is in that case derived from perhaps many database tables. Even the simplest examples in the existing and previous materials show some cases of that. In addition, the materials out on PSDN, among other things, show the mapping process between a filtering request as expressed on the client and what it gets mapped into when it gets back to where the physical data is. If the example shows only a fieldname change, this at least is a placeholder for larger changes that the back-end data access logic would need to be prepared to deal with in a real application.
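The mapping John describes, taking a filter expressed against client-side logical names and remapping it onto physical names in the data access layer, can be sketched like this (Python for illustration; the name map and filter syntax are invented stand-ins):

```python
# Remap a client filter written against logical (temp-table) field
# names onto physical database field names, so the client never
# needs knowledge of the schema. The map and the filter format are
# hypothetical, not from the actual PSDN samples.
import re

FIELD_MAP = {
    "ttCustomer.Name": "Customer.CustomerName",
    "ttCustomer.Balance": "Customer.CreditBalance",
}

def remap_filter(client_filter: str) -> str:
    """Rewrite each logical field reference to its physical name."""
    def swap(m):
        return FIELD_MAP.get(m.group(0), m.group(0))
    return re.sub(r"\w+\.\w+", swap, client_filter)

where = remap_filter("ttCustomer.Name BEGINS 'Lift'")
```

A real implementation would also have to handle denormalized fields that map onto joins rather than simple renames, which is exactly the "larger changes" placeholder John mentions.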

Posted by Thomas Mercer-Hursh on 27-Mar-2007 17:02

Ah, but if you engage in this kind of denormalization and name changing, then you are doing exactly the same kind of thing which one does in an OO-RDBMS mapping. The difference being that, if you pass a PDS or temp-table, any changes you make subsequent to the initial implementation are exposed, while if you wrap the data in an object, then other code only needs to change if there is an actual change in contract. A simple example of this: suppose you start off with a field Customer.Name and build your dataset or object including that field. Suppose you then decide at some later date that you don't like fields like Name because it suggests false joins with other tables that also contain a Name field, so you decide to change this field's name to Customer.CustomerName. In the case of passing a PDS or TT, any code anywhere that makes a reference to ttCustomer.Name, or whatever the PDS/TT version is called, will have to be changed. With the entity object approach, the code outside the data access marshalling code will refer to Customer:Name, a property which remains unchanged regardless of the under-the-skin change in where the value comes from. This seems to me to be one of the very essential qualities and virtues of OO.

Let's go back to my three cases and look at them in this regard. In case 1 there is one primary record and all secondary records are suitable for denormalization, i.e., we really just have one "thing" and a bunch of properties. In this case the use of a TT or PDS seems overkill and the corresponding entity class is simple and nicely encapsulated. In case 2 there is one primary and a mixture of secondary records, some of which are suitable for denormalization into the primary and some of which are sets associated with the primary. There might also be records secondary to the secondaries, suitable for denormalization. This case seems to offer a somewhat stronger argument for DOH/ROH because there is at least some data structure there, but just for the sake of argument, let's suppose that we implement the secondary sets as temp-tables within an entity class. Now the properties of the primary have all the virtues of case 1, we can treat the TT with familiar FOR EACH and query logic within the class, but it is also trivial to create a class for the individual secondary record and convert a record in the internal temp-table to that entity class when it is necessary to deliver a copy externally. Presto, we have all of the advantages of case 1 and no need to pass around definitions. Allowing for a PDS internally, we can also have tertiary records. In case 3 we have multiple primaries, each potentially with secondary sets. This case seems to present the strongest suggestion of the appropriateness of a PDS, but again if we wrap this in a class, creating in essence an entity-specific collection class, we again can encapsulate all the details of the implementation and can again deliver individual entity classes as needed. We can also create set properties like TotalValue, which we have no place to put in a PDS. We can discuss later whether the collection class should be generic and whether a temp-table of individual fields or a temp-table of Progress.Lang.Object is preferable, but it seems to me that there is a real strong argument here for encapsulation within classes instead of passing around PDS and TT.
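The rename example can be sketched like this (Python for illustration; the class, function, and field names are hypothetical): consumers keep using the Name property, and the marshalling code is the only place that knows the stored field was renamed.

```python
# The entity exposes a stable "name" property; only the data access
# marshalling code knows that the stored field was renamed from
# "Name" to "CustomerName". All names here are illustrative.

class Customer:
    def __init__(self, name):
        self._name = name

    @property
    def name(self):          # the contract consumers rely on
        return self._name

def customer_from_row(row: dict) -> Customer:
    # The single function touched by the schema rename
    # (row["Name"] became row["CustomerName"]).
    return Customer(row["CustomerName"])

c = customer_from_row({"CustomerName": "Lift Tours"})
```

Passing a bare TT or PDS instead would expose ttCustomer.Name (or its renamed successor) to every consumer, which is the coupling being argued against above.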

Posted by jtownsen on 28-Mar-2007 04:42

A simple example of this is suppose you start off with a field Customer.Name and build your dataset or object including that field. Suppose you then decide at some later date that you don't like fields like Name because it suggests false joins with other tables that also contain a Name field, so you decide to change this field's name to Customer.CustomerName. In the case of passing a PDS or TT, any code anywhere that makes a reference to ttCustomer.Name, or whatever the PDS/TT version is called will have to be changed. With the entity object approach, then that code outside the data access marshalling code will refer to Customer:Name, a property which remains unchanged regardless of the under-the-skin change in where the value comes from. This seems to me to be one of the very essential qualities and virtues of OO.

The following comment is being made without having had a chance to look at the WebEx yet and in respect to an implementation that is not OO-ABL, but I think the concept is the same, so here goes...

I think the question of logical to physical mapping is something that needs to be resolved in the DAO. Changes to the physical DB structure should be completely hidden from anything above the DAO level, regardless of whether this is implemented in OO-ABL or in .p's. The point here is that there is a complete disconnect between the database table structure and naming and that of the PDS & TT's.

In a project I'm currently working on, we have very successfully been able to provide filtering information to the BE based on tables & columns in the PDS/TT (e.g., ttCustomer.Name) and remap them to the physical tables (e.g., Customer.CustomerName). A couple of ABL features we used to achieve this include ATTACHED-PAIRLIST & DATA-SOURCE-COMPLETE-MAP.

I'd like to be able to post the code used, but unfortunately it belongs to a customer.

Posted by Thomas Mercer-Hursh on 28-Mar-2007 11:21

If dealing only with naming changes, it is true that you can create an unchanging TT/PDS to use in the application, but alter the names in the schema, but not all changes are simple changes in name. Take, for example, a decision to move from a single name field to separate first and last name fields. To support this in the TT, you need to have all three fields, the separate ones and the combined one in order to support the existing logic which is based on the single field. With an entity object, however, the Name property can compose the full name dynamically with the Get and, within the limits of computational accuracy, could decompose an updated full name into its parts. I don't believe that is something one could do in a PDS. Instead, that logic would have to be everywhere there was a potential full name update. Name is perhaps not the best example here because of the difficulty of the algorithm for computationally decomposing it, but there are many cases where a single property can be decomposed into components or have other impacts.
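That composing/decomposing property can be sketched like this (Python for illustration; the naive split on the last space is exactly the kind of computational limitation the post hedges about):

```python
# A full-name property computed from first/last parts on read and
# naively decomposed on write. The split rule (last word is the
# surname) is deliberately simplistic, illustrating the "limits of
# computational accuracy" caveat above.

class Customer:
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

    @property
    def name(self):
        return f"{self.first_name} {self.last_name}"

    @name.setter
    def name(self, full):
        # Everything before the last space is the first name(s).
        first, _, last = full.rpartition(" ")
        self.first_name = first
        self.last_name = last

c = Customer("Ann", "Smith")
c.name = "Mary Jones"     # updates both parts through one property
```

In the TT/PDS version, any code that can update the combined field would have to repeat this decomposition; here it lives once, inside the property setter.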

Posted by Thomas Mercer-Hursh on 28-Mar-2007 13:48

OK, provisionally I will go along with that. I.e., this is an object which merely does its work in the constructor, similar to doing its work in main(), and when the object has completed the constructor, it can be deleted if started from another class, or it simply exits because that is the end of the session, the same as if a .p had reached the end. If that works, then one could leave out the DELETE OBJECT.

Posted by Thomas Mercer-Hursh on 28-Mar-2007 13:56

Running into this again in the whitepaper, I guess there is a call for action here in needing to implement abstract methods in 10.1C, then.

Posted by Thomas Mercer-Hursh on 28-Mar-2007 14:16

I would like to suggest that when writing a whitepaper such as Introduction to a Class-Based Implementation of the OpenEdge Reference Architecture it would be more appropriate to leave out the parts which are really just a tutorial on OO constructs in ABL. Yes, we need such a tutorial, but it doesn't need to be in every document and it significantly dilutes the substance of what is being discussed in this case. Better would be to have one or more tutorial documents and then to point to them from this document for those who are weak on the concepts.

While a minor point, I also don't get why it is that documents like this get published as Word documents instead of PDFs.

Posted by Mike Ormerod on 28-Mar-2007 14:18

If dealing only with naming changes, it is true that you can create an unchanging TT/PDS to use in the application but alter the names in the schema; not all changes, however, are simple changes in name. Take, for example, a decision to move from a single name field to separate first and last name fields. To support this in the TT, you need all three fields, the two separate ones and the combined one, in order to support the existing logic which is based on the single field. With an entity object, however, the Name property can compose the full name dynamically in its Get and, within the limits of computational accuracy, could decompose an updated full name into its parts. I don't believe that is something one could do in a PDS.

This could be achieved relatively easily with a PDS. You could use an after-fill event to combine the individual name fields into one 'Name' field and write your own save routine to perform the decomposition, and if you've used a Data Access layer, then all this is in one place and doesn't affect the rest of the app one bit.

Posted by Thomas Mercer-Hursh on 28-Mar-2007 14:32

Err, wait a moment ... sure, I know how one could do this in the process of filling or saving the data set, but what about while it is out there in the business logic layer? E.g., I have three BL components which need to use the Customer info, A, B, and C, and let's say they get used in that order. All of them start off dealing with the name as a whole, but then we make this change to the data structure in order to implement some new functionality in C. Therefore, we change the DA component and we change C ... but A uses the field as a whole and can update that field. Then along comes C and it has the old data unless the process of updating ... not saving, just updating ... the field also does the breakdown.

A better example might be that we have an address stored as its component parts, but the application has to start dealing with international differences in how addresses are printed, e.g., most specific to most general in the US and most general to most specific in Japan. So, we add a new property for the composed address and we send it off to these A, B, and C components, which are going to do something; but A includes an update of the address components, e.g., an alternate shipping address, and C is supposed to use the internationally correct composed address to send off some kind of notice or address the package. The logic which keeps the fields in sync cannotot be in the data access layer.
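The sync problem can be sketched like this (a Python stand-in with invented names, not the whitepaper's code): when the composed address is a computed property of the entity rather than a field populated at fill time, an update by component A is immediately visible to component C, with no data-access-layer round trip in between.

```python
class AddressEntity:
    """Sketch: the composed address is computed from its parts on
    every read, so it can never be stale inside the BL layer."""

    # Per-country composition order: most specific first (US),
    # most general first (JP). Illustrative only.
    FORMATS = {
        "US": lambda a: f"{a.street}, {a.city}, {a.country}",
        "JP": lambda a: f"{a.country}, {a.city}, {a.street}",
    }

    def __init__(self, street, city, country):
        self.street, self.city, self.country = street, city, country

    @property
    def composed(self):
        # Recomputed on every Get; A's update is seen by C.
        return self.FORMATS[self.country](self)
```

If `composed` were instead an ordinary field set once by the data access layer, component A's change to `street` would leave it stale for component C, which is exactly the objection above.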

Posted by Thomas Mercer-Hursh on 28-Mar-2007 14:49

In ICBIOERA, the BaseComponent class implements an interface with two methods defined and then proceeds to define those methods by simply having the method header and end method with a comment in between, i.e., a dummy which has to be overridden in a subclass in order to have any meaning. Why? Putting the methods in an interface signals to me that I want other classes to implement this same interface ... otherwise there is no point in putting it in its own file. But this class is BaseComponent, from which all component classes will be derived, so putting the methods in as dummies, i.e., pseudo-abstract methods, would have been sufficient. Likewise, both the interface and the pseudo-abstract methods would be unnecessary in BaseComponent if the interface were implemented lower down in the hierarchy.
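For contrast, here is how the two designs differ in a language that already has abstract methods (Python here, purely for illustration; ABL of this era has neither): the pseudo-abstract "dummy" base enforces nothing, while a truly abstract base refuses to instantiate an incomplete subclass.

```python
from abc import ABC, abstractmethod

class DummyBase:
    """Pseudo-abstract: the method body is an empty placeholder, so a
    subclass that forgets to override it fails silently at run time."""
    def initializeComponent(self):
        pass  # comment in place of a body, as in ICBIOERA's BaseComponent

class AbstractBase(ABC):
    """Truly abstract: instantiating a subclass that does not
    override initializeComponent raises TypeError immediately."""
    @abstractmethod
    def initializeComponent(self):
        ...

class GoodComponent(AbstractBase):
    def initializeComponent(self):
        return "initialized"

class BadComponent(AbstractBase):
    pass  # forgot to override -> cannot be instantiated
```

This is the substance of the "abstract methods in 10.1C" request above: the dummy-method pattern compiles either way, but only the abstract version turns a forgotten override into an immediate error.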

Posted by Admin on 29-Mar-2007 01:18

In ICBIOERA, the BaseComponent class implements an interface with two methods defined and then proceeds to define those methods by simply having the method header and end method with a comment in between, i.e., a dummy

The KISS principle applies here: don't introduce methods for future usage when there is no future usage yet. The interface iComponent has an initializeComponent and a destroyComponent. For the latter I would rather see a construct like .Net's:

- interface IDisposable with Dispose() for releasing resources

Any object can implement this interface when it can free resources.
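The IDisposable idea has a direct analogue in Python's context-manager protocol; a minimal sketch (names invented) of an object that releases its resources explicitly rather than relying on a destructor:

```python
class ReportWriter:
    """Sketch of the disposable pattern: the object frees its resource
    via an explicit dispose(), callable deterministically by clients."""

    def __init__(self):
        self.lines = []
        self.closed = False

    def write(self, line):
        if self.closed:
            raise ValueError("writer already disposed")
        self.lines.append(line)

    def dispose(self):
        # Release resources; safe to call more than once.
        self.closed = True

    # Context-manager protocol: Python's built-in analogue of
    # .Net's IDisposable / C#'s using block.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.dispose()
        return False
```

The point of the pattern is that disposal is deterministic and opt-in per class, rather than a method every component must carry whether or not it holds resources.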

The "initializeComponent" seems a weird duck as well: when you create a class instance, you expect it to be initialized. Here I can see another analogy with the .Net ISupportInitialize interface, which has BeginInit() and EndInit() methods. You implement this interface when your class would be more efficient doing several operations against it between BeginInit and EndInit.

Finally, why implement a "destroyComponent" when you have a destructor?

Posted by Admin on 29-Mar-2007 01:39

This could be achieved relatively easily with a PDS. You could use an after-fill event to combine the individual name fields into one 'Name' field and write your own save routine to perform the decomposition, and if you've used a Data Access layer, then all this is in one place and doesn't affect the rest of the app one bit

The customer name is not a very strong example: I expect that you will have to change your logic all over the place when you add a new feature anyway! The user interface needs some attention when you internationalize addresses, since somewhere you have to select the country, for instance. Maybe that's a new input field, required by the business logic.

However, my main objection is still there: a ProDataSet is a relational view. So you need to join rows, and thus you need join knowledge all over the place. You will have to expose join columns. Unless you have an artificial primary key in all your tables, the join isn't that straightforward. Defining an efficient artificial key is an exercise on its own (GUIDs are bad for index distribution in some databases, for instance).

Let's imagine the order case: a table with at least 30 related children. Sure, you can set up the PDS with all relations defined. But do you need all these relations all the time? Automatic synchronization is nice for databinding, but it's probably a lot of overhead.

Anyway, when you go the PDS route all the way, you should cover some issues. A .Net client talking to an AppServer will use the plain DataSet. This DataSet has some annoying issues:

- when you define a primary key, it creates a unique constraint. Creating multiple orders in the TT requires you to assign dummy, but unique, order IDs. In .Net there is no unknown-value trick for the primary key!

- .Net has some problems finding rows when you do a GetChanges() and a Merge() later on. When you add a row, assign the primary key on the server, and merge the resulting row back on the client, you will end up with two rows. The .Net DataSet can't map the server row to the client row, since their key values don't match.

So at an abstract level the PDS might seem attractive, but at the implementation level you will run into issues. This can be covered of course, but you have to be aware of the issues.
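The two-row symptom above can be simulated in a few lines. This is a Python stand-in for a key-based merge in the style of DataSet.Merge (function and field names invented), not the .Net implementation itself: the client row carries a temporary key, the server row carries the assigned key, so the merge sees two distinct rows.

```python
def merge_by_key(client_rows, server_rows):
    """Naive key-based merge: a server row replaces a client row
    only when their primary key values match."""
    merged = {row["id"]: row for row in client_rows}
    for row in server_rows:
        merged[row["id"]] = row  # no matching key -> added as a new row
    return list(merged.values())

# Client creates an order with a dummy key; the server assigns the
# real one. Keys -1 and 1001 never match, so the "same" order
# survives the merge twice.
client = [{"id": -1, "item": "widget"}]
server = [{"id": 1001, "item": "widget"}]
result = merge_by_key(client, server)
```

The usual workaround is to carry the temporary client key back from the server alongside the assigned key, so the merge can still find the original row; the point here is only that the plain key-match approach cannot.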

Posted by Thomas Mercer-Hursh on 29-Mar-2007 11:18

Agreed that there is some confusion here over the role of special methods versus constructors and destructors. I'm not convinced that it is adding value.

This thread is closed