TLSA versus DDD

Posted by Phillip Magnay on 27-Nov-2006 07:55

The conventional wisdom for approaching contemporary service-oriented business applications is a layered architecture. The reference application demonstrates the use of OpenEdge and PSC products in an implementation of a layered architecture.

With the introduction of object-orientation features, the OpenEdge ABL will provide additional choices concerning layered architecture such as the OERA. Fortunately, OO approaches to implementing a layered service-oriented architecture already exist in other languages such as Java and C#. But it appears that these existing OO approaches have divided into two distinct camps: the Three Layered Services Architecture (TLSA) and Domain Driven Design (DDD). An article I found discusses these two distinct approaches. Fowler also touches on these options in his EAA patterns concerning domain logic design.

There are pros and cons to each of these approaches depending on the situation. Also, there is limited scope for overlap and reuse between them; for the most part they are mutually exclusive. Going one direction doesn't give you any leg up on the other, and indeed incorporating both approaches together would add a level of complexity to the domain logic that is better avoided... which is, I suppose, why there are two distinct camps on this issue.

So I wonder what the pros and cons of each of these approaches would be when it comes to using OpenEdge. I would be interested in hearing everyone's views.

All Replies

Posted by Thomas Mercer-Hursh on 27-Nov-2006 12:06

In some respects, I think this is a false contrast ... perhaps influenced by those extra two letters in front of TLSA and hopefully not characteristic of OERA applications. To me, the central topic here is the central topic in OOAD ... where does one draw the line between one class and another class. Creating lots of little classes provides for lighter weight cost of instantiation and very tightly focused units, as well as a very neat segregation into model layers. Creating larger classes means more expense to instantiate, but makes for a closer approximation of the ideal of having all related data and behavior encapsulated in a single class.

I also don't think that one can really discuss this without simultaneously thinking in terms of services. With services, objects which otherwise might be needed only briefly can be persistent and thus the question of the cost of instantiating them becomes less significant.

Posted by Phillip Magnay on 27-Nov-2006 14:20

In some respects, I think this is a false contrast ... perhaps influenced by those extra two letters in front of TLSA and hopefully not characteristic of OERA applications.

I'm not sure why it should be regarded as a false contrast; these are two very different approaches to organizing and implementing domain logic in a layered architecture to achieve the same functional outcome. And each has very different pros and cons during both construction and operation. Fowler makes similar choices/distinctions concerning design approaches to domain logic when covering the "table module" and "domain model" EAA patterns.

To me, the central topic here is the central topic in OOAD ... where does one draw the line between one class and another class. Creating lots of little classes provides for lighter weight cost of instantiation and very tightly focused units, as well as a very neat segregation into model layers. Creating larger classes means more expense to instantiate, but makes for a closer approximation of the ideal of having all related data and behavior encapsulated in a single class.

Sure. But the little lightweight classes as per the domain model also add their own overhead by typically requiring more object instances and managing the navigation of these lists/sets of instances. The "larger classes" approach requires fewer instances but can be less flexible/adaptable because it is a coarser-grained object containing embedded data relationships. I agree that it ultimately comes down to where the line is drawn between one class and another. But one approach simulates the data relationships as associations between classes, the other encapsulates those relationships within a single class. There might be some degrees between these approaches but that may imply additional complexity.

I also don't think that one can really discuss this without simultaneously thinking in terms of services. With services, objects which otherwise might be needed only briefly can be persistent and thus the question of the cost of instantiating them becomes less significant.

However, a domain model approach means that it is less likely that an object that is left persistent on the server is actually the object that is needed on subsequent requests.

Phil

Posted by Thomas Mercer-Hursh on 27-Nov-2006 14:55

I don't see this contrast as relating to table model versus domain model. The latter contrast is more one of a degenerate, simple version versus the full thing. I don't see the intent of TLSA versus DDD in the same way ... I'm just suggesting that the core of each is good OOAD and that one can have good domain models in a layered architecture. ... and should.

There might be some degrees between these approaches but that may imply additional complexity.

I think there is an infinite range of degrees, which was my attempted point. I remember years ago having an interaction with a certain OO designer who thought that the concept of an inventory item should include its ability to order itself. That sort of thing I regard as a case of extreme lumping, and almost certainly inappropriate lumping because good purchasing involves considering more than one item at a time (we should order 10 of X because the stock is a little low and we need to order Y and Z from the same company and with the X in the order we get a bigger discount).

Start at any given point and one could probably analyze an entire application into a single class, but it wouldn't be a good design. Likewise, every property could be its own class. Between those extremes, there are zillions of different variations. All I am suggesting is that I don't find myself wanting to make different object decompositions because I am thinking in terms of layers.

However, a domain model approach means that it is less likely that an object that is left persistent on the server is actually the object that is needed on subsequent requests.

Why? I would think, for example, that if one had an inventory service and each item was brought into some kind of most recently used cache as it was accessed, that it would mean that most of the commonly used items would end up in that cache, ready for use.

Posted by Phillip Magnay on 27-Nov-2006 15:59

I don't see the intent of TLSA versus DDD in the same way ... I'm just suggesting that the core of each is good OOAD and that one can have good domain models in a layered architecture. ... and should.

==snip==

All I am suggesting is that I don't find myself wanting to make different object decompositions because I am thinking in terms of layers.

OK. Now I get you. I agree that the various ways one could make the object decomposition in the domain model can be/should be supported by architectural layering. But there is a distinction that goes beyond levels of object decomposition: a "table module" pattern makes use of one object instance per table/view, whereas a "domain model" approach makes use of one object instance per row/rows/data entity.
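Loosely, in OO pseudocode (Python here; the class and method names are invented for illustration, not taken from Fowler or any OpenEdge code), the two granularities look something like this:

```python
# Illustrative sketch only: hypothetical names, not from any actual code.

class InventoryModule:
    """Table Module style: ONE instance serves the whole inventory
    table; its methods are keyed by row identity."""
    def __init__(self, rows):
        self._rows = {r["id"]: dict(r) for r in rows}   # table-like data

    def on_hand(self, item_id):
        return self._rows[item_id]["qty"]

    def receive(self, item_id, qty):
        self._rows[item_id]["qty"] += qty


class InventoryItem:
    """Domain Model style: one instance PER row, with state and
    behavior encapsulated together."""
    def __init__(self, item_id, qty):
        self.item_id, self.qty = item_id, qty

    def receive(self, qty):
        self.qty += qty


rows = [{"id": "X1", "qty": 5}, {"id": "X2", "qty": 8}]

module = InventoryModule(rows)          # a single coarse-grained object
module.receive("X1", 3)

items = {r["id"]: InventoryItem(r["id"], r["qty"]) for r in rows}
items["X1"].receive(3)                  # many fine-grained objects
```

The table module needs only one instance no matter how many rows exist; the domain model needs an instance per row plus something to navigate the set of them, which is exactly the overhead trade-off discussed above.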

Why? I would think, for example, that if one had an inventory service and each item was brought into some kind of most recently used cache as it was accessed, that it would mean that most of the commonly used items would end up in that cache, ready for use.

Sure. If the inventory service accessing inventory data was implemented using a "table module" pattern, then there is no doubt that these object instances would be needed for almost every request, and therefore persisting them between requests makes absolute sense.

However, if the inventory service accessing inventory data was implemented using a "domain model" pattern, then there is less certainty that the objects instantiated for one request will be used in a subsequent request. They might be. They might not be. That's all. A cache of commonly used items is a fine idea as long as some items are indeed commonly used... which in some use cases they might be. In others, not at all.

Phil

Posted by Thomas Mercer-Hursh on 27-Nov-2006 16:55

A "table module" pattern makes use of one object instance per table/view whereas a "domain model" approach makes use on one object instance per row/rows/data entity.

Understood and I think that the table model approach is simply not appropriate for modern OO applications. One of the things which I think is "dated" or weak about AutoEdge is that there is a very simple "punch through" on a table by table basis. This enables very deep shared code, but at the expense of having to put de-normalization code too close to the business layer. It is interesting, but not good OO.

I don't see why a table-model implementation would imply any different caching scheme than a domain model. Caching is a design decision that one would make in order to have objects marshalled and ready for use when they were likely to be reused again fairly soon. Different levels and types of caching are appropriate for different types of objects. An object that contains a list of valid state codes, for example, is not expected to change and so should absolutely be cached, probably in every service that might use it. This would be particularly attractive if one could register for a list-changed event that would trigger refreshing the object. An order object might not get cached at all or be cached only during the time in which it is being actively processed since beyond that time it will be rare that it is accessed.
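A most-recently-used cache of the kind described could be sketched roughly as follows (an illustrative Python sketch; nothing here is an actual OpenEdge facility, and the names are invented):

```python
from collections import OrderedDict

class MRUCache:
    """Minimal most-recently-used cache: keeps the last `capacity`
    objects marshalled and ready, evicting the least recently used."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._entries = OrderedDict()

    def get(self, key, load):
        """Return the cached object for `key`, loading on a miss."""
        if key in self._entries:
            self._entries.move_to_end(key)         # mark as recently used
        else:
            if len(self._entries) >= self.capacity:
                self._entries.popitem(last=False)  # evict least recent
            self._entries[key] = load(key)
        return self._entries[key]


loads = []
def load_item(item_id):
    loads.append(item_id)          # count trips to the data layer
    return {"id": item_id}

cache = MRUCache(capacity=2)
cache.get("A", load_item)
cache.get("A", load_item)          # hit: no second trip to the data layer
cache.get("B", load_item)
cache.get("C", load_item)          # capacity reached: "A" is evicted
```

Whether such a cache pays off is exactly the design question raised above: it depends on how likely each kind of object is to be requested again soon.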

Posted by Phillip Magnay on 28-Nov-2006 08:29

Understood and I think that the table model approach is simply not appropriate for modern OO applications. One of the things which I think is "dated" or weak about AutoEdge is that there is a very simple "punch through" on a table by table basis. This enables very deep shared code, but at the expense of having to put de-normalization code too close to the business layer. It is interesting, but not good OO.

It may not be good OO. But good OO was not amongst the goals of the current version of AutoEdge. That will come in due course.

At a layered architecture level, there appears to be less and less argument. However, there are different views at a design level. If good OO is an important goal, then a domain model approach is the option. If good OO is further down the list of priorities, the table module pattern is a perfectly valid alternative. And although it is certainly closer to a table module approach than a domain model, the AutoEdge approach is also a solid option.

I don't see why a table-model implementation would imply any different caching scheme than a domain model. Caching is a design decision that one would make in order to have objects marshalled and ready for use when they were likely to be reused again fairly soon. Different levels and types of caching are appropriate for different types of objects. An object that contains a list of valid state codes, for example, is not expected to change and so should absolutely be cached, probably in every service that might use it. This would be particularly attractive if one could register for a list-changed event that would trigger refreshing the object. An order object might not get cached at all or be cached only during the time in which it is being actively processed since beyond that time it will be rare that it is accessed.

Don't get me wrong - I'm a proponent of caching. But I just want to separate, for a moment, the issue of the desirability and/or appropriateness of caching from the difference in the caching mechanism between domain model and table module approaches.

I do see a distinction between caching for domain model patterns versus table module approach. In a domain model, you are caching real objects which contain both state and behavior, and this is great if there is a reasonable expectation that the objects in the cache are actually going to be used repeatedly in a multi-user, stateless server request environment. Reference codes like US states are a good candidate for caching. Master tables like inventory or customer might not be as good (but depending on the application/use case they might be just as good).

However, in a table module approach, it is more about caching behavior (ie, persisting components for reuse without reloading) than caching state. You certainly could also cache data inside these components but then again you may decide not to. With a domain model, you do not have that choice; a domain model cache will always include state and behavior. Again, the decision to cache state should be based on the reasonable expectation that this data is going to be needed by subsequent requests. Similarly, the decision to cache components should be based on the reasonable expectation that the behavior of such components is going to be needed by subsequent requests.
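The state-versus-behavior distinction might be sketched like this (hypothetical Python with invented names): a cached table-module component can refetch its state on every call, while a cached domain object necessarily carries its state with it.

```python
class CustomerModule:
    """Table Module component: caching THIS object caches behavior only;
    state can be (re)fetched from the data layer on each request."""
    def __init__(self, fetch_row):
        self._fetch_row = fetch_row      # data-access callback

    def credit_ok(self, cust_id, amount):
        row = self._fetch_row(cust_id)   # fresh state every call
        return row["credit_limit"] >= amount


class Customer:
    """Domain object: caching THIS object necessarily caches its state
    (credit_limit) along with its behavior."""
    def __init__(self, cust_id, credit_limit):
        self.cust_id = cust_id
        self.credit_limit = credit_limit

    def credit_ok(self, amount):
        return self.credit_limit >= amount


db = {"C1": {"credit_limit": 100}}
module = CustomerModule(lambda cid: db[cid])    # cached across requests
obj = Customer("C1", db["C1"]["credit_limit"])  # cached across requests

db["C1"]["credit_limit"] = 50   # data changes between requests
module.credit_ok("C1", 80)      # sees the fresh limit of 50
obj.credit_ok(80)               # still holds the stale cached limit of 100
```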

And although the ultimate implementation of a caching mechanism for a domain model may not be too different from one for a table module approach (then again, they may be quite different), there is a material conceptual distinction here.

Phil

Posted by Mike Ormerod on 28-Nov-2006 10:34

I didn't want to intrude on your ongoing discussion, however...

Understood and I think that the table model approach is simply not appropriate for modern OO applications. One of the things which I think is "dated" or weak about AutoEdge is that there is a very simple "punch through" on a table by table basis. This enables very deep shared code, but at the expense of having to put de-normalization code too close to the business layer. It is interesting, but not good OO.

It may not be good OO. But good OO was not amongst the goals of the current version of AutoEdge. That will come in due course.

As Phil rightly says, OO was not a consideration for the current version of AutoEdge, but we do have some OO plans going forwards.

But on the more general topic of your discussion, I think you also have to consider the target implementation in some of your design. Sure, you can be as abstract as you like, but at the end of the day, the whole point is to turn the design into code. At that point I think you have to look at the strengths of the chosen language of implementation. Although a pure Domain Model can be coded in ABL, does that make the best use of ABL as a language, which has the distinction from pure OO languages of being Data-Aware? What would be the added value of having orderlines as a collection of objects, as opposed to a dataset, for which the language has built in features & syntax? Or am I totally missing the point? (Always possible as it's late in the day here!!)

Posted by Thomas Mercer-Hursh on 28-Nov-2006 11:06

It may not be good OO. But good OO was not amongst the goals of the current version of AutoEdge. That will come in due course.

Well, whether the code utilizes classes or not, I raised a question here http://www.psdn.com/library/thread.jspa?threadID=2608&tstart=0 about whether the structure is ideal. I don't think that the preferred encapsulation changes just because one is not using OO verbs. The AutoEdge structure does not provide isolation between the database structure and the structure in the application and it does the de-normalization at what I think is a late point.

If good OO is an important goal, then a domain model approach is the option. If good OO is further down the list of priorities, the table module pattern is a perfectly valid alternative.

"Valid" is an interesting word. What makes something valid or not valid ... other than that it is possible to make it work? If so, lots of architectural alternatives that have been used historically are "valid" because they succeeded as the basic of a working application. I don't think that equates with "equally desirable" or "modern best practice". AutoEdge is a very cool piece of work, but I think that there are pieces of it that are tied to historical practice and which could use a bit of buffing.

I'm a proponent of caching. But I just want to separate, for a moment, the issue of the desirability and/or appropriateness of caching from the difference in the caching mechanism between domain model and table module approaches.

One is a temp-table of rows resembling the rows in the database and the other is a temp-table of Progress.Lang.Object. What else?

If it is more expensive to marshal and de-marshal an object because the object includes behavior, that actually increases the performance advantage of caching it, as long as there is a reasonable expectation of re-use. To be sure, that expectation varies from near certainty to near improbability, but that is just a part of the design process ... the same design decision with either model.

Posted by Thomas Mercer-Hursh on 28-Nov-2006 11:42

I didn't want to intrude your on-going discussion, however...

Much better to have more than just Phil and I!

What would be the added value of having orderlines as a collection of objects, as opposed to a dataset, for which the language has built in features & syntax?

Certainly one of the interesting design questions in OO comes when dealing with this kind of parent-child relationship. Moreover, I am beginning to suspect that the OO context of OOABL is different than the context of OO3GL languages because ABL has structures like temp-tables and PDS which are not present in those other languages. In some ways, a temp-table is a sort of ultra-lightweight object with very limited behavior and a PDS is a sort of semi-lightweight object with a limited range of formalized behaviors.

So, I think that the question is, how much behavior does the child have and in what contexts. Order lines, for example, seem to have a lot of behavior, but I think that one could argue that the bulk of that behavior is actually associated with the order, not with the order line independent of the order. In a non-OO program one might well do a For Each OrderLine, but chances are there will be a join on Order in there somewhere because almost anything that one does to an order line also impacts the order.
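That argument, that order-line behavior mostly belongs with the order rather than with the line on its own, could be sketched as follows (illustrative Python, invented names):

```python
class OrderLine:
    """Child with little independent behavior of its own."""
    def __init__(self, item_id, qty, price):
        self.item_id, self.qty, self.price = item_id, qty, price


class Order:
    """Parent owning its lines: anything that touches a line also
    impacts order-level state, so the behavior sits here."""
    def __init__(self, order_id):
        self.order_id = order_id
        self._lines = []
        self.total = 0.0

    def add_line(self, item_id, qty, price):
        self._lines.append(OrderLine(item_id, qty, price))
        self.total += qty * price      # a line change impacts the order

    def line_count(self):
        return len(self._lines)


order = Order("O-1")
order.add_line("X1", 2, 10.0)
order.add_line("X2", 1, 5.0)
```

This is the OO analogue of the join mentioned above: there is no path to a line except through its order, just as a For Each OrderLine would carry a join on Order.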

Posted by Phillip Magnay on 28-Nov-2006 11:50

Well, whether the code utilizes classes or not, I raised a question here http://www.psdn.com/library/thread.jspa?threadID=2608&tstart=0 about whether the structure is ideal. I don't think that the preferred encapsulation changes just because one is not using OO verbs. The AutoEdge structure does not provide isolation between the database structure and the structure in the application and it does the de-normalization at what I think is a late point.

I'm sure the AutoEdge guys will look seriously at your feedback.

If good OO is an important goal, then a domain model approach is the option. If good OO is further down the list of priorities, the table module pattern is a perfectly valid alternative.

"Valid" is an interesting word. What makes something valid or not valid ... other than that it is possible to make it work? If so, lots of architectural alternatives that have been used historically are "valid" because they succeeded as the basis of a working application. I don't think that equates with "equally desirable" or "modern best practice". AutoEdge is a very cool piece of work, but I think that there are pieces of it that are tied to historical practice and which could use a bit of buffing.

I was referring to the "table module" pattern - not AutoEdge. The table module pattern (Fowler - Patterns in EAA) is a perfectly valid and modern design choice.

One is a temp-table of rows resembling the rows in the database and the other is a temp-table of Progress.Lang.Object. What else?

If it is more expensive to marshal and de-marshal an object because the object includes behavior, that actually increases the performance advantage of caching it, as long as there is a reasonable expectation of re-use. To be sure, that expectation varies from near certainty to near improbability, but that is just a part of the design process ... the same design decision with either model.

As I previously indicated, there may not be too great a difference between the implementation of a caching mechanism for objects from a domain model versus that for business components which follow a table module approach. After all, it's just a list of references to persistent processes. But as per the original point, there is a clear distinction between caching objects (data & behavior, lots of instances of small fine-grained classes) in a domain model and caching business components (behavior only, fewer instances of large coarse-grained classes) in a table module approach. And this distinction between these two design approaches means there are two distinct sets of probabilities that an object will be used again on subsequent requests after instantiation for each respective approach, and therefore there are two distinct considerations in the design process.

Phil

Posted by Tim Kuehn on 28-Nov-2006 11:51

Certainly one of the interesting design questions in OO comes when dealing with this kind of parent-child relationship. Moreover, I am beginning to suspect that the OO context of OOABL is different than the context of OO3GL languages because ABL has structures like temp-tables and PDS which are not present in those other languages. In some ways, a temp-table is a sort of ultra-lightweight object with very limited behavior and a PDS is a sort of semi-lightweight object with a limited range of formalized behaviors.

Personally, I think the notion of "collections", and other constructs from other languages can be quite nicely supplanted / implemented using ABL's temp-table support.

As such, are "base" classes of collections and such really needed in OOABL?

Given the presence of TTs and other language constructs, are such t

Posted by Phillip Magnay on 28-Nov-2006 11:57

But on the more general topic of your discussion, I think you also have to consider the target implementation in some of your design. Sure, you can be as abstract as you like, but at the end of the day, the whole point is to turn the design into code.

Sure. The intention was to tease out the pros and cons of two very different design options, including the implementation issues.

At that point I think you have to then look at the strengths of the chosen language of implementation. Although a pure Domain Model can be coded in ABL, does that make the best use of ABL as a language, which has the distinction from pure OO languages of being Data-Aware? What would be the added value of having orderlines as a collection of objects, as opposed to a dataset, for which the language has built in features & syntax? Or am I totally missing the point? (Always possible as it's late in the day here!!)

Not that I'm trying to land on one side of the discussion or the other, but the domain model approach does have its own pros, including being more closely aligned with OO design best practices as well as being more flexible/adaptable over time.

Phil

Posted by Thomas Mercer-Hursh on 28-Nov-2006 12:19

I'm sure the AutoEdge guys will look seriously at your feedback.

I hope so. I think there is a lot in AutoEdge to stimulate discussion. The end of the discussion might be a conclusion that one would do it differently if one had to do it over again, but I don't think that needs to reflect poorly on AutoEdge at all. In fact, the point is that AutoEdge is good enough and interesting enough to be worth discussing ... something not true of sports2000, for example.

I was referring to the "table module" pattern - not AutoEdge. The table module pattern (Fowler - Patterns in EAA) is a perfectly valid and modern design choice.

Fowler covers a number of patterns which are in some sense competitive and I think he is pretty clear about making the distinction of when to use the more degenerate forms. Table model is appropriate when the connection is over the wire since one currently can't transmit an actual object ... and probably wouldn't want to anyway (Although the "table" would probably be in the form of XML). I don't think that this means in any way that it is a simple toss up which one to use. E.g., among other careful qualifications and value statements he says "for handling complicated domain logic, a Domain Model is a better choice." That seems pretty unequivocal to me.

But as per the original point, there is a clear distinction between caching objects (data & behavior, lots of instances of small fine-grained classes) in a domain model and caching business components (behavior only, fewer instances of large coarse-grained classes) in a table module approach.

Why are you equating domain model with small and fine-grained and table model with large and coarse grained? If anything, an Order object is going to be very large and complex compared to an Order table, an OrderLine table, etc.

And this distinction between these two design approaches means there are two distinct sets of probabilities that an object will be used again on subsequent requests after instantiation for each respective approach, and therefore there are two distinct considerations in the design process.

Maybe I am reading this wrong, but it seems to me that the probability of re-use is a property of the context, regardless of the model used for implementation.

Posted by Thomas Mercer-Hursh on 28-Nov-2006 12:23

Personally, I think the notion of "collections", and other constructs from other languages can be quite nicely supplanted / implemented using ABL's temp-table support.

My vote is for "implemented", as expressed here http://www.oehive.org/CollectionClasses

As such, are "base" classes of collections and such really needed in OOABL?

Provisionally, I think they provide value, if only because they keep you from having to define a separate collection class for every domain class.
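A minimal sketch of that idea (Python, purely illustrative; the actual OOABL collection classes linked from oehive.org are structured differently):

```python
class Collection:
    """Generic collection base class: one class serves any domain type,
    so there is no need for a hand-written OrderLineCollection,
    CustomerCollection, and so on."""
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)
        return self                 # allow chained adds

    def size(self):
        return len(self._items)

    def __iter__(self):
        return iter(self._items)


class OrderLine:
    """An arbitrary domain class; the Collection knows nothing about it."""
    def __init__(self, item_id, qty):
        self.item_id, self.qty = item_id, qty


lines = Collection()
lines.add(OrderLine("X1", 2)).add(OrderLine("X2", 5))
total_qty = sum(line.qty for line in lines)
```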

Given the presence of TTs and other language constructs, are such t

Press "Post" prematurely?

Posted by Tim Kuehn on 28-Nov-2006 12:29

Given the presence of TTs and other language constructs, are such t

Press "Post" prematurely?

I didn't see that on the message when I posted it. Ah well.

Posted by Phillip Magnay on 28-Nov-2006 12:47

Fowler covers a number of patterns which are in some sense competitive and I think he is pretty clear about making the distinction of when to use the more degenerate forms. Table model is appropriate when the connection is over the wire since one currently can't transmit an actual object ... and probably wouldn't want to anyway (Although the "table" would probably be in the form of XML).

I'm not sure if we're talking about the same thing. I'm having trouble figuring out what you mean by table model when I've been consistently referring to the table module design pattern in the Domain Logic Patterns section of Fowler's EAA.

I don't think that this means in any way that it is a simple toss up which one to use. E.g., among other careful qualifications and value statements he says "for handling complicated domain logic, a Domain Model is a better choice." That seems pretty unequivocal to me.

I'm sure I didn't suggest anything about it being a toss-up. And I don't read Fowler as being unequivocal on this choice. He carefully outlines the pros and cons of several domain logic approaches in the context of various situational factors. He also notes that the main reason to choose Table Module is an environment with good Record Set support: "These two patterns fit together as if it were a match made in heaven." The OpenEdge ABL includes a good Record Set framework, ie ProDataSets, which lends weight to a Table Module approach to domain logic.

Why are you equating domain model with small and fine-grained and table model with large and coarse grained? If anything, an Order object is going to be very large and complex compared to an Order table, an OrderLine table, etc.

Again, I don't think we're on the same page. Take another look at the Table Module pattern in Fowler's Patterns in EAA.

And this distinction between these two design approaches means there are two distinct sets of probabilities that an object will be used again on subsequent requests after instantiation for each respective approach, and therefore there are two distinct considerations in the design process.

Maybe I am reading this wrong, but it seems to me that the probability of re-use is a property of the context, regardless of the model used for implementation.

Context always plays its part. But the Domain Model pattern and the Table Module pattern are distinctly different design/implementation approaches which would have distinctly different probabilities wrt reuse on subsequent requests after instantiation.

Phil

Posted by Thomas Mercer-Hursh on 28-Nov-2006 13:58

I'm not sure if we're talking about the same thing. I'm having trouble figuring out what you mean by table model when I've been consistently referring to the table module design pattern in the Domain Logic Patterns section of Fowler's EAA.

We are talking about the same thing ... I just have my fingers fumbled.

These two patterns fit together as if it were a match made in heaven."

Which is exactly why I said that one would go with table module for data coming over the wire ... at least to a point. Note, however, the sample code I posted to the beta forum and how it articulates a domain model object with XML data coming over the wire. This domain model object may or may not be the same class as was on the sending end. E.g., the receiving class might be a read-only class because the consuming context had no power to override existing values.
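The receiving-end idea, articulating a read-only domain object from XML data coming over the wire, might look roughly like this (an illustrative Python sketch, not the actual sample code posted to the beta forum; all names are invented):

```python
import xml.etree.ElementTree as ET

class ReadOnlyItem:
    """Receiving-side domain object built from XML off the wire.
    It exposes no setters, so the consuming context cannot override
    the values it received."""
    def __init__(self, item_id, price):
        self._item_id = item_id
        self._price = price

    @property
    def item_id(self):
        return self._item_id

    @property
    def price(self):            # read-only: no setter defined
        return self._price

    @classmethod
    def from_xml(cls, xml_text):
        node = ET.fromstring(xml_text)
        return cls(node.findtext("itemId"), float(node.findtext("price")))


wire = "<item><itemId>X1</itemId><price>9.95</price></item>"
item = ReadOnlyItem.from_xml(wire)
```

Note the receiving class need not match the sending class at all; only the wire format is shared, which is the decoupling point made above.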

The OpenEdge ABL includes a good Record Set framework, ie ProDataSets, which lends weight to a Table Module approach to domain logic.

I am still exploring this. PDS have a lot of cool features, but I have some questions about whether the current availability of classes supersedes those features: PDS were a sort of standardized proto-object, and we might not want to be restricted in that way when doing a full-fledged OO implementation. For example,

1) the callbacks have to be defined as public, which is breaking encapsulation;

2) there is no built-in method to create a table of objects (can be done with AFTER-ROW-FILL, but hand coded); and

3) the result of a PDS fill, even with the AFTER-ROW-FILL override, is a temp-table of Progress.Lang.Object, not a collection object, so one sort of has to do it all over again.

I haven't made up my mind on this point yet. At some times I lean toward them and at others I'm not sure.

Take another look at the Table Module pattern in Fowler's Patterns in EAA.

I have ... might be a difference in interpretation ... might be a communication issue ... could possibly be a case where I don't agree with Fowler (happens occasionally), but I don't think I am getting the same message that I think you are getting. Can you point to a page?

Context always plays its part. But the Domain Model pattern and the Table Module pattern are distinctly different design/implementation approaches which would have distinctly different probabilities wrt reuse on subsequent requests after instantiation.

I don't see how one can say anything about probability of re-use without context and, given a context, I don't see how the implementation impacts that re-use. If I have an Order or an Item, I am either likely to want it again or not.

Posted by Phillip Magnay on 28-Nov-2006 15:08

Take another look at the Table Module pattern in Fowler's Patterns in EAA.

I have ... might be a difference in interpretation ... might be a communication issue ... could possibly be a case where I don't agree with Fowler (happens occasionally), but I don't think I am getting the same message that I think you are getting. Can you point to a page?

Pages 96-97 cover the pros and cons between the options. Then 117-124 for the Domain Model and 125-132 for the Table Module.

Context always plays its part. But the Domain Model pattern and the Table Module pattern are distinctly different design/implementation approaches which would have distinctly different probabilities wrt reuse on subsequent requests after instantiation.

I don't see how one can say anything about probability of re-use without context and, given a context, I don't see how the implementation impacts that re-use. If I have an Order or an Item, I am either likely to want it again or not.

Certainly I can make a reasonable judgement if the context is a commonly applicable use case and is applied equally to each design/implementation choice.

I'll go back to your inventory service example; let's take a common use case such as an inventory item update operation.

Domain Model scenario - upon an inventory update request to the server, a domain object for an inventory item is instantiated (after first checking the cache) via a factory method through the data access layer for a given inventory item id. One or more setters update the object's attributes, then an update method is called to validate and commit these changes to the database via the data access layer. The domain object for the given inventory item id remains instantiated (persists in memory) and a reference to this domain object instance is registered with the cache. A subsequent request to the server for an inventory item update, but with a different inventory id than the previous request, first checks the cache to see if a domain object instance has already been instantiated for that inventory id, doesn't find one, and so instantiates a new domain object instance; it sets attributes, validates, and updates the database. All subsequent requests to the server to update inventory items either find an existing domain object instance in cache or do not, depending on the value of the requested inventory item id. Those requests that do not find an existing domain object instance in cache instantiate a new domain object instance to use and add to the cache.

Table Module scenario - upon an inventory update request to the server, a table module object for inventory items is instantiated (after first checking the cache) and passed the given inventory id to retrieve through the data access layer. One or more setters update the object's data, then an update method is called to validate and commit these changes to the database via the data access layer. The table module object for inventory items remains instantiated (persists in memory) and a reference to it is registered with the cache. A subsequent request to the server to update another inventory item, but with a different inventory id than the previous request, first checks the cache for an instance of a table module object for inventory items, finds the existing instance created on the previous request, and retrieves the data for the given inventory item through the data access layer; it sets attributes, validates, and updates the database. All subsequent requests to the server to update an inventory item are certain to find the existing instance of the table module object for inventory items regardless of the requested inventory id.

In the first scenario, the probability of finding an existing domain object instance for a given inventory id in cache upon ensuing server requests is less than in the second scenario, where there is a certainty of finding the existing table module instance for all inventory items in cache upon ensuing server requests.
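The difference in cache-hit behavior between the two scenarios can be sketched in a few lines. This Python illustration uses hypothetical class names, and placeholder dictionaries stand in for the data access layer:

```python
class DomainModelCache:
    """Domain Model scenario: one cached object per inventory id,
    so a request for a new id is always a cache miss."""
    def __init__(self):
        self._items = {}          # inventory id -> domain object instance

    def get_item(self, item_id):
        hit = item_id in self._items
        if not hit:
            # stand-in for instantiating via the data access layer
            self._items[item_id] = {"id": item_id}
        return hit

class TableModuleCache:
    """Table Module scenario: a single module instance serves every id,
    so every request after the first finds it in cache."""
    def __init__(self):
        self._module = None       # one instance for the whole Item table

    def get_module(self, item_id):
        hit = self._module is not None
        if not hit:
            self._module = {"table": "Item"}
        # the existing module retrieves the row for item_id internally
        return hit

dm, tm = DomainModelCache(), TableModuleCache()
dm_hits = [dm.get_item(i) for i in (1, 2, 1, 3)]    # hit only on a repeated id
tm_hits = [tm.get_module(i) for i in (1, 2, 1, 3)]  # hit on every later request
```

Under this reading, the domain-object cache hits only when an id repeats, while the table-module cache hits on every request after the first.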

Phil

Posted by Thomas Mercer-Hursh on 28-Nov-2006 15:30

Pages 96-97 covers the pros and cons between the options.

So, despite Mr. Fowler's well-known balanced presentation, he says, among other things:

  • "High on that list is the difficulty of learning to use a Domain Model"

I.e., OO is hard to learn ... but that certainly isn't a negative about whether it is better.

  • "a Domain Model requires skill if it is to be done well"

OK, fine, I accept that any complex architecture must be done well, but our context here is ABL, a language created for enterprise class applications, not baby games.

  • "The second big difficulty of a Domain Model is its connection to a relational database"

Again, we are doing this in the context of ABL, where we are very good at dealing with RDBMS. There are a couple of rough spots, like direct-to-object PDS, but basically this problem area is one we should be very good at in ABL.

  • "while it can't touch a real Domain Model on handling complex domain logic..."

Isn't that what I've been saying?

We aren't talking about pet store or sports2000 applications here. ABL is used for extremely hairy, complex applications. Ergo, we should be selecting a Domain Model because it is the best at handling this complex logic.

upon an inventory update request to the server, a table module object for inventory items is instantiated

Ah, I see where our problem lies. It isn't reasonable to instantiate a table module object for all items. There can be millions of items in the database, only some of which are current and amongst the current ones some are rapid movers and some might sell once a year. In any practical enterprise system one would have to implement a partial table, adding new entries to it with exactly the same logic as for the domain model. That's why I don't see any difference.

Posted by Phillip Magnay on 28-Nov-2006 15:57

  • "while it can't touch a real Domain Model on handling complex domain logic..."

Isn't that what I've been saying?

Let me complete that quote: "...it fits really well with a relational database--and many other things too." "...the Table Module works nicely by playing to the strengths of the relational database and yet representing a reasonable factoring of the domain logic."

We aren't talking about pet store or sports2000 applications here. ABL is used for extremely hairy, complex applications. Ergo, we should be selecting a Domain Model because it is the best at handling this complex logic.

Assuming your situation involves extremely complex logic, then the Domain Model may be the right option. But there are some cons as well as pros to this decision.

upon an inventory update request to the server, a table module object for inventory items is instantiated

Ah, I see where our problem lies. It isn't reasonable to instantiate a table module object for all items. There can be millions of items in the database, only some of which are current and amongst the current ones some are rapid movers and some might sell once a year. In any practical enterprise system one would have to implement a partial table, adding new entries to it with exactly the same logic as for the domain model. That's why I don't see any difference.

A Table Module pattern does not load up all items from inventory at once. It would only load the needed inventory items per server request. If you don't see the difference, then that's OK.

Phil

Posted by Thomas Mercer-Hursh on 28-Nov-2006 16:16

Like I said, Fowler is always really balanced in his presentation and gives everything its due. Why else do you think he wrote the chapters for all of the other integration styles except messaging in Enterprise Integration Patterns? Obviously the authors were convinced that Messaging was the only thing of real interest, so someone had to step in for the sake of completeness to present the options which were not going to be discussed further.

While Fowler gives all alternatives their fair due here, even Transaction Scripting, I think it doesn't take any reading between the lines to get the message that Domain Model is the one thing to be using for complex applications. And, ABL wasn't created for whipping off little simple applications with half a dozen minimally populated columns. So, OK, let's be fair and mention alternatives, but let's not pretend that they are equal or that it is something of a toss up which one we use.

That isn't the topic I thought you started this thread on though.

Assuming your situation involves extremely complex logic, then the Domain Model may be the right option.

And how many real world, not demo, ABL applications do you think there are which do not involve extremely complex logic?

But there are some cons as well as pros to this decision.

To be sure and that is one of the things that people like Fowler help us to remember.

A Table Module pattern does not load up all items from inventory at once. It would only load up needed inventory items per server request. If you don't see the difference, then that's OK.

OK, I thought I might be on to the difference, but I see that I got that wrong. But, if both processes can be characterized by:

1) Check cache to see if already there;

2) If not, fetch appropriate instance;

3) Do update;

4) Leave in cache for the next operation.

How is one going to be more likely to find the instance in the cache than the other?

Posted by Phillip Magnay on 28-Nov-2006 20:57

While Fowler gives all alternatives their fair due here, even Transaction Scripting, I think it doesn't take any reading between the lines to get the message that Domain Model is the one thing to be using for complex applications. And, ABL wasn't created for whipping off little simple applications with half a dozen minimally populated columns. So, OK, let's be fair and mention alternatives, but let's not pretend that they are equal or that it is something of a toss up which one we use.

I disagree. You've come to a particular conclusion based on your views and priorities. I am not coming to any conclusion. Each approach has its respective pros and cons which may make one approach more suitable in some contexts, and the other more appropriate in other situations. Again, I have not and do not suggest it is a toss up. It's about determining the most suitable approach for the purpose. If you disagree then we will have to agree to disagree and move on.

That isn't the topic I thought you started this thread on though.

My intention was not to pick a winner. I wanted to discuss the respective pros and cons of these distinctly different approaches in different situations.

OK, I thought I might be on to the difference, but I see that I got that wrong. But, if both processes can be characterized by:

1) Check cache to see if already there;

2) If not, fetch appropriate instance;

3) Do update;

4) Leave in cache for the next operation.

How is one going to be more likely to find the instance in the cache than the other?

There is a very detailed explanation of the two approaches in my post above; an explanation which highlighted the distinctions wrt caching and how those distinctions affect the probability of locating an object instance in cache on subsequent server requests. I don't have anything to add to that explanation.

But I didn't want to get snagged. If you don't see these distinctions, then let's move on.

Phil

Posted by Thomas Mercer-Hursh on 29-Nov-2006 11:33

I wanted to discuss the respective pros and cons of these distinctly different approaches in different situations.

My point was that this started as a discussion about TLSA vs DDD, but has turned into one about Table Module vs Domain Model. I certainly wouldn't equate TLSA with Table Module, would you?

In particular, I see no reason not to pass domain objects between data access and business layers.

I don't have anything to add to that explanation.

When people are not understanding each other, I find it useful for one of them to try to restate what they think the other one has said so that one can find out whether what was heard was what was said. It became clear earlier that I didn't hear you correctly at one point because I incorrectly got the idea that you might be indicating that the whole table was instantiated. I focused on that because it would provide an explanation for what was different.

Can you tell me whether this characterization is correct for your understanding of Table Module caching?

1) Check cache to see if already there;

2) If not, fetch appropriate instance;

3) Do update;

4) Leave in cache for the next operation.

Posted by Admin on 30-Nov-2006 12:24

Hi there,

What I don't understand from PSC is that the 4GL is already doing this at the row (database) level:

Can you tell me whether this characterization is correct for your understanding of Table Module caching?

1) Check cache to see if already there;

2) If not, fetch appropriate instance;

3) Do update;

4) Leave in cache for the next operation.

Why can't they add a layer on top of it and return 4GL objects (class instances) instead of temp-tables...

Posted by Thomas Mercer-Hursh on 30-Nov-2006 12:40

Why can't they add a layer on top of it and return 4GL objects (class instances) instead of temp-tables...

I presume that you are referring to the -B cache?

As useful as that cache is, I think it is somewhat different from the kinds of caches that we are talking about here, which are more explicitly managed. The principles are similar, but any given cache might be managed in different ways according to the context. E.g., a cache of state codes would be initialized with all values at startup and would only be refreshed if there was some change to the stored data. A cache of items or customers would probably be managed on a most-recently used basis with objects aging out of the cache on some time limit. The time limit for items might be a day or more while that for customers might be shorter.
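A most-recently-used cache with entries aging out on a time limit, as described above, might be sketched like this (a simplified illustration; the class and method names are invented for the example, and per-cache TTLs stand in for the longer item limit versus the shorter customer limit):

```python
import time

class TTLCache:
    """Cache whose entries age out after a time limit. Each cache gets its
    own TTL, so an item cache can keep entries for a day or more while a
    customer cache expires them sooner."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}                     # key -> (value, inserted_at)

    def put(self, key, value, now=None):
        self._store[key] = (value, now if now is not None else time.time())

    def get(self, key, now=None):
        now = now if now is not None else time.time()
        entry = self._store.get(key)
        if entry is None:
            return None
        value, inserted_at = entry
        if now - inserted_at > self.ttl:     # entry has aged out of the cache
            del self._store[key]
            return None
        return value

cache = TTLCache(ttl_seconds=60)
cache.put("item-42", {"id": 42}, now=0)
fresh = cache.get("item-42", now=30)    # within the time limit: a hit
stale = cache.get("item-42", now=120)   # past the time limit: evicted
```

A state-code cache, by contrast, would simply be loaded once at startup and refreshed only on a change to the stored data, so it needs no TTL at all.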

I don't see how PSC could automatically utilize the -B cache for this purpose since the object related to the tables could be quite different from case to case, even with the same schema.

Posted by Admin on 30-Nov-2006 12:54

Why can't they add a layer on top of it and return 4GL objects (class instances) instead of temp-tables...

I presume that you are referring to the -B cache?

No, I'm not....

As useful as that cache is, I think it is somewhat different from the kinds of caches that we are talking about here, which are more explicitly managed. The principles are similar, but any given cache might be managed in different ways according to the context. E.g., a cache of state codes would be initialized with all values at startup and would only be refreshed if there was some change to the stored data. A cache of items or customers would probably be managed on a most-recently used basis with objects aging out of the cache on some time limit. The time limit for items might be a day or more while that for customers might be shorter.

I don't see how PSC could automatically utilize the -B cache for this purpose since the object related to the tables could be quite different from case to case, even with the same schema.

Like I said, I was not referring to the -B. I was referring to the buffer framework the runtime is using when querying/updating the database. This infrastructure could also be used by an object-relational mapping layer. Let's assume your favorite domain model: why fetch things in buffers first and then translate them into objects? Can't the 4GL hydrate class instances instead?

Posted by Thomas Mercer-Hursh on 30-Nov-2006 13:17

Like I said, I was not referring to the -B. I was referring to the buffer framework the runtime is using when querying/updating the database. This infrastructure could also be used by an object-relational mapping layer. Let's assume your favorite domain model: why fetch things in buffers first and then translate them into objects? Can't the 4GL hydrate class instances instead?

Well, I'm a little unclear here. What caching are you talking about then? Or are you not talking about caching at all?

As for the automatic marshalling to a class in a ProDataSet, this certainly would be nice, but it isn't all all clear how one would go about it. Pre-10.1B it would seem that one would have to either have public data members (yuck!) or some way to define accessor methods which map to the columns in the table(s). Even with properties, one would at the least have to provide this mapping. And, what about the case ... probably very typical case ... where the class has properties which come from multiple tables? When you glance back at the example I gave above of code for an AFTER-ROW-FILL, it really isn't a whole lot longer than a list mapping a database column to a property. Once we have real properties one would no longer need a SetAll method, but the setting could be very compact. With method overloading so that we can do the SetAll in the constructor, then it is more compact yet. Given that this is totally open and could be used to create one class for 10 or more tables and yet still be very compact, barely more than the number of lines required to enumerate the column to property correspondence, it doesn't actually seem that awful to me.

Posted by Admin on 01-Dec-2006 02:01

Well, I'm a little unclear here. What caching are you talking about then? Or are you not talking about caching at all?

When you want to abstract the physical database structure as well as the physical location from your domain model, you require some data access layer, right? The Fowler patterns give you several ways of creating an object-relational mapping (ORM), making it (theoretically) possible to change backends, physical table structures, etc. It at least makes sure that database access is concentrated in one layer.

When you step into the ORM-arena, you will soon have to manage a lot of things like (for inspiration, see http://www.hibernate.org/5.html):

- returning unique instances (the session context tracks rehydrated objects)

- managing the transaction context (invalidate the session context, for instance, when you write to the database, since 4GL triggers might invalidate the class instances, or objects, in memory)

- defining a query mechanism (use a strong API, where every method maps to a query, or use some object query language (OQL))
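The first duty on that list, returning unique instances, is usually handled with an identity map. A minimal sketch (the class and method names are illustrative, not a real API):

```python
class EntityManager:
    """Minimal identity map: repeated lookups for the same key return the
    same rehydrated instance, one of the ORM duties listed above."""
    def __init__(self, loader):
        self._loader = loader        # e.g. a data-access-layer fetch function
        self._identity_map = {}      # (entity_type, key) -> instance

    def find(self, entity_type, key):
        ident = (entity_type, key)
        if ident not in self._identity_map:
            # first request for this key: rehydrate through the loader
            self._identity_map[ident] = self._loader(entity_type, key)
        return self._identity_map[ident]

    def invalidate(self):
        # e.g. after a write, when triggers may have changed the stored data
        self._identity_map.clear()

em = EntityManager(loader=lambda t, k: {"type": t, "key": k})
a = em.find("Item", 42)
b = em.find("Item", 42)
same = a is b            # the identity map guarantees a single instance
```

The `invalidate` hook corresponds to the transaction-context point: after a database write, cached instances may no longer match the stored rows.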

In my opinion this is already handled by the 4GL runtime when you think in terms of BUFFERS. So why can't this be extended to broker/manage class instances? And you do need a declarative mapping mechanism, that's right! This would make the 4GL one of the few programming languages with transactional support on objects. It will take a while before .NET (C#) is able to replicate this behavior, since it's still at the research stage at Microsoft ("C# Software Transactional Memory")

(http://research.microsoft.com/research/downloads/Details/6cfc842d-1c16-4739-afaf-edb35f544384/Details.aspx?CategoryID=).

And Microsoft will introduce a native "entity framework" in Visual Studio 2007, which is basically what I'm talking about here (see http://blogs.msdn.com/adonet/archive/2006/07/11/662447.aspx or search for LINQ).

As for the automatic marshalling to a class in a ProDataSet, this certainly would be nice, but it isn't at all clear how one would go about it.

I would like to skip this step when loading data from the database via the "entity manager".

Posted by Thomas Mercer-Hursh on 01-Dec-2006 11:09

To me, this isn't really something that I am looking for as a change in the language, because I question whether it can be significantly more compact and, more significantly, I suspect it would provide less control. It is possible that one could have some kind of mapping in the same fashion as properties, where there is a default behavior and the potential to override, but even properties have their limitations. Suppose, for example, that the database has two sets of coordinate points and one wants the object to contain a distance property?
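That distance example might look like this in outline (a hypothetical sketch, not ABL property syntax):

```python
import math

class Segment:
    """The database row stores two coordinate points; the object exposes a
    derived distance property that no single column maps to, which is the
    case an automatic column-to-property mapping cannot cover."""
    def __init__(self, row):
        self.x1, self.y1 = row["x1"], row["y1"]
        self.x2, self.y2 = row["x2"], row["y2"]

    @property
    def distance(self):
        # computed from four columns, not read from any one of them
        return math.hypot(self.x2 - self.x1, self.y2 - self.y1)

seg = Segment({"x1": 0.0, "y1": 0.0, "x2": 3.0, "y2": 4.0})
```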

Instead, I think the need here is to work on MDA transforms since then one can work with the graphical UML models to define both the domain object and the database representation and have the data access objects automatically generated. Of all the MDA tasks facing us, this has to be the easiest.

Posted by Thomas Mercer-Hursh on 01-Dec-2006 11:11


Without having dug into it deeply, I suspect this is actually closer to what they are talking about in the material you posted, i.e., it is a development of the IDE more than the underlying language.

Posted by Jvanbouchaute on 12-Jan-2007 08:06

The "Table Module" approach (as Fowler names it) can be supported perfectly well in OE 10.1B thanks to the ProDataSet object. I have already seen some frameworks that implement such a model.

I am not sure whether a rich Domain Model (including the almost obligatory O/R mapping component) can be achieved easily with the current 10.1B OO ABL implementation.

Do the people at Progress consider this model to be fully supported by OE OO ABL?

Or was the "Table Module" approach the target ...

Introducing OO concepts in a language is one thing; being able to support a complex OO model for a complex business domain is another ...

E.g., how efficiently will the PVM manage a large number of instantiated domain objects, and the rate of instantiation/destruction of those, compared to a JVM or .NET?

For the moment, I have little visibility in that direction ...

So is the Domain Model option with OE OO ABL a valid one?

Kind regards

Jurgen Van Bouchaute.

Posted by Thomas Mercer-Hursh on 12-Jan-2007 12:02

At this point I see no reason why not.

This thread is closed