New and Revised OERA Definitions

Posted by Mike Ormerod on 22-Jun-2007 14:31

Hi

As a follow-up to my Exchange presentation, I just posted some new or revised definitions for each OERA component. Some are brand-new definitions for things we've never actually put out a paper on before, such as Service Adapter, Business Task, Business Workflow, etc. Some are revised, such as Business Entity, Data Access Object, Data Source Object, etc.

These definitions reflect our latest thoughts and refinements on the OERA and its components. Many of the changes to the revised papers are clarifications, as well as some subtle yet potentially interesting shifts.

As always, I'd welcome your feedback and comments. Maybe we can take a paper at a time and have a healthy discussion.

Mike

All Replies

Posted by Thomas Mercer-Hursh on 22-Jun-2007 14:42

Did you mean to post a link?

Posted by Mike Ormerod on 22-Jun-2007 15:18

Yes, good catch, although I figured everyone has the OERA section so well bookmarked !!!

For all the latest OERA stuff look here:

http://www.psdn.com/library/kbcategory.jspa?categoryID=54

Thanks

Posted by Thomas Mercer-Hursh on 22-Jun-2007 16:19

So, I eventually located 12 new definition documents ... right?

Posted by Thomas Mercer-Hursh on 25-Jun-2007 12:07

I just read the one on Presentation. I think it would be very valuable to either extend this paper or to provide a companion piece which talked about the MVP components in a bit more detail relative to how they are likely to be used for different UI types. I.e., what is a View actually going to contain if it is a traditional ABL GUI client versus a WUI client?

Posted by Mike Ormerod on 26-Jun-2007 08:52

Sorry for the tardy reply, but yes 12.

Posted by Thomas Mercer-Hursh on 26-Jun-2007 13:05

I really like the Design & Consequence section of these definitions.

Posted by Thomas Mercer-Hursh on 26-Jun-2007 14:13

Following on the one on Service Requester: Service Adapter in particular, another discussion that I would like to see is a mapping of architectural components, contexts, and the products that one would use to support them. E.g., a service adapter needing to communicate with a service provider which was on a different machine and which was utilizing a non-ABL technology would point one to a connection via Sonic. But, if both ABL then .... etc.

Posted by Thomas Mercer-Hursh on 26-Jun-2007 14:46

Following on the prior post and the one on Business Components: Service Interface, I think there are some interesting questions which one might raise, including:

You make a point of saying that the service interface is stateless. It is easy to see how it can be stateless between requests, i.e., there is no need for a memory of prior requests other than that which might be maintained by a context manager, as you note. However, it seems like one would actually want the call from a service interface to the appropriate business component to be async, since otherwise it will be locked and unavailable for accepting other requests until such time as the request is complete. Do you have an architecture to propose for that?

And, of course, if one achieves async calls to the business components, does this mean that the identity of the requestor travels with the message or is it part of the context?
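One way to picture the identity question is to carry the requestor's identity in the message envelope itself, so it travels with each async request rather than living only in shared context. Here is a minimal, purely illustrative sketch (all class and field names are hypothetical, and an in-process queue stands in for whatever real dispatch mechanism would be used):

```python
import queue
import uuid
from dataclasses import dataclass, field

@dataclass
class ServiceRequest:
    """Envelope: the requestor's identity travels with the message."""
    requestor_id: str
    operation: str
    payload: dict
    correlation_id: str = field(default_factory=lambda: str(uuid.uuid4()))

class ServiceInterface:
    """Stateless between requests: it only wraps, enqueues, and returns."""
    def __init__(self, dispatch_queue):
        self.dispatch_queue = dispatch_queue

    def submit(self, requestor_id, operation, payload):
        req = ServiceRequest(requestor_id, operation, payload)
        self.dispatch_queue.put(req)       # hand off without blocking
        return req.correlation_id          # caller matches the reply later

q = queue.Queue()
si = ServiceInterface(q)
cid = si.submit("user-42", "CreateOrder", {"item": "widget"})
handled = q.get()                          # a business component picks it up
assert handled.requestor_id == "user-42"   # identity arrived with the message
```

Under this shape, the service interface stays free to accept the next request immediately, and the correlation id is what ties an eventual async reply back to its requestor.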

Posted by Thomas Mercer-Hursh on 26-Jun-2007 18:42

In Business Components: Business Workflow, most of the document seems to suggest that a Business Workflow is a component embedded in the Business Component layer. The exception being at the bottom of page 3 where you reference the simple solution of Sonic itineraries or the more complex one of using a BPEL engine. I get that one still has a conceptual Business Workflow element in the design, but I think it would be good to acknowledge that addressing this need with a component in the BC layer, addressing it on the bus, or some mixture of the two are really quite different solutions. This should show up in Design & Consequence particularly.

Posted by Thomas Mercer-Hursh on 26-Jun-2007 19:14

Business Components: Business Task is one that underscores questions about what products and architecture lie under this conceptual structure when one gets to implementation, notably the concept of a Business Task as something that is stateless and persistent.

If we are in the context of a service, as in SOA, then it seems perfectly reasonable that the session would create some task objects and that these might persist so as to be readily available for the next use (back to this in a minute).

If, however, we are thinking in terms of something running on AppServer, would one persist a task object beyond a single invocation? Or, are you thinking in terms of running multiple AppServers, each specialized for a particular area so that having each session have a number of utilities running and ready to go could be sensible?

And, if tasks persist for re-use, what cleans them up? Do you need a task manager which kills off least recently used tasks after a certain time?

And, does the task manager categorize tasks? I.e., task A is performed frequently, so leave it running, but task B is performed only at month end, so kill it off when done.

Posted by Thomas Mercer-Hursh on 02-Jul-2007 11:32

Well, you knew I had to get there eventually, didn't you... In Business Components: Business Entity you are perpetuating the structure which I have argued against here ( http://www.oehive.org/OERAStrategies ) of separating the Business Entity from the Data Instance, thus separating the data from the logic and failing to encapsulate the data definition which must appear in both the Business Entity and the Data Access object.

I also think there is an overly simplistic expectation here that one can have a super Business Entity that incorporates all possible logic related to a Data Instance. This misses out on a lot of the potential for inheritance in OO. Take, for example, the classic case of an Order class which has two subclasses, InternalOrder and ExternalOrder. How do you construct one Business Entity which covers that? Especially without losing the advantages of inheritance?
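To make the inheritance concern concrete, here is a minimal sketch (class names and discount rules invented for illustration, not taken from the papers) of the Order/InternalOrder/ExternalOrder split. A single flat Business Entity would have to fold both discount policies into one class with conditionals, losing the polymorphic dispatch shown here:

```python
class Order:
    def __init__(self, number, lines):
        self.number = number
        self.lines = lines              # list of (qty, unit_price) tuples

    def subtotal(self):
        return sum(qty * price for qty, price in self.lines)

    def total(self):
        # Shared calculation; only the discount policy varies by subclass.
        return self.subtotal() - self.discount()

    def discount(self):
        return 0.0                      # default: no discount

class InternalOrder(Order):
    def discount(self):
        return self.subtotal() * 0.5    # e.g. internal transfers at cost

class ExternalOrder(Order):
    def discount(self):
        return 5.0 if self.subtotal() > 100 else 0.0

orders = [InternalOrder("I-1", [(2, 50.0)]), ExternalOrder("E-1", [(2, 50.0)])]
totals = [o.total() for o in orders]    # each order applies its own policy
```

Code that works with a collection of Orders never needs to know which subclass it holds; that is the advantage that a single all-encompassing Business Entity would give up.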

Posted by Mike Ormerod on 02-Jul-2007 11:56

> I just read the one on Presentation. I think it would be very valuable to either extend this paper or to provide a companion piece which talked about the MVP components in a bit more detail relative to how they are likely to be used for different UI types. I.e., what is a View actually going to contain if it is a traditional ABL GUI client versus a WUI client?

If you've not already, I'd recommend looking at the Exchange session from John & Sasha, Arch-11: Building your Presentation with Classes, which goes into more detail on MVP and the components.

Posted by Mike Ormerod on 02-Jul-2007 11:59

> Following on the one on Service Requester: Service Adapter in particular, another discussion that I would like to see is a mapping of architectural components, contexts, and the products that one would use to support them. E.g., a service adapter needing to communicate with a service provider which was on a different machine and which was utilizing a non-ABL technology would point one to a connection via Sonic. But, if both ABL then .... etc.

Yes, I'd agree, and it's something we're considering at the moment.

Posted by Mike Ormerod on 02-Jul-2007 12:01

> Following on the prior post and the one on Business Components: Service Interface, I think there are some interesting questions which one might raise, including:
>
> You make a point of saying that the service interface is stateless. It is easy to see how it can be stateless between requests, i.e., there is no need for a memory of prior requests other than that which might be maintained by a context manager, as you note. However, it seems like one would actually want the call from a service interface to the appropriate business component to be async, since otherwise it will be locked and unavailable for accepting other requests until such time as the request is complete. Do you have an architecture to propose for that?
>
> And, of course, if one achieves async calls to the business components, does this mean that the identity of the requestor travels with the message or is it part of the context?

This also plays into a bigger picture with regard to a topic I also touched on at Exchange: being a service as part of a series, all of which are async calls. How do you then handle that, plus transactions, etc.? This is certainly an area we need to build out further, as it poses lots of interesting questions and challenges.

Posted by Mike Ormerod on 02-Jul-2007 12:04

> In Business Components: Business Workflow, most of the document seems to suggest that a Business Workflow is a component embedded in the Business Component layer. The exception being at the bottom of page 3 where you reference the simple solution of Sonic itineraries or the more complex one of using a BPEL engine. I get that one still has a conceptual Business Workflow element in the design, but I think it would be good to acknowledge that addressing this need with a component in the BC layer, addressing it on the bus, or some mixture of the two are really quite different solutions. This should show up in Design & Consequence particularly.

Maybe I should tweak the document slightly. You're correct in saying there are really two levels of workflow. There is workflow internal to the application or service being called, which may be a composite of many other internal services. Then there is what you could consider as external orchestration of services across external services from different applications or providers. So maybe I should make that a bit clearer.

Posted by Mike Ormerod on 02-Jul-2007 12:09

> ....
>
> And, if tasks persist for re-use, what cleans them up? Do you need a task manager which kills off least recently used tasks after a certain time?
>
> And, does the task manager categorize tasks? I.e., task A is performed frequently, so leave it running, but task B is performed only at month end, so kill it off when done.

To address exactly the issue you raise, we refer to the Service Manager (http://www.psdn.com/library/entry.jspa?externalID=1111&categoryID=111). The SM, along with some form of configuration (be that a formal Configuration Manager or simply a text file), would determine the life of a component: whether it should be destroyed after use or should persist for the next call. It's also in the SM that I'd expect to implement whatever form of caching rules (FIFO, least-used, whatever) to manage the pool of persistent components.
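As a rough illustration of the kind of eviction policy a Service Manager might apply (the class and method names below are hypothetical, not from the OERA papers), here is a least-recently-used component pool sketched in Python:

```python
from collections import OrderedDict

class ServiceManager:
    """Hands out components, keeping at most `capacity` alive (LRU eviction)."""
    def __init__(self, capacity, factory):
        self.capacity = capacity
        self.factory = factory        # builds a component from its name
        self.pool = OrderedDict()     # name -> component, oldest first

    def acquire(self, name):
        if name in self.pool:
            self.pool.move_to_end(name)      # mark as most recently used
            return self.pool[name]
        component = self.factory(name)
        self.pool[name] = component
        if len(self.pool) > self.capacity:
            self.pool.popitem(last=False)    # evict least recently used
        return component

sm = ServiceManager(capacity=2, factory=lambda name: {"task": name})
sm.acquire("OrderMaintenance")
sm.acquire("CustomerMaintenance")
sm.acquire("OrderMaintenance")               # touch: now most recent
sm.acquire("MonthEndClose")                  # evicts CustomerMaintenance
```

A frequently used task stays resident while a month-end task ages out naturally; a FIFO or time-based policy would only change the bookkeeping inside `acquire`.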

Posted by Mike Ormerod on 02-Jul-2007 12:13

> Well, you knew I had to get there eventually, didn't you... In Business Components: Business Entity you are perpetuating the structure which I have argued against here ( http://www.oehive.org/OERAStrategies ) of separating the Business Entity from the Data Instance, thus separating the data from the logic and failing to encapsulate the data definition which must appear in both the Business Entity and the Data Access object.
>
> I also think there is an overly simplistic expectation here that one can have a super Business Entity that incorporates all possible logic related to a Data Instance. This misses out on a lot of the potential for inheritance in OO. Take, for example, the classic example of an Order class which has two subclasses InternalOrder and ExternalOrder. How do you construct one Business Entity which covers that? Especially without losing the advantages of inheritance?

And of course, the OERA isn't code, it's a guideline, so please implement how you'd like. Now what's interesting is I had a conversation with someone at Exchange who'd gone the route of having each record as an object, so each customer was its own class. It was interesting, and maybe not surprising in some ways, to hear that the big issue was performance once you got over a certain number of objects. So to my mind I think this goes against the grain of what the language does best. But we've been round this discussion a few times. I'm always ready to be shown a clever way of doing what you suggest in a performant and scalable way, so please post your code.

Posted by Thomas Mercer-Hursh on 02-Jul-2007 12:21

That presentation doesn't really deal clearly with how the pattern varies according to the type of client. There is reference to it, but the actual concrete example is all heavy GUI client. My point is that I think it would help people a great deal to have a discussion which covered multiple types of Presentation in order to illustrate how the pattern worked in different contexts, i.e., both so that one could see where a component needed to change versus where it could remain the same, and where it might be located in a different part of the network.

That presentation also suffers from the fallacy of separating the Data Instance from the Business Entity.

Posted by Thomas Mercer-Hursh on 02-Jul-2007 12:23

Yes, and thanks for your Exchange presentation. I hope it makes it into a whitepaper soon because it was a good discussion of the issues.

Posted by Thomas Mercer-Hursh on 02-Jul-2007 12:25

And, of course, internal and external are really two different ways of potentially achieving the same thing. There are probably cases that should remain internal in all cases, but many others could be either, depending on the architecture.

Posted by Thomas Mercer-Hursh on 02-Jul-2007 12:28

I'll add that to my stack of reading material...

Posted by Thomas Mercer-Hursh on 02-Jul-2007 12:35

Recognize that having a Customer business object class is not necessarily mutually exclusive with having a CustomerSet class as well which has a TT or PDS internal to it. I think one can make an argument for that and I am currently reserving judgment based on testing. I think my instinct, if it is not a performance issue, is to use generic collections, but I can certainly see that if one is going to do heavy processing on a particular set, that having a specialized class for the set might well be compelling.

It would be interesting to hear where the performance issues they encountered came from and some details of their implementation. E.g., I know that there was some discussion a while back about using a one line TT inside a customer class in order to take advantage of WRITE-XML, but then there was also an issue that sessions became bogged down when there were thousands of temp-tables. Using properties and a hand-crafted serialization would not have that same issue.

It's the non-encapsulated data that I really object to.

Posted by Thomas Mercer-Hursh on 02-Jul-2007 12:52

It may not be code, but you have created a basic definition which separates the Data Instance from the Business Entity. That points very strongly to a specific code implementation. If I implement a True OO model, then I am not conforming to your definition. Hence my objection.

Posted by Thomas Mercer-Hursh on 03-Jul-2007 11:35

I've been thinking overnight about this "OERA isn't code" business. At the level of the overall OERA diagram, you're right, it isn't code and there are many possible implementations which fit the overall structure. But PSC hasn't stopped with the overall diagram. There are a bunch of whitepapers with both more detailed discussions which imply code as well as actual code samples. There is AutoEdge ... advertised as not being a production-ready example, but a very concrete example all the same and unavoidably one that will be perceived of as a recommendation. And these definitions may not include actual code, but they imply code in parts. The very vocabulary of Data Instance versus Business Entity implies two pieces, one with logic and one with data.

And, frankly, there should be code. Not rigid, prescriptive code and quite possibly several versions of code, but code none the less. Code is the concrete form in which people are most likely to be able to take in the concepts. Concrete examples are powerful teaching tools.

But, it really should be very good code. Saying that it is just illustrating an idea is not an excuse. If you want people to understand the ideas and develop best practices, then you need to provide good examples. If they aren't good examples then you are getting in the way of the teaching. Worse yet, you can even be guiding people to bad examples.

And, as I say, I think there needs to be different versions. There are lots of people out there with unrepentant procedural suites of applications who can benefit from OERA ideas, but who are not about to embrace OO. So, fine, they need examples that will fit within their programming approach. But, apparently there are also a lot of people who are looking at embracing OO and are either writing from scratch or considering transformations. PSC should be providing good models for them to do that.

What you are doing right now is a model that is a carryover from pre-OO days. It might have been all right or the best one could do without OO, but that doesn't make it the best possible OO model. What is the best OO model is something that needs serious attention and it needs it from people who "get" OO, not people who are just trying to re-invent the same structure with an OO finish.

Now, I think I have presented a very strong argument why TOO is better than ROH. Perhaps it will turn out with testing that there are some practical considerations which will argue against that. This would be a very important thing to know. If it is true, then it might mean that those who argue that ABL OO isn't ready for prime time are right.

I haven't seen any counter to my argument other than the idea that ABL has all these nice structures for handling relational data and so we ought to use them. This is equivalent to saying that the New UI allows us to create really nice UIs so everyone should be writing .NET heavy clients. Well, no, they shouldn't. In fact, there are a lot of reasons not to want any kind of heavy client. There are places where the new UI will be great and places where it is irrelevant. If packaging things in relational data sets was a be all and end all, why bother adding OO to the language?

Posted by Thomas Mercer-Hursh on 03-Jul-2007 12:31

A question about Business Entity, Business Task, and Transaction. You talk a little about the difference of Managed and Unmanaged and how one can rely on the native ABL transaction for the former.

But, isn't there an important third case where the data is ultimately in a managed data store, but at least part of that data is managed by another service. E.g., the Order Service gets a copy of a Customer from the AR Service and the customer says, "Oh, yeah, my address has changed" so the Order Service modifies the address and needs to send it back to the AR Service for persistence.

In a case like this, the Order Business Entity which contains the Customer data or the Customer Business Entity can start a logical transaction, but it shouldn't actually be starting an ABL transaction. Instead, it needs to adopt one of the mechanisms discussed in your Exchange talk for noticing whether the "transaction" completed normally or not.

So, the only case where it is possible to use an ABL transaction in the Business Entity is where the Business Logic layer and the Data Access layer are on the same machine and where the data is directly, i.e., locally managed by the Data Access layer component. This makes me wonder if one should be using ABL transactions in the Business Entity at all. Shouldn't the handling at that level be uniform, regardless of how and where the data is stored? Shouldn't one be able to switch the Data Access component from a local one to a facade for a remote source without changing anything in the Business Logic/Components level?

BTW, using an encapsulated Business Entity object instead of just passing a dataset largely gets rid of this issue since I don't believe that one would ever start an ABL transaction anywhere above the level of this object and it wouldn't start this transaction until it was returned to the Data Access layer for persistence.
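The distinction being drawn between a logical transaction and a native ABL transaction can be sketched generically as a compensation pattern (the names and steps below are invented for illustration, not prescribed by the papers): each completed step records an undo action, and a failure triggers compensation for whatever already completed, since no single database transaction spans the services involved.

```python
class LogicalTransaction:
    """Tracks completed steps so they can be compensated on failure."""
    def __init__(self):
        self.compensations = []   # undo actions, in completion order

    def run(self, steps):
        # Each step is (action, compensation); both are zero-arg callables.
        try:
            for action, compensation in steps:
                action()
                self.compensations.append(compensation)
            return True
        except Exception:
            for undo in reversed(self.compensations):
                undo()            # roll back completed steps, newest first
            return False

def fail():
    raise RuntimeError("AR service unreachable")

log = []
steps = [
    (lambda: log.append("order saved"), lambda: log.append("order voided")),
    (fail, lambda: None),         # e.g. sending the address back to AR
]
ok = LogicalTransaction().run(steps)
```

The point of the sketch is that the "rollback" here is application logic, not database machinery, which is exactly why a Business Entity holding remotely managed data cannot simply open a native transaction.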

Posted by Thomas Mercer-Hursh on 03-Jul-2007 12:32

While I'm at it, I'm not sure that I approve of the switch from calling it the Business Logic layer to calling it the Business Components layer. Seems non-standard for the industry.

Posted by Thomas Mercer-Hursh on 03-Jul-2007 13:25

In the Enterprise Services document you talk about the potential problem of having provided a service to a number of consumers and then needing to change the service to add a new parameter. You propose making this a new service to avoid disruption. Isn't this a good example of the reason for using XML to convey the data since one can usually add new data within an XML document and those who don't need it can continue to process the old fields without problem, but the service which needs the new data can be modified to access it?
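The forward-compatibility property being described can be shown with a small example (the element names are invented for illustration): a consumer that only reads the fields it has always known about is unaffected when the producer adds a new element.

```python
import xml.etree.ElementTree as ET

# Version 2 of the document adds <priority>, which v1 consumers never ask for.
order_v2 = """
<order>
  <number>1001</number>
  <customer>ACME</customer>
  <priority>high</priority>
</order>
"""

def old_consumer(xml_text):
    """A v1 consumer: reads only the fields it has always known about."""
    root = ET.fromstring(xml_text)
    return root.findtext("number"), root.findtext("customer")

def new_consumer(xml_text):
    """A v2 consumer: additionally reads the new field."""
    root = ET.fromstring(xml_text)
    return root.findtext("number"), root.findtext("priority")

old_result = old_consumer(order_v2)   # ("1001", "ACME") -- unaffected
new_result = new_consumer(order_v2)   # ("1001", "high")
```

Because the old consumer navigates by element name rather than by position, the added element is simply ignored, which is the argument for XML payloads over fixed parameter lists.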

Posted by Thomas Mercer-Hursh on 03-Jul-2007 13:54

One thing that I think might be a useful global addition to these definitions and to the general OERA discussion is to include more explicit references to the interface to the bus, when a bus is in use. I.e., it needs to cover cases of OERA structure where there is no bus, but when a bus is being used, it would be good to have some discussion of when and how it would be used. E.g., the references to Common Infrastructure all mention Context Management and Security Management, but don't mention the possible ESB connection. This would be particularly relevant in the workflow discussion we had earlier and in the Data Access context where it would help in pointing out the relevance of remote data sources that were effectively unmanaged, even if they were ultimately coming from a Progress database.

Posted by Thomas Mercer-Hursh on 03-Jul-2007 14:39

The Data Access pieces raise a couple of questions.

The first, obviously, is the use of Data Instance rather than Business Entity Object, but I've raised that already so I won't repeat myself ... yet.

It is interesting that pretty much all the way through you have made all of the objects stateless and singletons. In effect, the state is contained in the Data Instance which conveys state on the object in which it resides at the moment.

One of the obvious issues that this raises is that there seems to be no apparent opportunity for caching. Suppose, for example, that a particular session is an instance of an Order Processing Service, which needs Item information that comes from an Inventory Service. It is likely to be inefficient for the Order Processing Service to make an Item by Item request for information from the Inventory service so it seems that one might want to use some kind of cache. This might be a complete cache of the data created at startup or it might be an open ended cache based on each Item as it is requested or it might be a fixed size LRU cache.

In your terminology, would you implement such a cache as a special kind of Data Source which the Data Source Object then accesses?
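One possible shape for the cache-as-Data-Source idea (purely illustrative; these class names are not from the papers) is a read-through wrapper that presents the same interface as the underlying source, so the Data Source Object need not know it is talking to a cache:

```python
class RemoteItemSource:
    """Stands in for the Inventory Service: expensive per-item requests."""
    def __init__(self):
        self.requests = 0

    def fetch(self, item_id):
        self.requests += 1
        return {"id": item_id, "desc": "Item %s" % item_id}

class CachingItemSource:
    """Read-through cache presented as just another Data Source."""
    def __init__(self, backing):
        self.backing = backing
        self.cache = {}               # open-ended; could be LRU-bounded

    def fetch(self, item_id):
        if item_id not in self.cache:            # miss: go to the real source
            self.cache[item_id] = self.backing.fetch(item_id)
        return self.cache[item_id]

remote = RemoteItemSource()
source = CachingItemSource(remote)
source.fetch("A1")
item = source.fetch("A1")             # second call served from the cache
```

Swapping in a startup-time complete cache or a fixed-size LRU cache changes only the internals of `CachingItemSource`; the consuming Data Source Object is untouched.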

Also, I think there is a certain irony that the typical motivator for making objects stateless is that one can then replicate them as needed to support greater load, but here everything is a singleton. Of course, in a single threaded session, replication wouldn't buy one anything.

I also confess that I keep having this overall architectural conflict based on my non-ABL experience with creating objects as services. In those contexts, one might create an object to be a source of Item data, for example, and depending on the nature of the data and the pattern of access one might either make it a singleton and allow it to cache or one might make it stateless in order to replicate it. But, in either case the Item object(s) would be providing this service to all the other objects on that node and, as we see when we implement this sort of thing with AppServer, a relatively small number of stateless agents can suffice to provide services to a very much larger collection of objects needing that service. Here, though, it seems like everything is going to be limited to the boundaries of a session except perhaps the link between the Presentation Layer and the Business Logic/Components layer.

This seems to me to be a key architectural problem. In particular, it seems unlikely that, if we are using AppServer, that we want to have to initialize everything every time in order to keep the agent completely general purpose since then there would be a huge overhead in startup with every call. Instead, it seems like we might want to do something like create a block of agents under each of multiple AppServers in which the agents for each AppServer were dedicated to providing a particular type of service and thus could have pre-initialized a set of stateless components which were those normally needed for supporting that service. Either that or we need a way for the AppServer agents to be very lightweight facades which then pass work on to pre-initialized sessions in some way. What way is the question.

Posted by Admin on 04-Jul-2007 14:05

Hi guys

Most of the time the fun starts when you dive into the details. This can be done either by implementing a piece of the design or by getting more concrete in the design.

The argument for using a "business task" is to get a grip on the cross-business-entity object invocations. But the communication needs to be there, so instead of business entities talking to each other, we have business tasks talking to each other. I don't see a real advantage there.

Let's take the real-life example where the address is a normalized piece of data and your purchase order has a delivery address. You could have modelled the data like this:

- you have customers

- customers have different types of addresses

- you create a link between order and customer address

In the proposed architecture you have at least the order business entity object, the customer BO and the address BO. There will be at least two tasks: order maintenance and customer maintenance. So storing an order means invoking the order maintenance business task. This task calls the customer maintenance business task to validate the address, since that can be rather complex logic. Finally my point: how can the order business entity guarantee data integrity when it doesn't check the address itself? Why should it trust the business task?

Furthermore, I don't believe in managing distributed transactions in 4GL logic, creating a compensating transaction, etc. I think life is complex enough with the current 4GL and its implicit transactional behavior. I would like to be convinced otherwise, of course ;-)

Posted by Thomas Mercer-Hursh on 04-Jul-2007 16:32

You seem to assume that any given table in the DB will be represented by a separate business entity in the business components layer and that one will couple those together with tasks to perform operations which span tables. In your example, I don't think one would have any more than the Order business entity and any updates to address or customer would happen in the data access layer when it was time to persist the changes. In the data access layer one might well have multiple data storage objects for the three, especially if some of the data was not local.

Download Mike's talk. In a distributed environment, you really can't use ABL transaction scoping. You have to do something else.

Posted by Admin on 05-Jul-2007 00:48

> You seem to assume that any given table in the DB will be represented by a separate business entity in

No, that's what you assume. An order is a complex object, as is a customer and an address. They don't have to map to one table.

> operations which span tables. In your example, I don't think one would have any more than the Order business entity and any updates to address or customer would happen in the data access layer

So where do you put the validation logic for the address? Where do you stop in the order entity? You clearly haven't thought about that yet...

> Download Mike's talk. In a distributed environment, you really can't use ABL transaction scoping. You have to do something else.

I don't have to download it, I know that from experience...

Posted by Thomas Mercer-Hursh on 05-Jul-2007 10:16

I didn't have to go to it either, but it was still worthwhile.

Posted by Admin on 05-Jul-2007 13:57

> Actually, I have and there are quite a few different possibilities depending on design, including a validation service. Some might involve workflow, some would be internal to the BE, depending on the business requirements.

That's not very concrete. There are two possibilities with the OERA design:

1) either you put that logic in your BE, which means duplicating that code from elsewhere

2) or your BT (task) is responsible for it, so it will do a cross task call

According to the BT restrictions, it can't call the layer above, so it can't reach another service. The sandbox design restricts it to talking to entities or other BTs in its scope.

Now, with this knowledge, where are you going to implement a complex validation routine like validating the delivery address? The address logic is in another logical partition ("sales order" versus "customer", which also plays the role of "debtor").

Posted by Thomas Mercer-Hursh on 05-Jul-2007 14:04

Why do you think that a business task can't call another service? It can't call another task, but that doesn't keep it from calling another service.

Posted by Admin on 05-Jul-2007 14:33

Please be concrete and specify "service". The design documents demonstrate the sandbox: BT calls BE, BE doesn't call back. A BT can implement a service interface. Also, from the BT document:

"... In order to expose its methods publically, i.e. to a Service Requester, the Business Task is dependent upon a Service Interface, and it is only through a Service Interface that a Service Requester or Client can access the functionality of the Business Task ..."

So effectively a BT can't be sure it calls other BTs, since it needs to use the service requester to get hold of a BT. So this outbound call to something else can result in a workflow call, one level up in another service. Here you have the same effect the OERA design tries to solve with the BT: avoiding BE cross-communication.

Posted by Thomas Mercer-Hursh on 05-Jul-2007 15:36

Figure 4 of the Service Requester: Service Adapter document shows the Business Components acting as a Service Requester calling out to Enterprise Services. E.g., Order Processing needing to call out for a credit card authorization.

Posted by Admin on 06-Jul-2007 00:25

Exactly, so you will have a parallel transaction without even knowing it...

Posted by Mike Ormerod on 06-Jul-2007 07:34

> In the proposed architecture you have at least the order entity business object, the customer BO and the address BO. There will be at least two tasks, order maintenance business task and customer maintenance.

Why two tasks? Why would this not be one task that simply orchestrates across the BEs that you've identified? Each BE would still be responsible for its own functions, so the customer BE would be responsible for maintenance of the customer details, etc. Not saying you couldn't do it as two tasks, I just don't see why you would.

Posted by Mike Ormerod on 06-Jul-2007 07:41

> Please be concrete and specify "service". The design documents demonstrates the sandbox: BT calls BE, BE doesn't call back. A BT can implement a service interface. Also from BT-document:
>
> "... In order to expose its methods publically, i.e. to a Service Requester, the Business Task is dependent upon a Service Interface, and it is only through a Service Interface that a Service Requester or Client can access the functionality of the Business Task ..."
>
> So effectively a BT can't be sure it calls other BT's, since it needs to use the service requester to get hold of a BT. So this outbound call to something else can result in a workflow call, one level up in another service. Here you have the same effect the OERA-design tries to solve with the BT, avoiding BE cross communication.

Then I think there is some confusion here!! Sure, a BT is dependent on an SI to expose its methods or services to the outside world, i.e. a Service Requester. But it's not dependent on an SI to expose its methods to other BTs. Where it gets a bit more complicated is if the BT needs to call another BT which is a service on a different system; then we need to use the Service Adapter to call out.
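
To make the local-versus-remote distinction concrete, here is a minimal sketch in Python (not ABL, and all class and method names are invented for illustration): a Business Task invokes a local BT directly, and goes through a Service Adapter only when the target BT lives in another system.

```python
# Hypothetical sketch (not from the OERA papers): a Business Task calls a
# local BT directly, but routes through a Service Adapter when the target
# BT is a service on a different system.

class BusinessTask:
    def __init__(self, name):
        self.name = name
    def run(self, request):
        return f"{self.name} handled {request}"

class ServiceAdapter:
    """Hides whether the target task is local or remote."""
    def __init__(self, local_tasks, remote_call=None):
        self.local_tasks = local_tasks   # name -> BusinessTask in this session
        self.remote_call = remote_call   # transport to another system (AppServer, ESB, ...)
    def invoke(self, task_name, request):
        task = self.local_tasks.get(task_name)
        if task is not None:
            return task.run(request)     # same-session call, no Service Interface needed
        if self.remote_call is None:
            raise LookupError(f"no route to task {task_name!r}")
        return self.remote_call(task_name, request)  # crosses the service boundary

adapter = ServiceAdapter({"OrderMaintenance": BusinessTask("OrderMaintenance")},
                         remote_call=lambda n, r: f"remote {n} handled {r}")
print(adapter.invoke("OrderMaintenance", "order 42"))  # local call
print(adapter.invoke("CreditCardAuth", "order 42"))    # routed out via the adapter
```

The design point is only that the calling BT never needs to know which path was taken; the adapter encapsulates the routing decision.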

Posted by Mike Ormerod on 06-Jul-2007 07:42

> Exactly, so you will have a parallel transaction without even knowing it...

Well, hopefully someone knows that an external service was called, and why.

Posted by john on 23-Jul-2007 11:42

> Following on the prior post and the one on Business Components: Service Interface, I think there are some interesting questions which one might raise, including:
>
> You make a point of saying that the service interface is stateless. It is easy to see how it can be stateless between requests, i.e., there is no need for a memory of prior requests other than that which might be maintained by a context manager, as you note. However, it seems like one would actually want the call from a service interface to the appropriate business component to be async since otherwise it will be locked and unavailable for accepting other requests until such time as the request is complete. Do you have an architecture to propose for that?

I'm not sure I follow. At least in the straightforward case, the Service Interface and the Business Component(s) it provides access to are on the same session (on an AppServer, for instance). So the SI is part of the same execution thread as whatever it executes. Now if the request in fact becomes a remote request to yet another session, then async could be a valuable addition to the implementation. Another variant of this could be if the service requester (an OE client, for instance) is making potentially multiple requests of potentially multiple other sessions (AppServers, for instance). Then it could be a valuable extension to the simple client-side Service Adapters shown to allow their requests to be async. This is a good area for a concrete example, especially since I suspect that fairly few ABL programmers have worked with async AppServer requests, a useful and under-advertised product feature.
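
As a concrete illustration of that under-advertised pattern, here is a language-neutral sketch in Python (a thread pool and futures stand in for asynchronous AppServer requests; the service names are invented): the requester fires several requests at once and stays free until it chooses to gather the results.

```python
# Illustrative only: ABL's async AppServer requests are approximated here
# with Python futures. submit() returns immediately, so the requesting
# session is not blocked while remote requests are in flight.
from concurrent.futures import ThreadPoolExecutor
import time

def remote_request(service, payload):
    time.sleep(0.1)                 # stand-in for a remote AppServer round trip
    return f"{service}: {payload} done"

with ThreadPoolExecutor(max_workers=4) as pool:
    # Fire requests at (conceptually) several different AppServers at once.
    futures = [pool.submit(remote_request, s, "req")
               for s in ("OrderService", "InventoryService", "CreditService")]
    results = [f.result() for f in futures]   # gather the replies when ready

print(results)
```

The three 0.1-second requests overlap instead of running back to back, which is the whole point of making the calls asynchronous.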

Posted by john on 23-Jul-2007 12:00

> Business Components: Business Task is one that underscores questions about what products and architecture is under this conceptual structure when one gets to implementation, notably the concept of a Business Task as something that is stateless and persistent.

Yes, definitely one of our important ongoing jobs is to associate more explicitly various parts of the Architecture with the growing set of products that can contribute to a solution.

> If we are in the context of a service, as in SOA, then it seems perfectly reasonable that the session would create some task objects and that these might persist so as to be readily available for the next use (back to this in a minute).

> If, however, we are thinking in terms of something running on AppServer, would one persist a task object beyond a single invocation?

Yes, it can often make sense to persist a task object for reuse. Keep in mind that the generally recommended configuration is to set up a pool of stateless AppServer sessions managed by a broker; this looks like one 'connection' to any client sessions running stuff on the AppServer. This is why you can't reasonably save state in memory in some running class or procedure on an AppServer: when the next request from that client comes along, it may go to another instance of that class or procedure running in a completely different session. So state has to be saved where it can be retrieved by any session (such as in a database) or passed back and forth to the requesting client. This also is why it can make sense to persist (that is, leave running) a class or procedure that does some complex job and may be relatively expensive to start up -- it can be there to serve any number of unrelated requests. Since a given class or procedure instance is not saving state in memory on behalf of some particular client, there's no particular need to have more than one copy of the thing running in a given AppServer session -- hence the notion of a 'singleton'.
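
A minimal sketch of that constraint, in Python rather than ABL (the broker, session, and store names are all invented): because the broker may route each request to a different session, per-client state must live in a shared store, while each session can keep a singleton of an expensive-to-start component for its own reuse.

```python
# Hypothetical sketch: state can't live in one AppServer session's memory,
# because the broker may send the client's next request to a different
# session. Anything that must survive between requests goes to a shared
# store (a database, in practice).
import random

shared_store = {}          # stands in for a data store reachable by all sessions

class Session:
    def __init__(self, sid):
        self.sid = sid
        self.singleton_task = None          # one running instance per session suffices
    def handle(self, client_id, request):
        if self.singleton_task is None:     # expensive startup done once, then reused
            self.singleton_task = object()
        ctx = shared_store.get(client_id, {})    # retrieve this client's context
        ctx["last_request"] = request
        shared_store[client_id] = ctx            # persist it back for any session
        return f"session {self.sid} served {client_id}"

pool = [Session(i) for i in range(3)]
for req in ("a", "b", "c"):
    session = random.choice(pool)           # the broker picks any free session
    session.handle("client-1", req)

print(shared_store["client-1"]["last_request"])
```

Whichever sessions happened to serve the three requests, the context survives, because no session ever relied on its own memory for it.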

> Or, are you thinking in terms of running multiple AppServers, each specialized for a particular area so that having each session have a number of utilities running and ready to go could be sensible?

Well, again, each 'session' is just one of potentially many that the broker chooses among to do load balancing, so it isn't useful to have utilities running in a single session. But you could set up numerous pools of AppServer sessions that clients can connect to, each with its own set of responsibilities.

> And, if tasks persist for re-use, what cleans them up? Do you need a task manager which kills off least recently used tasks after a certain time?

Yes, you need a Session Manager of some sort to start things up when they're first needed, keep track of procedure handles or object references to locate them for later requests, and possibly kill them off when they're no longer needed or haven't been used for a while.
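
The shape of such a manager can be sketched in a few lines of Python (the class name and factory protocol are invented for illustration; an ABL version would track procedure handles or object references instead):

```python
# Hypothetical Session Manager sketch: start components on demand, hand back
# the running instance on later requests, and evict least-recently-used
# components past a size limit.
from collections import OrderedDict

class SessionManager:
    def __init__(self, factory, max_running=2):
        self.factory = factory              # how to start a component
        self.max_running = max_running
        self.running = OrderedDict()        # name -> instance, in LRU order

    def get(self, name):
        if name in self.running:
            self.running.move_to_end(name)  # recently used: keep it alive
        else:
            if len(self.running) >= self.max_running:
                self.running.popitem(last=False)   # kill off the LRU component
            self.running[name] = self.factory(name)
        return self.running[name]

mgr = SessionManager(lambda n: f"<running {n}>")
mgr.get("OrderMaintenance")
mgr.get("CustomerMaintenance")
mgr.get("OrderMaintenance")        # reused, not restarted
mgr.get("MonthEndClose")           # evicts CustomerMaintenance, the LRU entry
print(list(mgr.running))           # ['OrderMaintenance', 'MonthEndClose']
```

A categorizing manager, as suggested below, would simply pin the frequently used names and apply the eviction policy only to the month-end style tasks.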

> And, does the task manager categorize tasks? I.e., task A is performed frequently, so leave it running, but task B is performed only at month end, so kill it off when done.

This can make good sense, yes.

Posted by Thomas Mercer-Hursh on 23-Jul-2007 12:08

Ultimately, of course, what is wanted here is a multi-threaded session. If not that, then at least a way in which multiple sessions can interact in a performant fashion, if not with actual exchange of objects, then at least with serialized object data. I can't seem to keep from thinking in these terms since it is so apparent that it is what one should have, but then periodically I have to give myself a kick and say, "Oh, I guess maybe it doesn't matter since there is only one thread". Still, I hate the idea of designing around that limitation, especially since one may well end up publishing some of this stuff as a service. Service adapter ... service ... seems like there ought to be some kind of relationship there.

Posted by john on 23-Jul-2007 12:14

> In Business Components: Business Entity you are perpetuating the structure which I have argued against of separating the Business Entity from the Data Instance, thus separating the data from the logic and failing to encapsulate the data definition which must appear in both the Business Entity and the Data Access object.

Well, the Data Instance, for better or worse, must be passed without its supporting logic, at least when it's passed as a parameter to another session. But you can still encapsulate it as fully as you like within its Business Entity, not allowing direct access to its tables or however else the data is expressed except through a well-defined API. Relating to a prior response, the BE instance (or any other supporting logic) can be left running to support a sequence of data requests that pass through it, so it holds a particular Data Instance (DataSet or other data representation) only on a transient basis.
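
That arrangement can be sketched in a few lines of Python (class and method names invented; in ABL the data instance would be a ProDataSet rather than a dict): the Business Entity exposes only a well-defined API, holds the Data Instance transiently, and ships data without logic across a boundary.

```python
# Hypothetical sketch of the encapsulation described above: the Business
# Entity holds a Data Instance only transiently and never exposes the raw
# rows directly to other objects.

class OrderEntity:
    """Business Entity: the logic lives here; the data instance passes through."""
    def __init__(self):
        self._data = None                  # the transient Data Instance

    def load(self, data_instance):         # e.g. rows produced by the Data Access layer
        self._data = data_instance

    def total(self):                       # well-defined API; no direct table access
        return sum(line["qty"] * line["price"] for line in self._data["lines"])

    def serialize(self):                   # what actually crosses a session boundary
        return dict(self._data)            # data only -- the logic stays behind

be = OrderEntity()
be.load({"order_num": 7,
         "lines": [{"qty": 2, "price": 5.0}, {"qty": 1, "price": 3.0}]})
print(be.total())    # 13.0
```

The same running `OrderEntity` instance could serve a sequence of requests, each time loading, operating on, and releasing a different Data Instance.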

The separation between BE and DAO is a somewhat separate concern. We agree that having the convention of a common definitional include file used by multiple objects is less than ideal, but it's important to separate the object that has to understand how to map to whatever the physical data source is from the object that expresses business logic in a way that is independent of where the data comes from.

> I also think there is an overly simplistic expectation here that one can have a super Business Entity that incorporates all possible logic related to a Data Instance. This misses out on a lot of the potential for inheritance in OO. Take, for example, the classic example of an Order class which has two sub classes InternalOrder and ExternalOrder. How do you construct one Business Entity which covers that? Especially without losing the advantages of inheritance?

The expectation isn't simplistic, but perhaps the examples to date are. There's nothing to keep you from defining various subclassed objects such as InternalOrder and ExternalOrder, each of which inherits some amount of common behavior from a common super class or classes. There can be any number of different classes and procedures that together express some complete body of support. In such a case, the Session Manager could treat InternalOrder and ExternalOrder as independent classes, and the Service Interface for Order could make differentiated requests to one or the other based on the nature of the request.
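
A minimal sketch of that structure in Python (the classes and the dispatch key are invented for illustration): shared behavior in a super class, specialized behavior in subclasses, and a Service Interface that makes differentiated requests.

```python
# Hypothetical sketch of the subclassing described above: common behavior in
# an Order super class, with the Service Interface dispatching to the right
# subclass based on the nature of the request.

class Order:
    def process(self, data):
        return f"validated {data}"          # behavior shared by all orders

class InternalOrder(Order):
    def process(self, data):
        return super().process(data) + ", internal routing"

class ExternalOrder(Order):
    def process(self, data):
        return super().process(data) + ", external billing"

class OrderServiceInterface:
    """Differentiated requests go to one subclass or the other."""
    handlers = {"internal": InternalOrder, "external": ExternalOrder}
    def handle(self, kind, data):
        return self.handlers[kind]().process(data)

si = OrderServiceInterface()
print(si.handle("internal", "order 1"))
print(si.handle("external", "order 2"))
```

Nothing in this arrangement forces a single super Business Entity; the Session Manager can treat `InternalOrder` and `ExternalOrder` as independent classes while they still share inherited behavior.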

Posted by john on 23-Jul-2007 12:30

Not to ignore the rest of this substantial message, but focusing on the final question:

> If packaging things in relational data sets was a be all and end all, why bother adding OO to the language?

Packaging things in relational datasets is certainly not a be-all and end-all. But we still feel that support for DataSets is an important and valuable product feature that hasn't at all been superseded by the support for classes in the language. I'll make the assertion again that having a robust relational way of representing, manipulating and transporting data that in most cases comes from a relational database still has great fundamental value. The support for classes in ABL also has great value for reasons that largely need not conflict with a relational view of (much of) the application data. Classes provide strong typing, definitional inheritance, enforcement of contracts through interfaces, and other advantages. And you can fully encapsulate data held in a temp-table or a DataSet through accessor methods. In the present version of the product, you have to extract the data to pass it to another session as a parameter, but there you can reinsert it into a class that is similarly prepared to provide the proper level of access to it. It still seems like a valuable combination of capabilities.

Posted by john on 23-Jul-2007 12:54

> A question about Business Entity, Business Task, and Transaction. You talk a little about the difference of Managed and Unmanaged and how one can rely on the native ABL transaction for the former.
>
> But, isn't there an important third case where the data is ultimately in a managed data store, but at least part of that data is managed by another service.
>
> In a case like this, the Order Business Entity which contains the Customer data or the Customer Business Entity can start a logical transaction, but it shouldn't actually be starting an ABL transaction. Instead, it needs to adopt one of the mechanisms discussed in your Exchange talk for noticing whether the "transaction" completed normally or not.
>
> So, the only case where it is possible to use an ABL transaction in the Business Entity is where the Business Logic layer and the Data Access layer are on the same machine and where the data is directly, i.e., locally managed by the Data Access layer component....

Yes, this is a big topic that definitely needs more filling out, but basically we can say that the ABL transaction can handle the 'default' case where all the data is 'managed' and locally accessed. Beyond that, the implementation needs to define a mechanism whereby any part of the update can effectively throw an exception if it fails. Parts of a complex update that are under transactional control can be undone; others may need custom support for reversing any changes that have already happened.
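
One common shape for that mechanism is compensating actions, sketched here in Python (this is an illustrative pattern, not something prescribed by the OERA papers; all names are invented): each step that an ABL transaction cannot roll back registers its own reversal, and a failure replays the reversals in reverse order.

```python
# Hypothetical sketch: a logical transaction where some parts are covered by
# a real (e.g. ABL) transaction and others need custom compensating actions
# to reverse changes that have already happened.

def fail():
    raise RuntimeError("ship failed")

def run_logical_transaction(steps):
    compensations = []
    try:
        for do, undo in steps:
            do()
            if undo is not None:
                compensations.append(undo)   # custom reversal for external work
    except Exception:
        for undo in reversed(compensations): # undo already-completed steps
            undo()
        return "rolled back"
    return "committed"

log = []
steps = [
    (lambda: log.append("local db update"), None),   # the ABL transaction covers this part
    (lambda: log.append("charge card"), lambda: log.append("refund card")),
    (fail, None),                                    # a later part of the update fails
]
outcome = run_logical_transaction(steps)
print(outcome)   # rolled back
print(log)       # ['local db update', 'charge card', 'refund card']
```

The local database update would be undone by the enclosing transaction itself; the externally managed charge needs the explicit refund, which is exactly the "custom support for reversing changes" mentioned above.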

Posted by john on 23-Jul-2007 13:10

> It is interesting that pretty much all the way through you have made all of the objects stateless and singletons. In effect, the state is contained in the Data Instance which conveys state on the object in which it resides at the moment.
>
> One of the obvious issues that this raises is that there seems to be no apparent opportunity for caching. ... This might be a complete cache of the data created at startup or it might be an open ended cache based on each Item as it is requested or it might be a fixed size LRU cache.
>
> In your terminology, would you implement such a cache as a special kind of Data Source which the Data Source Object then accesses?

Once again we have to keep in mind that a particular AppServer session cannot realistically hold a cache of data in memory on behalf of clients, whose requests may be routed to other sessions. To create a cache on the AppServer side you have to put it into a shared data store -- something like a database, for instance -- to be accessible by all. If the data came out of a database in the first place there would not be a lot of point in doing that unless the data was substantially massaged in a way that would not already be represented in the database it came from. If the data instead comes from some other service (as in your example, which I elided), then a pool of AppServers performing some set of supporting behavior could cache the data 'closer' to the application service requesters, but again it would have to go into a database or other data store. In this case the cached data could of course act as a Data Source of its own.

If on the other hand you are talking about a client session or other session that needs a cache of data for its own ongoing use, then it could certainly cache data in its own memory -- all the Items or Customers that a user had referenced since the client session started up, for instance -- to avoid having to re-retrieve them (subject to staleness issues and so forth).
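
The client-side variant can be sketched in Python (all names invented; the fetch callback stands in for a round trip to the server): items retrieved once stay in the session's own memory, with a crude time-to-live as one way of bounding staleness.

```python
# Illustrative client-session cache: data retrieved once is kept locally to
# avoid re-retrieval, with a time-to-live to limit staleness.
import time

class ItemCache:
    def __init__(self, fetch, ttl=300.0):
        self.fetch = fetch                 # call out to the server-side service
        self.ttl = ttl
        self.entries = {}                  # key -> (value, fetched_at)

    def get(self, key):
        hit = self.entries.get(key)
        if hit is not None and time.monotonic() - hit[1] < self.ttl:
            return hit[0]                  # still fresh: no round trip needed
        value = self.fetch(key)            # re-retrieve missing or stale data
        self.entries[key] = (value, time.monotonic())
        return value

calls = []
cache = ItemCache(lambda k: calls.append(k) or f"item {k}")
cache.get("A"); cache.get("A"); cache.get("B")
print(calls)    # ['A', 'B'] -- the second lookup of A never left the session
```

Swapping the TTL policy for an LRU bound, or preloading the whole set at startup, gives the other two cache styles mentioned in the quoted question.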

Posted by john on 23-Jul-2007 13:19

> The Data Access pieces raise a couple of questions.

(Major elision for the first of those questions, addressed in a prior response)

> ... Here, though, it seems like everything is going to be limited to the boundaries of a session except perhaps the link between the Presentation Layer and the Business Logic/Components layer.

This may be another case where the limitations of an initial set of samples and the discussion that accompanies them may create false impressions. There is no limitation of a single 'client' session and a single (pool of) AppServer session(s). One object running in an AppServer can access other behavior through other AppServers as needed; a single client session could make requests of multiple AppServers (even concurrently, if the requests are asynchronous).

> This seems to me to be a key architectural problem. In particular, it seems unlikely that, if we are using AppServer, that we want to have to initialize everything every time in order to keep the agent completely general purpose since then there would be a huge overhead in startup with every call.

That's very true (not the architectural problem, I don't think, but the proposed technique of dealing with the situation).

> Instead, it seems like we might want to do something like create a block of agents under each of multiple AppServers in which the agents for each AppServer were dedicated to providing a particular type of service and thus could have pre-initialized a set of stateless components which were those normally needed for supporting that service.

Exactly. Components that support the range of services managed by a particular block of AppServer agents/sessions could be prestarted or could be started on demand by a Session Manager, which then keeps track of them and cleans up as needed.

> Either that or we need a way for the AppServer agents to be very lightweight facades which then pass work on to pre-initialized sessions in some way. What way is the question.

This could also be done, I suppose, in the case where a number of different blocks of AppServer agents have been allocated to different parts of the overall application support and need another level of support themselves.

Posted by Thomas Mercer-Hursh on 23-Jul-2007 13:24

Is this an architecture which PSC is actively promoting? Are people currently building deployments this way? BTW, I think there is some very interesting potential in figuring out ways for pools of agents to interact not quite so independently. Saving information in a local database is certainly one option, but as was mentioned in the ABL Info Exchange, there are interesting cases like large queries where one might actually like to save server-side state. One way to do this would be to have a highly performant inter-session link capability so that one could have sessions which were not agents performing some of these services and the agent could simply link to the service.

Posted by Thomas Mercer-Hursh on 23-Jul-2007 13:40

This separation is easily accomplished by wrapping the data in an object. In fact, I would mind a whole lot less if you advocated creating an object with all the same contents as the current BE, but called it something like the Business Logic Object and passed an object to it which contained little more than the data. My inclination at this point is to have two types of data objects, one for a single instance and one for a set, where the set one is only implemented when there is a business case for it. The single instance uses properties, no TT or PDS, and thus is very lightweight and compact. The set instance uses a TT or PDS depending on the complexity and has methods which will deliver and receive single instance objects as needed. I can't say that I entirely approve of this separation of logic and data, but at least they would both be encapsulated.

Posted by Thomas Mercer-Hursh on 23-Jul-2007 13:54

BTW, John, welcome back from your trip or vacation or wherever it is you have been. You seem to have returned ready to engage, which is a good thing!

I don't think you are getting any argument on this point from anyone, least of all me. I think there are only two points where we differ. One is that I think that a TT or PDS is too heavy an implementation when dealing with a single instance. The other is that any TT or PDS should be encapsulated in objects, not passed on their own. Simple as that. Encapsulate and there is a single point of definition and no coupling. Pass the raw TT or PDS and you need multiple points of definition and one has coupled the two objects.

Posted by Thomas Mercer-Hursh on 23-Jul-2007 13:57

Just a small observation. When discussing BE and DAO,

you seem to argue in favor of using a PDS because of the potential

for crossing a session boundary. Here you seem to argue that the

common case is that the BE and DAO are in the same session where it

is legitimate for the BE to start an ABL transaction. Well, if that

is true, then there is also no problem about passing an object

handle between them either.

Posted by Thomas Mercer-Hursh on 23-Jul-2007 14:09

> To create a cache on the AppServer side you have to put it into a shared data store -- something like a database, for instance -- to be accessible by all. If the data came out of a database in the first place there would not be a lot of point in doing that unless the data was substantially massaged in a way that would not already be represented in the database it came out of in the first place.

One of the ideas which I think would be interesting to explore might work something like this. Suppose a pool of AppServer agents which do work related to Order Processing and another pool which do work related to Inventory, possibly co-located, possibly not. The Order services need to use certain data and actions from the Inventory service, so let's imagine them conceptually linked by an ESB ... details to be forthcoming. Both have their own database. Now, the Order database can also have a set of Item data, but of course, this data is not authoritative. To handle this one might imagine a session which served as an Item facade. It would cache data in the local database and lazy load new items not in the cache and would serve as an interface to the Inventory service for actions so that, for example, when the Order service requested allocation of stock to an Order, the Item facade object would pass the request to the Inventory service, receive the response, and thus update its own cache with new data. On the Inventory end there might be an object which kept track of which items were currently in the Order cache and publish an ItemChanged message if there were updates from another source. There are some details to sort, but the big missing piece here is a mechanism for highly performant communication between the agent and the facade object.
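
The facade idea above can be sketched in Python (everything here is hypothetical: the class names, the in-process publish/subscribe standing in for ESB messages, and the dict standing in for the local cache database): the Order side lazily caches Item data and forwards actions, while Inventory publishes ItemChanged so the non-authoritative copy stays current.

```python
# Hypothetical sketch of the Item facade: lazy-loading cache on the Order
# side, pass-through of actions to Inventory, and an ItemChanged message
# that refreshes the cached (non-authoritative) copy.

class InventoryService:
    def __init__(self):
        self.items = {"A": 100}            # authoritative stock levels
        self.subscribers = []
    def allocate(self, item, qty):
        self.items[item] -= qty
        for cb in self.subscribers:        # publish ItemChanged to interested caches
            cb(item, self.items[item])
    def lookup(self, item):
        return self.items[item]

class ItemFacade:
    def __init__(self, inventory):
        self.inventory = inventory
        self.cache = {}                    # local, non-authoritative Item data
        inventory.subscribers.append(self.on_item_changed)
    def on_hand(self, item):
        if item not in self.cache:         # lazy load on first reference
            self.cache[item] = self.inventory.lookup(item)
        return self.cache[item]
    def allocate(self, item, qty):
        self.inventory.allocate(item, qty) # pass the action through to Inventory
    def on_item_changed(self, item, qty):
        if item in self.cache:
            self.cache[item] = qty         # refresh the cached copy

inv = InventoryService()
facade = ItemFacade(inv)
facade.on_hand("A")        # first reference: loaded into the cache
facade.allocate("A", 30)   # action forwarded; ItemChanged updates the cache
print(facade.on_hand("A")) # 70, served from the refreshed cache
```

The still-missing piece, as noted, is the performant transport behind `allocate` and the ItemChanged callback; in a real deployment that would be the ESB or inter-session link rather than an in-process call.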

Posted by Thomas Mercer-Hursh on 23-Jul-2007 14:13

Are there sites currently using this sort of architecture with multiple AppServer pools and AppServer to AppServer calls? I asked something about this on the PEG some months back and got a fairly negative, though vague response. Is it an architecture you are advocating? If so, I think a whitepaper would be timely.

Posted by Mike Ormerod on 23-Jul-2007 14:56

> Are there sites currently using this sort of architecture with multiple AppServer pools and AppServer to AppServer calls? I asked something about this on the PEG some months back and got a fairly negative, though vague response. Is it an architecture you are advocating? If so, I think a whitepaper would be timely.

I've certainly seen sites do this, and indeed it's something I've done in the past. One common use case is to handle the login process on one AppServer pool, and then, when a user is validated, hand them over to the application pools. This separation is often used to enhance security, as the application DB isn't connected to the login pool, potentially reducing the risk from hacking etc.

Posted by Mike Ormerod on 23-Jul-2007 15:08

> One of the ideas which I think would be interesting to explore might work something like this. Suppose a pool of AppServer agents which do work related to Order Processing and another pool which do work related to Inventory, possibly co-located, possibly not. The Order services need to use certain data and actions from the Inventory service, so let's imagine them conceptually linked by an ESB ... details to be forthcoming.

There is also a different approach that wouldn't need the ESB, regardless of whether the AppServers are co-located or not, just as long as you can connect to them. You define an AppServer to handle the Service Interface and Business Tasks (or a workflow, as that's what we'd be handling if we have one or more BEs). As part of the task performing its job, it could happily call the relevant BEs on the associated AppServers. In this configuration the data would be authoritative, as the task would be orchestrating against the relevant AppServers.

So my main point is that there are multiple ways to architect and deploy this type of arrangement; it will very much depend on what type of connection capabilities you have and how you orchestrate across them.

Posted by Thomas Mercer-Hursh on 23-Jul-2007 15:08

Are you talking about one login pool and one application pool, or multiple application pools? Does the user connect to N application pools as needed? Does one pool make calls on other pools?

This is certainly something I would like to know more about in terms of concrete examples and performance.

Posted by Thomas Mercer-Hursh on 23-Jul-2007 15:11

Certainly, AppServer to AppServer calls would be another alternative, but it seems to me that when one is clearly getting to the service to service communication, the inclination is strong that this is the right place for an ESB ... assuming that we get past the licensing problem, of course ... even if it is on the same box.

Posted by john on 24-Jul-2007 14:21

> When discussing BE and DAO, you seem to argue in favor of using a PDS because of the potential for crossing a session boundary.

Not particularly, though that's certainly possible. The primary motivation in separating BE from DAO is to separate the part of the application that has to know about physical data mapping from everything else. Crossing a session boundary is something that seems more likely between the BE and whoever requested the data or some activity on the data in the first place, and since we have optimized temp-tables and DataSets for that purpose, and made that data transfer and retention of side information like relationships fairly consistent for OE clients, .NET clients, and Java clients, this is a main reason why we show DataSets as the typical data representation. That and being able to convert easily back and forth to XML (with or without schema information).

> Here you seem to argue that the common case is that the BE and DAO are in the same session where it is legitimate for the BE to start an ABL transaction. Well, if that is true, then there is also no problem about passing an object handle between them either.

Yes, you can pass an object handle between them, but how do you organize the separation of physical data access logic -- data sources and all that -- from the rest?

Posted by john on 24-Jul-2007 14:24

> BTW, John, welcome back from your trip or vacation or wherever it is you have been. You seem to have returned ready to engage, which is a good thing!

I've been around generally, but distracted with other things. And meanwhile our LDAP service seems to have lost track of my full name. Is it vain of me to expect that people will know who 'john' is?

Posted by john on 24-Jul-2007 14:28

> If one is running an ERP application with 15 different modules and using a single large pool of AppServer agents, then it seems unlikely that one will want to leave a module specific to AR instantiated in any one agent since the next client to use that agent might be running payroll. If, however, one had 15 different pools, then the likelihood of re-use would be dramatically higher.

I'm not sure how actively people take advantage of this design capability, but it is certainly intended. A large application certainly can and probably should divide up the labor among multiple AppServer pools with different responsibilities. This is, after all, why asbroker1 is not the only possible AppServer name. This may indeed be something we need to promote and explain better.

Posted by john on 24-Jul-2007 14:57

> When you are crossing a session boundary or going across the wire, yes, you can't pass logic. While this seems like a limitation, I'm not sure it is such a bad thing since preliminary indications seem to be, at least for transmission across a network, that the serialized data can actually be transmitted more efficiently than even a PDS or TT. But, this certainly doesn't apply within a session.

I'm not sure what you mean here by serialized data. We serialize DataSets and temp-tables as we pass them across the wire (we have to, of course, to pass them across a generic network connection). If you represent data strictly as a set of properties -- and there's nothing at all wrong with doing this per se -- then you have to take charge of the serialization, as well as expressing all the business logic against those properties, rather than allowing a certain degree of internal access within the business logic to the data in a relational format -- while encapsulating it from the rest of the world by not exposing temp-tables or DataSets directly to other objects.

> In fact, I would mind a whole lot less if you advocated creating an object with all the same contents as the current BE, but called it something like the Business Logic Object and passed an object to it which contained little more than the data.

Well, now I'm a little confused, because precisely what a Business Entity is is a business logic object (we could call it that, but people would giggle when we abbreviated it...) and we do pass an object to it which contains little more than the data -- either a DataSet or some other representation (such as XML) that suits your purposes.

The one part of the architecture where the raw data is concerned is that there has to be a design discipline that the raw DataSet object (if that's what you use) can't be messed with directly as it is passed between the components that encapsulate it in different ways for different purposes. The DAO applies one type of logic to it (physical to logical mapping and validation logic requiring knowledge of the physical data source). The BE (or task or whatever) applies another category of logic to it which treats it strictly in terms of its logical in-memory definition. A client is likely to apply another logic variant to it.

But the temp-table or DataSet format allows for relational manipulation of the data from within the business object that holds it, allows for optimized serialization across the wire to known requesters (whether OE, .NET, or Java), allows conversion to and from XML for use in a broader SOA or for other forms of manipulation, and, when it gets to a client that wants to use the data in a UI, allows for easy mapping of the data elements to display fields and grids and the like.

Maybe part of the disconnect between us is that we think of the passing of the data between sessions as being very central to the design and something to be optimized for (in terms of simplicity, not just speed alone), and you are thinking more of the interaction between objects running in the same session? Within a session, I agree completely that data should be encapsulated wherever it is held -- and that encapsulation is done through BE's and other objects that can provide access to the data through methods and properties.

> My inclination at this point is to have two types of data objects, one for a single instance and one for a set, where the set one is only implemented when there is a business case for it.

This could be a very valid approach. Use cases such as 'browsing all orders for a customer' are not exceptional but do require special treatment.

> The single instance uses properties, no TT or PDS, and thus is very lightweight and compact. The set instance uses a TT or PDS depending on the complexity and has methods which will deliver and receive single instance objects as needed.

This is fine if that's how you want to represent things. But a TT with one row and schema information turned off when it's transmitted as a parameter is about as lightweight and compact as you can get -- it's basically just the bits in the row. And again, if you want to pass data to another session using any other technique, you have to do all the work of serializing and deserializing it.

Posted by Thomas Mercer-Hursh on 24-Jul-2007 14:59

It seems to me that there is some kind of communication gap here, although I'm not sure of the source. We certainly have no difference of opinion about the virtue of separating BL and DA ... either of us pointing that out to the other is preaching to the choir. The question is, what mechanism does one use.

It seems to me that we have two cases. One in which the DA and BL are local to each other. To me, this is the normal case since, even if the actual data is coming from a remote system, one would have a local façade object serving as the proximate source. If the source is in the same session, there is no obstacle to the exchange being nothing more than passing a handle. This is achieved either by passing a PDS with the appropriate qualifiers or by passing an object, possibly an object encapsulating a PDS. There is really no difference in efficiency here, it is just that passing an object encapsulates the data so that the definition doesn't need to reside in more than one object. Using an object also provides the option of using an object with properties for the single instance case, which is very lightweight.

The other case is where the DA and BL are not local to each other. I don't think this makes sense with an ESB in between since, as noted, I still think there should be a local façade object, so the connection across the ESB will not happen between the DA and BL layers. So, the only case where I see a possible separation is if the BL is in the client and the DA is in the AppServer. I can't say that is a design which I would advocate in any case, but even if I were inclined to create a heavy client with its own BL layer, I would again use a façade object to provide the DA layer in the client. In this case, we currently have no option to send an object across the wire, but in the interest of minimizing traffic, I'm not sure that we actually want to send any more than data anyway. Thus, the only reason to prefer a PDS over XML for this transmission would be that the PDS is more compact. This is testable, but it seems that the more functionality you put into the PDS, i.e., the less it is just default methods, the more you are sending than just the data. And, after all, if the PDS has some complex FILL() logic, why would one want that on the client, where there is no database?
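The compactness claim is indeed testable. A rough sketch of such a comparison, assuming hypothetical procedure names and an established AppServer handle:

```abl
/* Rough timing sketch (getOrders.p / getOrdersXml.p are
   hypothetical): compare a ProDataSet output parameter against
   an XML round-trip for the same result set. */
DEFINE VARIABLE lcXml  AS LONGCHAR NO-UNDO.
DEFINE VARIABLE iStart AS INTEGER  NO-UNDO.

iStart = ETIME(TRUE).   /* reset the millisecond timer */
RUN getOrders.p ON hAppServer (OUTPUT DATASET dsOrder).
MESSAGE "PDS parameter:" ETIME "ms".

iStart = ETIME(TRUE).
RUN getOrdersXml.p ON hAppServer (OUTPUT lcXml).
DATASET dsOrder:READ-XML ("LONGCHAR", lcXml, "EMPTY", ?, ?).
MESSAGE "XML round-trip:" ETIME "ms".
```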

Yes, WRITE-XML and READ-XML are cool things. It would be very nice if you provided the same functionality for object properties. But, it is also dead simple code to write or generate.
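For contrast, here is the built-in PDS serialization alongside the kind of hand-written equivalent for a property object that the "dead simple" remark suggests (the ToXml method and Customer class are illustrative, not an existing API):

```abl
/* The built-in serialization the thread refers to: */
DATASET dsOrder:WRITE-XML ("FILE", "order.xml", TRUE).

/* A hand-rolled equivalent on a hypothetical property object --
   trivial to write or to generate: */
METHOD PUBLIC CHARACTER ToXml ():
    RETURN SUBSTITUTE("<Customer><CustNum>&1</CustNum><CustName>&2</CustName></Customer>",
                      CustNum, CustName).
END METHOD.
```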

To me, it seems that if you have the PDS definition on both sides of the DA/BL barrier, then you have more coupling between layers than if you pass an encapsulated object between layers. The only ambiguity about layers here ... if you want to call it that ... is that the data object passes from one layer to the next rather than existing only in one layer, but I don't see how one can consider that to be more ambiguous than passing a pseudo-object, i.e., a PDS, especially since the definition of the PDS needs to exist on both sides.

Posted by Thomas Mercer-Hursh on 24-Jul-2007 15:58

Well, I knew who you were, but it might be nice for anyone new to see the full name. Probably one of those impossible to fix things, though.

Posted by Thomas Mercer-Hursh on 24-Jul-2007 16:01

I think both a whitepaper and a case study would be useful (could be combined). It would be especially interesting in combination with ESB. In fact, I think it could be very interesting to do a whole series of case studies where a particular architectural challenge has been tackled well. I would think the consulting folks could come up with some fairly easily.

Posted by Thomas Mercer-Hursh on 24-Jul-2007 16:44

Well, now I'm a little confused, because precisely what a Business Entity is is a business logic object. We do pass an object to it which contains little more than the data -- either a DataSet or some other representation (such as XML) that suits your purposes. And, as I have said many times, I have no problem whatsoever with using these language features where appropriate, i.e., inside an object which encapsulates their behavior. If you are considering the BE and the PDS as separate "objects", then a very fundamental OO principle is violated because the BE is aware of the internal structure of the PDS. Wrap that PDS in an object and all it knows about is the contractual signature.

I think there are two separate principles or questions here - optimized serialization and communication with other technologies. On the communication front it is certainly true that a PDS provides a simple mechanism for exchange with these other technologies, although my understanding is that their implementation is a bit of a poor cousin, so it isn't all quite as simple as it seems. But, it certainly requires knowing what the client is. If the goal is client-agnostic communication, clearly XML is the better choice. As for optimization, I suppose one has to grant that TT and PDS are optimized in terms of required lines of code (although you could erase that difference by providing these methods on objects), but I think it is a bigger question of whether they are optimized in terms of performance in the real-world context. Anecdotal evidence from Greg Higgins suggests that serializing a PDS to XML, transmitting the XML, and deserializing back to a PDS at the other end was actually faster than sending the PDS over the wire. It is possible this depends on the complexity of the PDS. In any case, it is certainly testable. But, it is only an argument in favor of sending a PDS if the difference is in favor of the PDS and if the difference is meaningful.

Well, given that I have been thinking in terms of ESB-like architectures for a good dozen years now, I don't think that I am stuck thinking in terms of a single session. I am certainly aware of the difference and that different rules may apply. But, I think that in any session in which there is BL, there should also be a DA layer, even if that layer is a façade for a remote source. If that is followed, then the interface between BL and DA is never remote.

But your examples don't encapsulate the data ... they require the data structure to be identically defined in two different places at a minimum. Wrap that PDS in an object and pass the object and that characteristic disappears. You still would have the data and logic in different objects, but that is a different argument and one with some more complex issues.
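A sketch of what "wrap that PDS in an object" might look like, assuming illustrative class, table, and method names:

```abl
/* Sketch: the ProDataSet lives privately inside the object, so
   its structure is defined in exactly one place.  Callers see
   only the contractual signature, never the PDS internals. */
CLASS OrderData:

    DEFINE PRIVATE TEMP-TABLE ttOrder NO-UNDO
        FIELD OrderNum AS INTEGER
        FIELD Total    AS DECIMAL.

    DEFINE PRIVATE DATASET dsOrder FOR ttOrder.

    METHOD PUBLIC VOID AddOrder (piOrderNum AS INTEGER, pdTotal AS DECIMAL):
        CREATE ttOrder.
        ASSIGN ttOrder.OrderNum = piOrderNum
               ttOrder.Total    = pdTotal.
    END METHOD.

    /* Serialization stays encapsulated with the data it serializes */
    METHOD PUBLIC LONGCHAR ToXml ():
        DEFINE VARIABLE lcXml AS LONGCHAR NO-UNDO.
        DATASET dsOrder:WRITE-XML ("LONGCHAR", lcXml).
        RETURN lcXml.
    END METHOD.

END CLASS.
```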

Posted by Mike Ormerod on 25-Jul-2007 08:07

> But your examples don't encapsulate the data ... they require the data structure to be identically defined in two different places at a minimum. Wrap that PDS in an object and pass the object and that characteristic disappears.

So at the risk of sounding obtuse, why don't you provide the community with an example of what you're proposing? I for one would be curious to see this as an actual example.

Posted by Thomas Mercer-Hursh on 25-Jul-2007 11:19

Providing some framework and models is very much on my schedule, but I'm a bit busy at the moment with the ABL to UML project, so I don't know when I will get to it.

Which said, look at virtually any OO development in any language anywhere and what you will find is domain objects which encapsulate data and logic. It is doing anything else which needs the careful defense and scrutiny.

This thread is closed