Settling on an architecture

Posted by Alon Blich on 05-Apr-2007 22:16

My architecture for idiots (I like this name for many reasons)

The main points guiding the design of the architecture are -

First and foremost, it should be easy to understand and simple to write. It has to be practical before anything else.

Secondly, it needs to be flexible and suitable for real-world challenges.

The main design decisions -

1. DATA-SOURCE object will not be used in the implementation.

All the preTransaction, postTransaction, etc. validation hooks will also not be in the implementation. Like the DATA-SOURCE object, I think they're mostly an unneeded complication, and they remind me too much of the things I didn't like in ADM2.

In general, the dataset will be filled by one single query, and changes will be saved using a single procedure containing all the validation logic needed.

There will be one general-purpose fetchData() method and possibly several fetchDataByXXX() methods that use a fixed join order and indexes, mostly for use with browse widgets (or similar) where scrolling is needed.

There will be one general-purpose saveChanges() method and possibly several specialized operation methods, for example deleteXXX(), newXXX(), etc.
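As a rough sketch, the fetch side of such a BE might look something like this (the Customer table and every name here are illustrative only, borrowed from the Progress sports database; a matching saveChanges() is sketched later in this thread):

    /* beCustomer.p - hypothetical business entity sketch (super procedure) */

    DEFINE TEMP-TABLE ttCustomer NO-UNDO LIKE Customer.
    DEFINE DATASET dsCustomer FOR ttCustomer.

    PROCEDURE fetchData:
        /* one general-purpose fetch: a single query fills the dataset */
        DEFINE INPUT  PARAMETER pcWhere AS CHARACTER NO-UNDO.
        DEFINE OUTPUT PARAMETER DATASET FOR dsCustomer.

        DEFINE VARIABLE hQuery AS HANDLE NO-UNDO.

        EMPTY TEMP-TABLE ttCustomer.
        CREATE QUERY hQuery.
        hQuery:SET-BUFFERS(BUFFER Customer:HANDLE).
        hQuery:QUERY-PREPARE("FOR EACH Customer NO-LOCK " + pcWhere).
        hQuery:QUERY-OPEN().
        REPEAT:
            hQuery:GET-NEXT().
            IF hQuery:QUERY-OFF-END THEN LEAVE.
            CREATE ttCustomer.
            BUFFER-COPY Customer TO ttCustomer.
        END.
        hQuery:QUERY-CLOSE().
        DELETE OBJECT hQuery.
    END PROCEDURE.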

2. There will only be BE's; DAO's will not be used in the implementation. The BE will also be in charge of mapping the internal view to the physical store and hiding the physical implementation.

Although it is a good theory that there can be several DAO's for different types of data stores, for example Oracle, SQL Server, or even unmanaged data, I really don't see us using this, and almost no other 4GL application runs against anything but a Progress database.

Other reasons for using DAO's, like using several data sources, don't seem reason enough to justify their use. The BE does a good enough job of abstracting and localizing the physical implementation, promoting code reuse, etc.

3. The security and context management will mostly be moved to the interface layer. The BL will for the most part not have its own security or context built in.

The way I figure it is -

With a GUI interface the users are mostly (I know, not always) in a controlled environment on a LAN; there isn't much chance of hacking inside this zone. So I don't think the AppServer access needs its own beefed-up security.

With a Web interface all users access the application through the interface (and also through a single entry point, i.e., web-disp.p), and I don't see a need to independently protect the BL running either locally or on a separate AppServer. And likewise for web services. I mean, you usually put security at the entrance.

4. Because of that, a Service Adapter/Interface will not be used in the implementation. BE's will either be run locally or remotely through a proxy making dynamic calls to their direct full BE (similar to the Dynamics implementation). BE's will be super procedures.

In the reference implementation the BL is presented as a much bigger and almost independent part of the application, with the UI as a small, thin shell. In this implementation, I think, the BL, well, only manages the data, and the UI, which is the menus and lots and lots of screens, is still a very big or even the biggest part of the application.

5. Dedicated reporting and BI will mostly be done using SQL views, stored procedures, etc. The biggest problem, though, is the lack of experience in the 4GL community with SQL and the need to possibly acquire different skills.

We will also depend heavily on Microsoft Excel for the reporting presentation. We've been using Excel's advanced reporting features increasingly in the past year and we've been very happy with the results, not least due to the ubiquity and familiarity of the product and the attitude users have towards it.

It's also a much less expensive alternative than the enterprise reporting tools on the market. Practically free, since almost all the intended users already own Microsoft Office.

I believe that even though it is different from the reference implementation, it does for the most part adhere to the OERA and its intended goals.

Appreciate any responses, be kind

All Replies

Posted by Admin on 06-Apr-2007 02:25

I think this approach of storing everything in a single entity works for a particular scenario, but not all. And when you have a complex entity, you can always delegate work internally to other classes; the caller doesn't have to know about that.

So besides your entity, there must be more class types in your architecture. Think about static data.

Posted by Admin on 06-Apr-2007 02:29

All the preTransaction, postTransaction, etc. validation hooks will also not be in the implementation.

So what if you want to implement a user interface that checks the state of the data before it's saved? So when you enter a product code during order entry, you want it to pick up defaults for the order line. Where do you put this logic? When you enter a product, where do you validate that the product is a valid product (there can be some business rules associated that define whether a product is valid in the order context)?

Posted by Admin on 06-Apr-2007 02:31

3. The security and context management will mostly be moved to the interface layer. The BL will for the most part not have its own security or context built in.

Who will be responsible for filtering data:

- user XX is not allowed to see invoices of customer YYY

- user XX is not allowed to use all price agreements

Posted by Admin on 06-Apr-2007 02:34

4. Because of that, a Service Adapter/Interface will not be used in the implementation. BE's will either be run locally or remotely through a proxy making dynamic calls to their direct full BE (similar to the Dynamics implementation). BE's will be super procedures.

This means that a business entity will get a huge responsibility. The idea of OO is to think in responsibilities. Try to do a CRC-session (http://www.csc.calpoly.edu/~dbutler/tutorials/winter96/crc_b/) to find out if your entity is doing too much...

Posted by Admin on 06-Apr-2007 02:39

2. There will only be BE's; DAO's will not be used in the implementation. The BE will also be in charge of mapping the internal view to the physical store and hiding the physical implementation.

It could still delegate the actual data access to a dedicated data access component and that data access component could implement all queries required by the entity. The shift is that you will use one data access component instead of multiple. But you could do the latter if you want to, when you want to fetch code values for instance. This way you can potentially generate the data access components, so your business entities are slim(mer) by default...

You should also consider implementing a dual interface:

- a dataset/temp-table return value

- a query return value for reporting

This way you can use the same component for reporting or batch jobs as well.
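As a rough sketch of what that dual interface could look like (all names here are illustrative, and the query flavor of course only makes sense when the component runs locally, since a query handle can't be returned across an AppServer boundary):

    DEFINE TEMP-TABLE ttCustomer NO-UNDO LIKE Customer.
    DEFINE DATASET dsCustomer FOR ttCustomer.

    PROCEDURE fetchDataset:
        /* for the UI: a dataset/temp-table return value */
        DEFINE OUTPUT PARAMETER DATASET FOR dsCustomer.

        EMPTY TEMP-TABLE ttCustomer.
        FOR EACH Customer NO-LOCK:
            CREATE ttCustomer.
            BUFFER-COPY Customer TO ttCustomer.
        END.
    END PROCEDURE.

    PROCEDURE fetchQuery:
        /* for reporting or batch jobs: a query return value */
        DEFINE OUTPUT PARAMETER phQuery AS HANDLE NO-UNDO.

        CREATE QUERY phQuery.
        phQuery:SET-BUFFERS(BUFFER Customer:HANDLE).
        phQuery:QUERY-PREPARE("FOR EACH Customer NO-LOCK").
        phQuery:QUERY-OPEN().
        /* the caller walks the query and deletes it when done */
    END PROCEDURE.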

Posted by Alon Blich on 06-Apr-2007 09:16

So what if you want to implement a user interface that checks the state of the data before it's saved? So when you enter a product code during order entry, you want it to pick up defaults for the order line. Where do you put this logic? When you enter a product, where do you validate that the product is a valid product (there can be some business rules associated that define whether a product is valid in the order context)?

I didn't say there isn't validation logic.

The common, dynamic save/commit-changes method with sets and sets of validation hooks is just too elaborate.

Instead, a simple, straightforward procedure can traverse the changes in the dataset and save/commit the changes to the physical store. All the validation logic needed for this operation will be a part of the procedure, wherever needed.

I'm not saying validation logic can't be modularized into internal procedures or methods if it's being repeated. There can be standalone validation methods that can be called from the UI or another BL object if needed. And there will be some very basic validation logic in the UI.
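For example, a straightforward saveChanges() along these lines might look something like this sketch (an Order table with an OrderNum key and OrderDate field is assumed purely for illustration):

    DEFINE TEMP-TABLE ttOrder NO-UNDO LIKE Order
        BEFORE-TABLE bttOrder.
    DEFINE DATASET dsOrder FOR ttOrder.

    PROCEDURE saveChanges:
        DEFINE INPUT-OUTPUT PARAMETER DATASET FOR dsOrder.

        DO TRANSACTION:
            /* deleted rows are found in the before-table */
            FOR EACH bttOrder WHERE ROW-STATE(bttOrder) = ROW-DELETED:
                FIND Order EXCLUSIVE-LOCK
                    WHERE Order.OrderNum = bttOrder.OrderNum NO-ERROR.
                IF AVAILABLE Order THEN DELETE Order.
            END.

            FOR EACH ttOrder:
                /* the validation logic sits right here, where it's needed */
                IF ttOrder.OrderDate = ? THEN
                    UNDO, RETURN ERROR "Order date is required.".

                IF ROW-STATE(ttOrder) = ROW-CREATED THEN DO:
                    CREATE Order.
                    BUFFER-COPY ttOrder TO Order.
                END.
                ELSE IF ROW-STATE(ttOrder) = ROW-MODIFIED THEN DO:
                    FIND Order EXCLUSIVE-LOCK
                        WHERE Order.OrderNum = ttOrder.OrderNum.
                    BUFFER-COPY ttOrder EXCEPT OrderNum TO Order.
                END.
            END.
        END.
    END PROCEDURE.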

Posted by Alon Blich on 06-Apr-2007 09:33

Who will be responsible for filtering data:

- user XX is not allowed to see invoices of customer YYY

- user XX is not allowed to use all price agreements

Security will be in the interface, as it always has been. I'm not saying everything we used to do building applications was good, but this works like a charm.

Who can enter which menus and screens, what functionality is off limits to whom, etc., is implemented in the interface.

Of course, there can always be small exceptions to this rule, but in general security is done at the entrance and not all over the place.

Context will be managed for stateless interfaces, like web UI and web services.

Why is the BL in the OERI (and it's also hinted at in the OERA) almost a separate, standalone application with its own separate security and context management?

Posted by Alon Blich on 06-Apr-2007 09:35

This means that a business entity will get a huge responsibility. The idea of OO is to think in responsibilities. Try to do a CRC-session (http://www.csc.calpoly.edu/~dbutler/tutorials/winter96/crc_b/) to find out if your entity is doing too much...

If anything it will be simpler, much simpler and will only deal with, well, handling the data.

The BL would not be a separate application.

Posted by Alon Blich on 06-Apr-2007 09:48

It could still delegate the actual data access to a dedicated data access component and that data access component could implement all queries required by the entity. The shift is that you will use one data access component instead of multiple. But you could do the latter if you want to, when you want to fetch code values for instance. This way you can potentially generate the data access components, so your business entities are slim(mer) by default...

You should also consider implementing a dual interface:

- a dataset/temp-table return value

- a query return value for reporting

This way you can use the same component for reporting or batch jobs as well.

One can do many things, but without a real need for several physical stores or non-Progress data sources, it is a very expensive, unneeded complication.

All the unneeded objects that would have to be written, and all the attach/detach data sources, callbacks, and on and on and on, that at the end of the day will not have a real use.

In regards to passing queries for reporting: 4GL query objects (the 4GL object) are so limited that there are just too many real world cases where they couldn't be used.

In general, I believe, BE's can be used for the transaction processing part of the application, maintenance screens, etc., including basic reporting, generating forms, etc.

Dedicated reporting and BI will be done using SQL. As things currently stand, 4GL/ABL is just not suited for real world reporting and BI needs (although it is very well suited for transaction processing).

Posted by Admin on 06-Apr-2007 11:27

You should also consider implementing a dual interface: a dataset/temp-table return value, and a query return value for reporting.

In regards to passing queries for reporting: 4GL query objects (the 4GL object) are so limited that there are just too many real world cases where they couldn't be used.

You can return a class instance representing a resultset, which wraps the query. The result would be a typed query with buffer properties and a Next()-method.
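Something along these lines, perhaps (a sketch only; the Customer table and Name field are just examples, and this needs 10.1's class support):

    /* CustomerResultSet.cls */
    CLASS CustomerResultSet:

        DEFINE PRIVATE BUFFER bCustomer FOR Customer.
        DEFINE PRIVATE QUERY  qCustomer FOR bCustomer.

        CONSTRUCTOR PUBLIC CustomerResultSet (INPUT pcWhere AS CHARACTER):
            QUERY qCustomer:QUERY-PREPARE(
                "FOR EACH bCustomer NO-LOCK " + pcWhere).
            QUERY qCustomer:QUERY-OPEN().
        END CONSTRUCTOR.

        /* advance the cursor; FALSE when the resultset is exhausted */
        METHOD PUBLIC LOGICAL Next ():
            QUERY qCustomer:GET-NEXT().
            RETURN NOT QUERY qCustomer:QUERY-OFF-END.
        END METHOD.

        /* a typed accessor standing in for the "buffer properties";
           only valid while Next() returns TRUE */
        METHOD PUBLIC CHARACTER GetName ():
            RETURN bCustomer.Name.
        END METHOD.

        DESTRUCTOR PUBLIC CustomerResultSet ():
            QUERY qCustomer:QUERY-CLOSE().
        END DESTRUCTOR.

    END CLASS.

A report would then just NEW the object, loop DO WHILE oResult:Next(), and DELETE OBJECT it when done.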

Posted by Admin on 06-Apr-2007 11:29

Security will be in the interface, as it always has been.

I hope you don't mean "user interface" here, do you? Otherwise you're exposing your entities through the AppServer and anybody can use them without any security applied. That doesn't seem right.

Posted by Admin on 06-Apr-2007 11:31

Why is the BL in the OERI (and it's also hinted at in the OERA) almost a separate, standalone application with its own separate security and context management?

Because it's an aspect of the application, a feature. And you can plug in a different implementation, like LDAP authentication, table based authentication, etc.

Posted by Alon Blich on 06-Apr-2007 11:59

If one needs to plug the business logic into another application, why not do it through the integration layer?

Posted by Alon Blich on 06-Apr-2007 12:14

I hope you don't mean "user interface" here, do you? Otherwise you're exposing your entities through the AppServer and anybody can use them without any security applied. That doesn't seem right.

Yes. That's what I said.

The case where users have, possibly, access to the AppServer would be GUI. In most cases this would be on a LAN, in a controlled, trusted environment, or at least in our case it would.

It would mean that there would need to be some hacker that we don't know of inside this zone, with a 4GL development package, who would somehow probe the BE's and make calls to the AppServer.

To me it seems like a seriously unnecessary, disproportionate overhead to develop the BL as a separate application with its own security, context management, etc.

Besides, if there is a hacker inside this zone, and especially one who knows his way around Progress, there are so many other things he could do, even outside of Progress.

Posted by Alon Blich on 06-Apr-2007 12:14

You can return a class instance representing a resultset, which wraps the query. The result would be a typed query with buffer properties and a Next()-method.

That is a very interesting idea!

Posted by Mike Ormerod on 06-Apr-2007 12:27

I hope you don't mean "user interface" here, do you? Otherwise you're exposing your entities through the AppServer and anybody can use them without any security applied. That doesn't seem right.

Yes. That's what I said.

The case where users have, possibly, access to the AppServer would be GUI. In most cases this would be on a LAN, in a controlled, trusted environment, or at least in our case it would.

It would mean that there would need to be some hacker that we don't know of inside this zone, with a 4GL development package, who would somehow probe the BE's and make calls to the AppServer.

To me it seems like a seriously unnecessary, disproportionate overhead to develop the BL as a separate application with its own security, context management, etc.

But what happens when you have the stateless clients you mentioned earlier, web services etc.? Where do you apply security then? If it's in a different place, you now have two security components to maintain.

Posted by Alon Blich on 06-Apr-2007 12:33

I don't know, I haven't implemented a web services interface.

But why not do the security and context management in the interface layer?

The consumer logs in and, possibly, a session context is saved with the user's or other information and privileges.

Using the consumer's login privileges, the interface layer can decide which features and functionality will be available to him.

Instead of using a login, maybe some form of key can be passed with each request that will identify the consumer to the interface layer.
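For example, the interface layer could hand out a key at login and check it on each request. Something like this sketch (all names are invented; in a real stateless setup the context would live in a database table rather than a temp-table, and GUID/GENERATE-UUID need a recent 10.1 release):

    DEFINE TEMP-TABLE ttContext NO-UNDO
        FIELD ContextKey AS CHARACTER
        FIELD UserId     AS CHARACTER
        FIELD LastUsed   AS DATETIME
        INDEX idxKey IS PRIMARY UNIQUE ContextKey.

    PROCEDURE login:
        DEFINE INPUT  PARAMETER pcUser AS CHARACTER NO-UNDO.
        DEFINE OUTPUT PARAMETER pcKey  AS CHARACTER NO-UNDO.

        /* authentication itself is assumed to have happened already */
        pcKey = GUID(GENERATE-UUID).
        CREATE ttContext.
        ASSIGN ttContext.ContextKey = pcKey
               ttContext.UserId     = pcUser
               ttContext.LastUsed   = NOW.
    END PROCEDURE.

    PROCEDURE checkRequest:
        /* every request carries the key; unknown keys are rejected */
        DEFINE INPUT PARAMETER pcKey AS CHARACTER NO-UNDO.

        FIND ttContext WHERE ttContext.ContextKey = pcKey NO-ERROR.
        IF NOT AVAILABLE ttContext THEN
            RETURN ERROR "Unknown or expired context key.".
        ttContext.LastUsed = NOW.
    END PROCEDURE.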

Posted by Thomas Mercer-Hursh on 06-Apr-2007 13:04

From what follows, I take it that you are referring here to the component type, not the language feature. Moreover, I am going to assume that you are referring specifically to the component type as it is defined and used in AutoEdge. You seem to have a bit of a fixation on that. Myself, I would like to step back a bit from the specifics of the AutoEdge implementation and the particular way in which the objects are partitioned there and ask ourselves what we need for a real world implementation.

To me, in that context the concept of a data source component is to decouple the object assembly from the actual source of the data. This seems like a worthy idea in and of itself, but I think you are correct that it may have some problems with more complex objects. But, let's consider the case of orders with optional item and customer data as we have been discussing. There is some core set of data which we expect to live in the Order service, i.e., the stuff actually stored in the order, order-line, and associated tables. Let's set aside for a moment whether there is actually one data source component for all the tables in the Order database or multiple, and focus on the item and customer data. This is data that we don't always need because much of the time we have all the information we need within the Order service itself. Thus, if we are going to load this only some of the time, it seems reasonable to structure it as obtaining a set corresponding to a list. I.e., fetch all the order related data in the target set and then ask the source of the additional item data for all items contained in that set of orders. This seems to me something that would be reasonably efficient and doesn't require that we do the join to the item data during the load of the order data. In fact, if the item data is remote, it would be far better to make one request for a list of items than to issue N requests, one for each line, many of which were possibly redundant, although we might avoid that with caching. Even with caching, though, we would have I separate requests for I items instead of a single request covering all the items at once. In this context, I can see it being useful to have a data source component for items so that one could use the same component for assembly and use either the local or the remote version of the item source depending on whether the data was local or not.

Now, let's return to the question of the data in the order tables. On the surface, it would seem that "interesting" queries require that we have only one data source (or no data source) components for all the connected order tables that we are going to assemble. But, what happens if we take an approach similar to that of the item data? E.g., suppose that we have a complex query such as wanting all orders which contain a specific item that were placed within a given date range. This sounds like exactly the sort of thing that we would want a query optimizer to do for us. And, it sounds like it would be inefficient to make the two queries separately, i.e., all order headers in the range and all order lines with those items, and then having to fit them together and go back and get all the other order lines for those headers. But, suppose the queries were not in parallel. Suppose we got the order headers, made a list, and then did a query to the order line component for orders in that list which include that item? To be most efficient, this does require some guessing about which query is going to be the most restrictive and doing that first. But, it does seem like it might be a way to handle some pretty hard queries.

One of the secondary virtues of the OERA approach is the increased ability for programmers to be specialists, e.g., UI designers, DA builders, BL writers, etc. So, include a SQL person in the mix. Yes, a broader range of skills required overall makes it tough on a one person shop, but that is modern computing. I sure couldn't have satisfied my users with Excel. But, I think before we can go very far in considering whether or not to have data source components, we need to decide what we are building --- are we building data-only structures like PDS or TT and passing those around the application ...
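To make the "set corresponding to a list" idea concrete, a sketch (the table names, the item source handle, and fetchItemsByList are all hypothetical):

    DEFINE TEMP-TABLE ttOrderLine NO-UNDO LIKE OrderLine.
    DEFINE TEMP-TABLE ttItem      NO-UNDO LIKE Item.

    DEFINE VARIABLE hItemSource AS HANDLE    NO-UNDO. /* the item data source */
    DEFINE VARIABLE cItemList   AS CHARACTER NO-UNDO.

    /* build one list of the distinct items used by the loaded order lines */
    FOR EACH ttOrderLine:
        IF LOOKUP(STRING(ttOrderLine.ItemNum), cItemList) = 0 THEN
            cItemList = cItemList
                      + (IF cItemList = "" THEN "" ELSE ",")
                      + STRING(ttOrderLine.ItemNum).
    END.

    /* one request for all the items instead of N requests, one per line */
    RUN fetchItemsByList IN hItemSource
        (INPUT cItemList, OUTPUT TABLE ttItem).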

Posted by Thomas Mercer-Hursh on 06-Apr-2007 13:09

Because if the security is implemented there, you can bolt on any one of a half dozen different UIs and/or "enterprise service" requests and the security will be enforced by a single common set of code. Otherwise you have to implement the security in each one.

Posted by Thomas Mercer-Hursh on 06-Apr-2007 13:12

One only writes the components one needs. If they are packaged well, then when one needs a different flavor, e.g., moving data to a different computer and needing a remote interface, then one only has to write the missing component, not rewrite the ones that were created before. I really think that there must be something about AutoEdge specifically that has you spooked into thinking that all of this is really, really complicated. I really believe that once we sort through it, not only will it be very straightforward, but it will actually be fairly easy to generate the whole data access layer and you won't have to write a thing.

Posted by Alon Blich on 06-Apr-2007 13:21

Because if the security is implemented there, you can bolt on any one of a half dozen different UIs and/or "enterprise service" requests and the security will be enforced by a single common set of code. Otherwise you have to implement the security in each one.

That helped me understand.

But wouldn't you still need the same security features in the presentation layer? For example, which menus, screens, and maybe controls, etc. a user has access to?

Another thing is that, for the most part, I think, UIs like GUI, Web, and web services are not just variations of the same thing.

They usually expose very different functionality and have different security and context needs.

Posted by Alon Blich on 06-Apr-2007 13:28

I think that I will probably start with a design that I can get a handle on.

After the project I'll set aside some time for some serious soul searching and continue from there.

Posted by Mike Ormerod on 06-Apr-2007 13:30

Sure, which is why if you look at the component diagram included in the quick ref guide (http://www.psdn.com/library/entry.jspa?externalID=2123&categoryID=54) you will see that it shows 'Common Infrastructure' not only in the server (or Service Provider), but also in the client (or Service Requester). So your specific client security model would be implemented within the CI layer of the client.

Posted by Thomas Mercer-Hursh on 06-Apr-2007 13:31

No, I see no reason to implement security in the UI. Remember the WUI example where the connection is stateless. You have to feed it only valid and authorized data.

I think the security rules are universal and a matter of policy. The way those rules are presented may vary according to the UI.

Remember too that it isn't just UI. You are also exposing services for use by other applications and those have to obey the same security rules. If you publish a service that will dump the contents of the financial statements, you can't prevent a request coming in for that service, but you can sure decide whether it is going to someone authorized to see it!

Posted by Mike Ormerod on 06-Apr-2007 13:32

The component diagram I just mentioned is quite straightforward, and if you stripped it down to the bare essentials and didn't, for example, break the presentation layer into MVC or MVP, you could have very few component types.

Posted by Admin on 06-Apr-2007 18:56

I'm not convinced on the need for a BE and a DA. I also find confusion and inconsistency in what I find in the BE and DA in AutoEdge. For example, attaching the data source (a topic of this discussion) is done in the BE for saveChanges and in the DA for fetchWhere. It is certainly recognised, as in the BE the run of addDataSource is done in the DA. This does not avoid the confusion in the developer's head: now where is the data source attached this time? I don't really like having to point to another procedure to attach the data source either, but that's another topic.

In my framework I can have a generic fetch and save procedure, but even the specific procedures have a small footprint: uncompiled 2K and compiled 9K. Most of the code is supered, and you override if you want something different. Now that's without your validation and other business logic.

When a procedure starts you can flag what other procedure runs (if any) and whether it is persistent. So you can decide if you have a BE and DA and whether each is persistent. For example, you can have beXXX.p, beXXXpersist.p, daXXX.p and daXXXpersist.p, or you can just have beXXX.p if it's a small procedure and you want no DA. This can be decided on an object by object basis (although you may like consistency), and the naming is not fixed, so you can have XXXbe.p and have variations in the call. As a framework this allows the developers to choose if they want business and data access separated, and it also allows you to super procedures that have a lot of code and/or may be run frequently. Also, the data source connection, procedure names and persistency are stored, which means they can be changed as needs change without changing code, and can be site specific.

Posted by Thomas Mercer-Hursh on 06-Apr-2007 19:16

Well, certainly one of our problems here is vocabulary. Let me propose some nomenclature so that we have a chance of keeping straight who is advocating what.

In the AE DOH/ROH model, we have:

- BEL - an entity component that lives in the BL layer, implements the BE logic, and contains a dataset for the BE data.

- DAA - a data access "assembly" component which builds the dataset from the data obtained from the sources.

- DS - a data source component, which gets data, but doesn't build it into sets.

- BEPDS - a PDS containing the BE data which is moved to and from the DAA and BEL.

In the "true" OO model some of us have been discussing we would have:

- BEO - an object encapsulating both the data and logic of a BE.

- DAO - an object responsible for obtaining and persisting BEOs.

- DSO - a possible, but not universally agreed, object which would isolate the DAO from the actual source of the data.

Now, it doesn't seem to me in either of these cases that one wants to do away with the distinction between BEL and DAA or BEO and DAO, because the former of each is a component which either lives in (BEL) or is used in (BEO) the BL layer, while the DA* components are in the DA layer. If you make a BE component able to in essence instantiate itself from the database, you have seriously broken the principle of layering. Or, did you mean something else?

I don't dispute some confusion and "uncleanliness" in the current AE implementation ... but I think what we are trying to figure out here is what it should be, not what it is. We know it isn't the way we want it now. It may be we have to fork the discussion between SP and OO approaches, although I think that what constitutes good structure should apply in both places. I just think it is a lot cleaner in the OO version.

Posted by Admin on 06-Apr-2007 23:15

As I have mentioned in other discussions, we have been thrown in the deep end, using an architecture that we are not familiar with, nor have had any instruction/guidance in. As I'm usually at the forefront, I often have to substantiate certain practices. In the case of the BE and DA layering, I'm called upon to justify the need for such layering and provide advice on what goes into which layer.

I think the reason why I raise the inconsistencies is what is being targeted here. That is, until we have properly defined the reasons for the layers and what goes into each layer, it's difficult to justify the layering.

As I've started into the object approach, I can see the need to ensure that everything to do with an object is done within the object. For example, in my framework I have tables that are common to other systems, e.g., users, menus, system parameters/controls, etc. Now for these tables I have ensured that the users of my framework can use their own tables instead of my framework tables. However, to achieve that, all access to, say, users must be done through the user object. Once I can see a real need, I can justify it and am much more committed to the approach.

While eliminating all direct data access from the BE sounds nice, what advantage do I get by that? Most developers are not planning to suddenly swap to another database, and even if they were, I'd think there would be a lot more work than just separating the data access. Do all databases scope transactions like Progress and have word indices?

Most developers I come across can see the extra effort in splitting out the data access, but none so far have come up with good reasons to layer the code.

Posted by Alon Blich on 07-Apr-2007 05:42

In regards to BE and DAO's.

The architecture does separate the internal view from the physical store.

And, for example, changes in the database schema will not affect the rest of the application; they will only have to be dealt with in the affected BE/s, similar to the DAO's in the reference implementation.

BUT if we will NEVER have multiple DAO's, one for Oracle, one for SQL Server, or even for flat files, and there will always be exactly one DAO for every BE, why not do the mapping in the BE to begin with?

IMO, all these DATA-SOURCE (I mean the language element) attach, detach, callbacks, all those validation hooks, elaborate dynamic implementations, etc. are an incredible, unneeded complication, and in some cases they can't even do the job.

I think it's a nice theory, but I don't write theories, I write applications. I feel that things are only getting more and more complicated this way, not coming together. Usually when I write code and things only get more and more complicated, it means I'm going the wrong way.

IMHO, after the whole ADM saga that went on for over a decade, Progress should have sent the team on a well-deserved vacation and then gone into some serious soul searching. And one of the things that should have come out of it is that they need to write code that people will use.

Posted by Alon Blich on 07-Apr-2007 06:40

we have been thrown in the deep end in using an architecture that we are not familiar with nor have had any instructions/guidance in

I certainly share that feeling.

After hearing about OERA, seeing samples and reference implementations, attending the web events, and reading some of the articles and documentation for, I think, the past two years, it still doesn't make sense and I don't understand all of it. I'm not sure a lot of people do, and not only that, but apparently it still has some way to go.

Now that I will be in charge of designing an architecture in a few months' time, I'm certainly not going to be following it. And I believe it is going to lead to quite a few bitter, otherwise excellent, developers, failed projects, and lost jobs for those who were led this way. That's what I believe. It's not really hard to accept; after all, it's exactly what happened with ADM.

Keep it simple.

Posted by Thomas Mercer-Hursh on 07-Apr-2007 13:21

One advantage, of course, is that you insulate the form used by the application from the stored form, allowing you to change the stored form as needed. A classic example is moving from 5 digit zip to 9 digit zip. As long as you provide a new method for the 9 digit form, all of the parts of the application that need and expect only 5 digits can continue to do so without change, but the one place that needs the 9 can use the new method.

While many people don't start off intending to switch databases, the need does arise. There are many Progress APs who have done Oracle and/or SQL Server ports of their applications because of market demand, even though they knew that what they already had was actually superior. Think how much easier that could have been if the data access logic was all encapsulated and they knew that they didn't need to pay attention to 70% of the code at all. While it may be extra effort to take an existing unlayered application and layer it, creating a layered application is not more work, it is less.
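In sketch form, using the getZip()/getZip9() names that come up later in this thread (ttAddress and ZipCode are invented names; the stored field now holds 9 digits):

    DEFINE TEMP-TABLE ttAddress NO-UNDO
        FIELD ZipCode AS CHARACTER. /* now stored with 9 digits */

    FUNCTION getZip RETURNS CHARACTER ():
        /* the legacy 5-digit view: existing callers keep working unchanged */
        RETURN SUBSTRING(ttAddress.ZipCode, 1, 5).
    END FUNCTION.

    FUNCTION getZip9 RETURNS CHARACTER ():
        /* the one place that needs all 9 digits uses the new method */
        RETURN ttAddress.ZipCode.
    END FUNCTION.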

Posted by Thomas Mercer-Hursh on 07-Apr-2007 13:25

So, for the moment, back away from the implementation and focus on the concept. We can decide later about the implementation. Anything one can do with ProDataSets one can do with "naked" ABL, one just doesn't get any default behavior for free. Personally, I think that much of the trouble with PDS has been a combination of a maturing technology, i.e., trying it out when it wasn't as mature as it is now, and of not being really clear up front what the behavior should be. It seems likely that PDS can add value, but let's figure that out when we are sure what we want to do.

Posted by Admin on 07-Apr-2007 17:57

Unfortunately, I think that in this case some rough edges in the AE model have resulted in you questioning the whole OERA principles rather than focusing on how to remove the rough edges.

For my part that hasn't been the case. I developed my framework on the earlier examples by John Sadd. I have recently upgraded my framework as I saw advantages in some of the features in AutoEdge.

The example I gave of allowing a DA or other procedure to be run persistent or not was an enhancement. So my framework provides for layering, but it is not restricted in the number of levels, persistence, or naming.

One thing I did that I didn't mention was that I supered the started or found procedure, so that I don't need to run in a DA handle. Apart from the layering, this allows me to persist large and/or frequently run procedures yet have them appear as if they were one procedure. It also allows me to overlay procedures.

Generally I adopt much of OERA and AutoEdge. In using it I have found problems and limitations that I have had to overcome (or flag to be addressed in the future). I have also found limitations in the ABL, such as getting the saved current-change fields and values that I mentioned earlier.

A classic example is moving from 5 digit zip to 9 digit zip.

This doesn't mean much to me here in Oz, but I can guess. But maybe my guess is wrong, because I can't see how a BE/DA layer helps with that. In a different environment you would have to run a whole different DA. In my case I can overlay part of another procedure (by altering a table) where necessary, and not carry the extra complexity of another layer in case I may need it.

Most of my clients develop for their own needs. However, I have seen takeovers where this may be an advantage, but in many instances they either continue with two applications or one is phased out. It's hard to justify the costs of layering in case of takeovers.

For any AP this may be a different matter. However, as I raised in another post, I wonder if the layering has been proven, what problems were encountered, and whether the layering was any real advantage.

I'd like to hear of more examples of the benefits of layering.

Posted by Thomas Mercer-Hursh on 07-Apr-2007 18:38

Interesting... did you see the recent thread on PEG in which Tom Bascom was expressing his belief that almost no one did that? I'm not sure what the significance of a takeover is unless you are thinking that the acquiring company may want to use a different database. Where I have heard the most about database shifting is from APs who discover that there is a strong preference or requirement in their target market for Oracle or SQL Server as a corporate standard, and one just doesn't sell them an application using anything else.

I'm not sure that we are going to be able to come up with a list of advantages to layering if the things we have said already haven't had any meaning. In a way, it is a philosophical issue, not unlike believing that one of the virtues of OO is strong encapsulation and compile time checking of signatures. If your reaction to that is "so what", then I guess that's your reaction. But, I think it has been well demonstrated that it leads to increased code re-use, greater consistency, ease of maintenance, ability of programmers to specialize on component types and become more proficient, overall lowered cost of development, good use of established patterns, etc. Let me turn it around ... what are the advantages of not working this way?

Posted by Admin on 07-Apr-2007 19:28

That seemed to be the apparent conclusion

The da/be split is just one area that I have trouble with as there has been no clear justification.

Merely running super after super doesn't constitute layering

No it doesn't in itself, but my framework allows you to layer, to just separate into logical units that can be reused, or to just have one layer.

where some particular component actually lives

You can still run in a handle if you wish. At the moment I have the option to persist a procedure - I could also add one to super it.

no potential for compile time checking that signatures match

Once it's in another procedure you can't do compile checks? Unless they are functions and you have a predefined .i, and I wouldn't have thought you'd put a DA .i in a BE - at least I haven't seen it.

getZip9()

But getZip() and getZip9() can both be in the BE. I still don't understand the reason for the BE/DA split for this.

Most of my clients develop for their own needs

Most of my clients, and the clients of other consultants that I know, are government or in the manufacturing arena. Products such as creditors, payroll, and general ledger are generic, but in their own particular area of expertise they pay to have their own application software. Even though there may be similar organizations, they run differently and are also protective of their own methods and software. So as consultants we have to be careful, moving from client to client, that we don't take another client's software.

Let me turn it around ... what are the advantages of not working this way?

I discuss this with other developers that I'm associated with, and the general feeling is that they can't see why they can't check directly when validating, rather than having to call a procedure/function in the DA where a database access is required. I'm a little short on time this Easter Sunday morning, but by splitting, extra coding and procedures/functions are required, and developers are asking why this extra coding, complexity, and documentation is necessary.

Miles

Posted by Admin on 08-Apr-2007 01:11

I was thinking about one aspect while replying and have tested it since. If I had a BE, BEsuper, DA, DAsuper, and each one supered the one above, and remembering that in reality we want BEsuper recognized as BE and DAsuper as DA, but we also want anything in BE to be able to call anything in DA - you can't invoke something from BE to DAsuper. I can make them session supers, but I don't think that's appropriate.

Supering BEsuper to BE and DAsuper to DA and running any DA in a handle from BE would be better. Unless there is another way that it can be done easily and generically.

Miles

Posted by Admin on 08-Apr-2007 03:28

Let me turn it around ... what are the advantages of not working this way?

Just remembered another concern expressed by a developer over layers: the extra overhead they carry. Systems are fast these days and speed is not so much a concern. However, at times it does become important, such as when we are competing against other machines, as on a production line. In addition to the effect on speed, there's the added complexity, such as when trying to debug problem code, because of speed or otherwise.

Miles

Posted by Thomas Mercer-Hursh on 08-Apr-2007 12:23

And, one of the points is that it is not extra coding; it is less coding. Right now, it seems complex because it is unfamiliar, but in reality, once one becomes used to it, it is actually simpler because roles are more defined and encapsulated. And, one doesn't end up writing the same validation code over and over again.

Posted by Thomas Mercer-Hursh on 08-Apr-2007 12:24

Well, of course I wouldn't use supers at all, but would do it with objects.

Posted by Thomas Mercer-Hursh on 08-Apr-2007 12:32

Speed can be a concern in modern architectures, especially when one starts distributing them around networks. The key is in designing your coupling correctly. If one is running across the network constantly, then sure, one is likely to have performance issues. But, there are techniques to get around that.

If you have a pile of supers and especially if aspects of that pile are dynamic, then I can see where you run into additional debugging complexity. Take the same logic and encapsulate it in object and the complexity goes way down because you have isolatable units which can be separately tested.

Posted by Admin on 08-Apr-2007 20:10

AE implementation to be a real BE

I don't really understand what an AE is.

just as mushy with objects

While I might have certain views I'm providing flexibility so that the user developer can decide how he wants to work.

Myself, I'd like to see all these .is go away

Hear, hear!

locking yourself into a deployment architecture

Currently this is on an AppServer with a DB connection, which is where I expect to be. This goes to the basic question on the BE/DA split. A few years ago we were told to split the client from the DB access because of AppServer, i.e., there was a reason. With the BE and DA, we are on an AppServer. Is there a potential that the BE moves to the client or elsewhere?

validate against a local cache

As above, we could very well be on a client. Also, a cache means populating it and keeping it refreshed. We go back to the rereadnolock. If we only cache as required, then why cache?

not extra coding; it is less coding

In our case it hasn't been less coding, and that's why we need to understand it better. It does happen with object calls, but not with the basic stuff like validation. All you seem to do is make calls to an object DA to get a cache.

At the moment each BE and DA are coupled. The way I have reduced coding is to have super functions that are available to any object.

Posted by Admin on 09-Apr-2007 05:24

I was thinking about one aspect while replying and have tested it since. If I had a BE, BEsuper, DA, DAsuper, and each one supered the one above, and remembering that in reality we want BEsuper recognized as BE and DAsuper as DA, but we also want anything in BE to be able to call anything in DA - you can't invoke something from BE to DAsuper. I can make them session supers, but I don't think that's appropriate.

A better "super" is the class based approach. Why? Well:

- it offers you compile time support

- it gives you a clear picture where things run

- you can define interfaces

- you have single inheritance, so you know where in the stack something is implemented.

So even if you don't want to do OERA, you still might want to consider class-based programming, since it will make your application more robust...
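A tiny example of what that buys you (the validator names are invented, and interfaces need 10.1B or later):

    /* IValidator.cls */
    INTERFACE IValidator:
        METHOD PUBLIC LOGICAL IsValid (INPUT pcValue AS CHARACTER).
    END INTERFACE.

    /* PhoneValidator.cls */
    CLASS PhoneValidator IMPLEMENTS IValidator:
        METHOD PUBLIC LOGICAL IsValid (INPUT pcValue AS CHARACTER):
            /* the compiler checks this signature against the interface;
               a RUN into a super procedure is only resolved at run time */
            RETURN LENGTH(pcValue) >= 7. /* placeholder rule */
        END METHOD.
    END CLASS.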

Posted by Admin on 09-Apr-2007 05:32

There are many Progress APs who have done Oracle and/or SQL Server ports of their applications because of market demand.

How do you know this? Do you have any figures backing this up? Do you know if they successfully managed to support Oracle/SQL Server? Do you know if they are able to successfully deploy their single application on multiple database targets? What was their application architecture: was it really a layered application, or did they just port small parts of the code and leave the majority of the code as is?

While it may be extra effort to take an existing unlayered application and layer it, creating a layered application is not more work, it is less.

And it would be even less work if the ABL were more declarative about this.

Posted by Admin on 09-Apr-2007 05:41

A classic example is moving from 5 digit zip to 9 digit zip. As long as you provide a new method for the 9 digit form, all of the parts of the application that need and expect only 5 digits can continue to do so without change, but the one place that needs the 9 can use the new method.

I really doubt that this kind of change will be the bottleneck in your application release schedule. Changing the back end will most of the time be required by the introduction of a new feature. And new features require changes.

I can see the other way around though, when you originally stored something in a normalized table structure, but later on you decide it's better to store an XML-blob or vice versa. This can be done in the DA without affecting the BE, unless some other BE aggregates this data directly.

In general I think you want to isolate the data access in a layer, so you can test and specialize that layer. You will get a clean separation of concerns. It's like a car: we rely on a gas station to provide us fuel, the car doesn't come with a nuclear powered engine as a total package. This way the gas stations can specialize themselves and cars can be lean and mean.

Posted by Admin on 09-Apr-2007 06:02

A better "super" is the class based approach

It possibly is, but I'm unfamiliar with an OO approach. Apart from learning, the reason for my questions is my framework, and a client that has taken the AutoEdge approach.

From my framework point of view: firstly, I have to learn and practice OO. Secondly, I'm not aware of any OO developers, so who would use my framework? Any potential users would also have to be convinced on the OO track and learn it themselves.

I'd have the same problems with my client. I couldn't convince them to trial and test AutoEdge.

Having said that I'll be interested in seeing what Thomas comes up with.

Miles

Posted by Thomas Mercer-Hursh on 09-Apr-2007 11:08

AE = AutoEdge

Which is certainly not getting you ready for OO.

Posted by Thomas Mercer-Hursh on 09-Apr-2007 11:12

Figures, no, but I do know of examples, and yes, they regularly deliver on alternate databases as a fully supported solution. But, no, I doubt that many, if any, achieved this through layered code, but rather did it the hard way, converting what was necessary.

Posted by Thomas Mercer-Hursh on 09-Apr-2007 11:14

While there has been a limited amount of stuff come out in OO thus far, my rumor mill suggests that there are some substantial projects under way.

Posted by Admin on 09-Apr-2007 16:52

it certainly means managing the cache

I wasn't sure if you meant a real cache or a temporary copy for the current purpose. I'm not as concerned about using a client cache, as I know the server code will keep things in check. But the client AppServer is up 24*7, and even fairly fixed codes change. You'd have to have some sort of messaging service to advise a refresh. It's reasonably acceptable to advise your client users to re-login, or to provide a refresh option, but even that is only reasonable for tables that rarely change.

At the client, the tables that we are currently using don't refer to many master tables. Most references, apart from system control records, are to tables that you couldn't cache, i.e., transaction tables.

Access to system control is via a "DA" super, i.e., some supers only do non-DB work, such as validating phone numbers and emails, while others are database lookups.

Taking the system control, it is referenced in virtually every server procedure to avoid hard coded references. The system control could be an object on its own, but using the AE approach that means all the data-sources, queries, fills, etc. As we group these procedures together into a "DA" super, I can't see the benefits of objectifying tables such as system controls.

I accept that keeping an object's code together in an object can reduce code, be efficient, and make for better understanding. But there should be exceptions, such as control records and validations that are used extensively across many tables.

I don't like the cache approach.

Miles

Posted by Thomas Mercer-Hursh on 09-Apr-2007 17:29

Caching is a complex subject and one we have talked about some in other threads here on PSDN. There is certainly no one approach that works for all tables or objects within an application and different sites and architectures will have their own implications for best possible strategies.

One of the first big realizations, for me at least, is the recognition that a distributed architecture pretty much requires thinking in terms of caching in some way. This can be a tough hurdle for people who are used to the idea of validating everything directly to the database, which seems so delightfully absolute, but if you think about it, any form of optimistic locking is, in effect, doing updates against a cached version of the record with the hope that the updates will still be valid by the time they are committed. Seems a bit like playing roulette, but the fact of the matter is that it works quite well and has scalability potential which one just can't achieve without it.

Back in the mid-90s, when I was first getting serious about distributed architectures, I was quite taken by a structure they used in Forté code where a component would get a copy of a record or a table from the source and then would register for a table or record changed event. This meant that a client could happily go on using a local copy of the tax codes, for example, knowing that, if a change was posted, the source object would notify everyone to refresh. The beauty is that there is no traffic at all unless there is a change.

I now think that this might be a bit more anal retentive than it needs to be, but I think it needs some experimentation in the context of SOA/ESB. One variation might be to establish time-to-live parameters like those used with DNS entries, so that some tables are automatically refreshed once a day, others once an hour, and so on. If these policies are known by the people who change the codes, then they can simply say, "that code will be ready for use in an hour". I don't see anything wrong with that.
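A time-to-live cache along those lines could be as simple as this sketch (ttTaxCode, the AppServer handle, and fetchTaxCodes are illustrative names):

    DEFINE TEMP-TABLE ttTaxCode NO-UNDO
        FIELD TaxCode AS CHARACTER
        FIELD Rate    AS DECIMAL.

    DEFINE VARIABLE hAppServer  AS HANDLE   NO-UNDO.
    DEFINE VARIABLE dtLastFetch AS DATETIME NO-UNDO INITIAL ?.
    DEFINE VARIABLE iTTLms      AS INTEGER  NO-UNDO INITIAL 3600000. /* 1 hour */

    PROCEDURE getTaxCodes:
        DEFINE OUTPUT PARAMETER TABLE FOR ttTaxCode.

        /* refresh only when the cached copy has outlived its TTL */
        IF dtLastFetch = ? OR NOW - dtLastFetch > iTTLms THEN DO:
            RUN fetchTaxCodes ON SERVER hAppServer (OUTPUT TABLE ttTaxCode).
            dtLastFetch = NOW.
        END.
    END PROCEDURE.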

One of the other topics we talked about on another thread is the notion of a shared cache. E.g., suppose I am working on the Order service, which is on its own server. One of the things I need on a regular basis is Item data, and let's suppose that is in the Inventory service on a different server. Now, one of the things I can do is to create a local cache of that Item data on the Order service and create a process which keeps it refreshed. Then, all the processes on the Order service machine can use that data as if it were local. This might include disciplines such as: when I post an update, such as committing stock to an order, the confirmation message includes the new values of the updated fields, and these are used to update the cache. One could also use a product like DataXtend to keep this data current ... once they release DataXtend for Progress databases, that is, but I expect that to be soon.

Depending on context, one can cache transaction tables, btw. For example, if I am going through a series of processing steps moving an order through from order taking to shipping, I might hit a series of points where there is a complete transaction, and I would then update any dependent associations, but there is no reason for me to go read a fresh copy of the order I already have.

On the contrary, the more it is used, the more sure I am that I only want one copy.

Posted by Admin on 09-Apr-2007 21:41

Have you looked at http://www.oehive.org/PseudoSingleton ?

I did glance at it, but as I'm not familiar with OO, I didn't get into it. Does it indicate why a class is better than a super?

the more sure I am that I only want one copy

With a super there is only one copy. This is true for all the standard validation outside an object, whether that be system controls that involve DB access, or phone, contact and email validation, etc.

If we put it in an object, at the moment there could be more than one copy, as we are not getting objects down to the field/column level.

Miles

Posted by Thomas Mercer-Hursh on 09-Apr-2007 22:37

Object vs Super is perhaps a topic on which we might start a fresh thread in the OO forum, but to me the big distinction is encapsulation and the compile time checking. With the technique I described, you end up with a reference, in each place the procedure is used, to NEWing the object. This gives you the compile-link checking and the analytical link which is missing with supers, where you mostly just have to know that this is where the call resolves ... although check out John Green's CallGraph for helping to give you a clue.

At some level, one can say that is six of one and a half dozen of the other, but the object approach is ultimately far cleaner and more traceable and in time that will pay significant maintenance dividends.
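In other words, instead of relying on an implicit super in the session, each user of the service holds its own reference (SysControl and GetValue are invented names for the sketch):

    DEFINE VARIABLE oSysControl AS SysControl NO-UNDO.

    oSysControl = NEW SysControl().
    /* the compiler can now verify this call exists and check its signature */
    MESSAGE oSysControl:GetValue("InvoicePrefix") VIEW-AS ALERT-BOX.
    DELETE OBJECT oSysControl.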

Posted by Admin on 10-Apr-2007 05:01

From my framework point of view: firstly, I have to learn and practice OO. Secondly, I'm not aware of any OO developers, so who would use my framework? Any potential users would also have to be convinced on the OO track and learn it themselves.

That sounds like a reasonable motivation. On the other hand, how do those developers figure out what's going on with all those super-procedures that are loaded transparently? There is no compile time glue when you look at the code. The advantage of a class based approach is that it's easier to see the call stack and the inheritance tree (which methods are overridden). You don't have to worry about super-procedures. I think you can benefit from these features in your framework even if you don't want to expose the actual classes to your users. And using an object model is easier than you think: haven't you ever used a "foreign object model", something like Outlook, Word, Excel, or any other COM component?

Posted by Tim Kuehn on 10-Apr-2007 08:00

That sounds like a reasonable motivation. On the other hand, how do those developers figure out what's going on with all those super-procedures that are loaded transparently? There is no compile time glue when you look at the code.

That depends on the way the super-procedure structure's set up. The approach I used with my procedure manager makes it quite evident which SPs and PPs are used by a given module, and it also enables compile-time checking of function parameters and signatures. (Procedures aren't done because the ABL compiler doesn't check procedure signatures.)

The advantage of a class based approach is that it's easier to see the call stack and the inheritance tree (which methods are overridden).

On the other hand, there will be times when you don't want methods overridden, and figuring out which lower-class method is available at deeper levels of inheritance can be a real pain.

You don't have to worry about super-procedures.

With the procedure manager, you don't have to worry about SPs OR PPs.

I think you can benefit from these features in your framework even if you don't want to expose the actual classes to your users. And using an object model is easier than you think: haven't you ever used a "foreign object model", something like Outlook, Word, Excel, or any other COM component?

Using a "object model" in a procedure-managed application is quite easy as well.

Posted by Thomas Mercer-Hursh on 10-Apr-2007 13:53

Which is the reason for the FINAL keyword.

Posted by Admin on 10-Apr-2007 16:44

On the other hand, how do those developers figure out what's going on with all those super-procedures that are loaded transparently?

In my framework there is one client super and two server supers. I learnt from ADM2 not to have a million supers, and secondly, ProDataSets do some of the work for you. Having said that, my framework's V9 code was much the same. The other supers would be created by the user developers if they want to work that way. I'm just supering the DA procedure to avoid having to keep the handle to it and use it. Because of the run problem I was raising, I may have to review that.

Yes, I have done some complicated stuff reading from and writing to all of those "foreign object models", and that doesn't encourage me.

My concern is the lack of OO developers, and as such your comment above on the supers applies to the OO code.

Miles

Posted by Thomas Mercer-Hursh on 10-Apr-2007 16:57

This sound like "we

haven't been doing OO, so we shouldn't start doing OO". Certainly,

any young whipersnapper with a university degree will have done

some OO and so will a lot of other people who have been doing Java

and .NET frontends. OO is no different than many other things that

have changed in the ABL world over the years. A little mentoring, a

bit of paying attention to forums, a bit of reading, and the next

thing you know what to do ... at least if you listen to the right

people.

Posted by Admin on 10-Apr-2007 17:13

any young whippersnapper

Unfortunately most I know are old fogies like me.

Posted by Thomas Mercer-Hursh on 10-Apr-2007 17:20

Yeah, well who's leading the bandwagon for OO here? And I started doing paid development in 1966 ... when I was 20 ... so old fogies can OO too. OOOF?

This thread is closed