Your view of development: The journey of separation

Posted by jmls on 18-Mar-2010 05:52

This post has been created in response to the massive ongoing debate about OO performance, and some of the discussions within it regarding M-S-E, PABLO and so on.

I'm trying to throw away 20 years of thinking, some old, some new. I want to pretend that I am a brand new ABL developer (who, mysteriously, knows the language inside out: ABL, OO and .NET), and am asking for help and advice on this theoretical journey. I'm also going to be playing devil's advocate a lot of the time, but without actually saying when. Hopefully this will lead to a reasoned debate where new developers can understand the reasons for doing what we are doing, not just blindly following the latest trend of the year.

I have a requirement for a report that needs to run on a scheduled basis, and ad hoc as and when needed. This report has two options:

1) Customer group to use. Customer Group is a table containing a GUID and a name. The CustomerGroupLink table holds the group GUID and customer GUID, which leads to the customer table.

2) Date range to use (a char variable that is used to calculate a start/end date range). This could be "Yesterday", "Last Week", "Last Quarter" etc.
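Those range phrases have to be resolved to concrete dates somewhere. As a rough sketch (in Python rather than ABL, with an invented `resolve_range` helper and only three phrases covered), the mapping might look like:

```python
from datetime import date, timedelta

def resolve_range(phrase, today=None):
    """Map a range phrase like 'Yesterday' or 'Last Week' to (start, end) dates.
    The phrase list is illustrative; a real implementation would cover more cases."""
    today = today or date.today()
    phrase = phrase.strip().lower()
    if phrase == "yesterday":
        d = today - timedelta(days=1)
        return d, d
    if phrase == "last week":
        # Monday..Sunday of the previous ISO week
        start = today - timedelta(days=today.weekday() + 7)
        return start, start + timedelta(days=6)
    if phrase == "last quarter":
        q = (today.month - 1) // 3            # current quarter, 0-based
        first_month = (q - 1) % 4 * 3 + 1     # first month of the previous quarter
        year = today.year - 1 if q == 0 else today.year
        start = date(year, first_month, 1)
        end_month = first_month + 3
        end_year = year + (1 if end_month > 12 else 0)
        end_month = (end_month - 1) % 12 + 1
        return start, date(end_year, end_month, 1) - timedelta(days=1)
    raise ValueError("unknown range phrase: " + phrase)
```

The point of isolating this in one function is that both the scheduled run and the ad hoc run resolve the phrase the same way.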

Basically the report is meant to look through all sales of items to these customers within the date range and aggregate the figures together, summarising by customer and then totalling at the customer group.
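The summarise-by-customer, total-by-group requirement can be sketched as follows (again Python rather than ABL, with invented field names and canned data standing in for the sales table):

```python
from datetime import date

# Hypothetical flattened rows: (group_guid, customer_guid, sale_date, amount)
sales = [
    ("g1", "c1", date(2010, 3, 1), 100.0),
    ("g1", "c1", date(2010, 3, 2), 50.0),
    ("g1", "c2", date(2010, 3, 3), 75.0),
    ("g2", "c3", date(2010, 2, 1), 20.0),  # outside the date range below
]

def report(sales, start, end):
    """Summarise sales by customer, then total per customer group."""
    by_customer, by_group = {}, {}
    for group, cust, when, amount in sales:
        if not (start <= when <= end):
            continue
        by_customer[(group, cust)] = by_customer.get((group, cust), 0.0) + amount
        by_group[group] = by_group.get(group, 0.0) + amount
    return by_customer, by_group

cust_totals, group_totals = report(sales, date(2010, 3, 1), date(2010, 3, 31))
print(cust_totals)   # {('g1', 'c1'): 150.0, ('g1', 'c2'): 75.0}
print(group_totals)  # {'g1': 225.0}
```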

Old way

======

15 years ago, we would create a window/dialog, put in a browser with a CRUD toolbar, and add a "run" button. The report itself would be an internal procedure, using the screen-values of the variables, or *gasp* parameters (v6, anyone? :).

10 years ago, we would split the report from the UI, so that the report could be run without the UI

5 years ago, we would separate out the DB from the UI, and use temp-tables on the UI side, passing the TT back and forth

0 years ago I am really confused

We seem to be heading down the road of "let's create as many separate objects as possible in order to confuse the hell out of the elder generation, and make it impossible for the new one to get to grips with any of it".

From what I can gather, we need UI, BL, and data layers. Oh, and all sorts of objects to deal with these layers. So, 15 years ago we had a single program; now we need at least 5 or 6.

#1) Do I create separate BL modules for the group, link and customer tables?

#2) Do I then need to create separate modules to read this data and present it to the UI?

#3) Do I then need to pass this data on to the UI, regardless of which UI?

#4) When trying to run this report from a scheduler, do I need to create an additional helper, as you cannot run objects directly from startup?

#5) IOW, how would you approach this?

Let's remember - we're trying to do a simple thing here, not send a manned spacecraft to Mars.

Julian

All Replies

Posted by Tim Kuehn on 18-Mar-2010 07:06

jmls wrote:

15 years ago, we would create a window/dialog, put in a browser with a crud toolbar, and add a "run" button. The report itself would be an internal procedure, using the screen-values of the variables, or *gasp* parameters (v6, anyone ? :).

10 years ago, we would split the report from the UI, so that the report could be run without the UI

5 years ago, we would separate out the DB from the UI. and use temp-tables on the UI side, passing the TT back and forth

0 years ago I am really confused

We seem to be heading down the "let's create as many separate objects  as possible in order to confuse the hell out of the elder generation, and make it impossible for the new to get to grips"

From what I can gather, we need UI, BL, and data layers. Oh, and all sorts of objects to deal with these layers. So, 15 years ago we had a single program, now we need at least 5 or 6

Can I add an "amen"? I can understand the theoretical basis for object-izing every period, quote, and exclamation mark. I'm wondering what going to that level of detail buys us, particularly from a performance, developer productivity, and ongoing maintenance standpoint.

Posted by ChUIMonster on 18-Mar-2010 07:38

Since you have so much spare time on your hands, perhaps you could code up examples of your 15-, 10- and 5-years-ago approaches?

Then we could let the advocates of POOP try to convince us that their various approaches bring some sort of benefit to the table.

Posted by Admin on 18-Mar-2010 07:44

advocates of POOP

Progress object oriented programming?

Posted by jmls on 18-Mar-2010 07:44

Heh, who said I had too much time? Part of this was actually the lack of time - it would take me a couple of minutes to knock up a "15 years ago" screen, and a few hours to do the "0 years ago" plumbing, and I'm trying to get to grips with what it actually saves me.

Julian

Posted by ChUIMonster on 18-Mar-2010 07:56

Exactly.  Over in that other thread there is lots of noise about "show me the code" (but precious little actual code).  As you just said, an old-style example could be put together in short order.  So, since you have specified a nice set of objectives and you're interested in how to bridge the gap (and whether or not it is worth it), it only seems fair to establish the baseline by showing your code first.  Don't worry, we will be laughing with you, not at you.

Posted by ChUIMonster on 18-Mar-2010 08:00

GUID, SchmooID.  We didn't have no fancy data types and functions back in the day; we made do with integers and liked it!

Posted by rbf on 18-Mar-2010 08:26

15 years ago, we would create a window/dialog, put in a browser with a crud toolbar, and add a "run" button. The report itself would be an internal procedure, using the screen-values of the variables, or *gasp* parameters (v6, anyone ? :).

10 years ago, we would split the report from the UI, so that the report could be run without the UI

5 years ago, we would separate out the DB from the UI. and use temp-tables on the UI side, passing the TT back and forth


Is AppServer somewhere in your equation?

Posted by jmls on 18-Mar-2010 08:39

Yeah, the 10-year one - the report can be run on the AppServer, generating the report on the AppServer. The 5-year one - the report can be run on the AppServer, passing the report back to the client as a temp-table.

Posted by rbf on 18-Mar-2010 08:42

OK, IMHO it would indeed be a very interesting exercise to see at least the 5-year-old code.

Posted by jmls on 18-Mar-2010 09:26

Ok, added the 15 and 10 year things. This is where it's got interesting - I'm contemplating what I need to do in order to write the 5 year thing, and now I've got to really sit down and write a *lot* more code. And that's before I start on the 0 year stuff.

Attachments: 10YearReport.w.zip, 10YearReportProc.p.zip, 10YearUpdateReport.w.zip, 15YearReport.w.zip, 15YearUpdateReport.w.zip, report.df.zip

Bear in mind that the 10year stuff could be altered for the appserver easily enough.

This is run on the sports database, with the reports table added in

Posted by Thomas Mercer-Hursh on 18-Mar-2010 11:43

Julian, interesting that you should select an example which is basically a static image of the data, aka a report as you called it, although one of the views you specify is a browser.  I say "interesting" because there are some significant ways in which this type of example is unlikely to illustrate the issues which you are querying.

First, as an aside, I should note that I have long been a fan of using third party reporting tools.  So, if one goes that route, the ABL involved becomes pretty minimal.  One needs a client-side screen for collecting the parameters, a server-side service for submitting the report for execution, and possibly a client-side function to obtain the results and display them on the client.  That ends up being two client-side pieces and a server-side service which is in that infrastructure box along the side of the BL and DL layers.  The only other server-side piece, if anything, is a little snippet of code in the interface layer for making the connections to the service.  BTW, the compelling reason for my thinking in terms of third party tools like Actuate is that the control over the appearance of the output is extremely good and in the case of Actuate specifically, the added value of the repository is considerable.

But, let's suppose that one is going to do this entirely in the ABL.  Again, it is in some ways not very illustrative of OERA layering since, as you have described it, there isn't really any server-side BL, unless one is formatting the data for printing instead of merely passing it along to the client.  Really, all you need is a DL component to fetch the data and aggregate a result table .... just exactly what you would have done if you hadn't been thinking of any layering whatsoever.  So, you end up with the same parameter screen in the client which talks to an interface component on the server which creates a BL component which asks a DL component for the data.  Then the BL component either passes the data back to the interface or formats the report.  Back at the client, the presentation component becomes a little more complex because one has a client-side DL to receive the data from the server, possibly a BL component to handle formatting, scrolling, whatever, and a presentation component to actually draw things on the screen.  That last bit is MVC or MVP structured.

And, you can do it with or without OO.

Yes, that is more components than the legacy operation of having client code reading the DB, but once you make the move to AppServer the overall complexity of the code doesn't change much and it is all running in the same location.  One breaks the same code up into smaller pieces to simplify maintenance and promote possible reuse; e.g., the DL component might be capable of serving a number of reports, and the interface components also might be made general-purpose.
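A minimal sketch of the server-side chain described above - interface component, BL component, DL component - might look like this (Python rather than ABL; all class names and canned data are invented for illustration):

```python
# interface component -> BL component -> DL component, as one call chain.

class ReportDataAccess:                      # DL: fetches and aggregates
    def fetch(self, group_guid, start, end):
        # would run the DB query here; canned data for the sketch
        return [{"customer": "c1", "total": 150.0}]

class ReportBusiness:                        # BL: passes the request through,
    def __init__(self, dl):                  # formats for print if asked
        self.dl = dl

    def run(self, group_guid, start, end, printed=False):
        rows = self.dl.fetch(group_guid, start, end)
        return self._format(rows) if printed else rows

    def _format(self, rows):
        return "\n".join(f"{r['customer']}: {r['total']:.2f}" for r in rows)

class ReportInterface:                       # interface: one per client type
    def __init__(self):
        self.bl = ReportBusiness(ReportDataAccess())

    def request(self, params):
        return self.bl.run(params["group"], params["start"], params["end"],
                           params.get("printed", False))
```

The extra wrapping over a single procedure is visible here, but so is the payoff: a second client type needs only another `ReportInterface`, and a second report only another DL class.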

One might wonder why the question is on the GUI for .NET forum since it is basically an architecture question.

Posted by jmls on 18-Mar-2010 11:56

tamhas wrote:

Julian, interesting that you should select an example which is basically a static image of the data, aka a report as you called it, although one of the views you specify is a browser.  I say "interesting" because there are some significant ways in which this type of example is unlikely to illustrate the issues which you are querying.

First point was that this is in itself a simplistic view - in order to minimise the "guff" that could obscure things. So, the "browser" is to select the type of report (select report parameters), which is read from the db. What would I need here for OERA-style compliance?

First, as an aside, I should note that I have long been a fan of using third party reporting tools.

Again true. But, again, this is the most simplistic way of showing what I was trying to get at. If I am looping through order lines to count them, or performing some business logic on the order lines (for a stupid example, every 3rd order line is not counted), the principle is the same. Where does the BL sit? In its own layer?

But, let's suppose that one is going to do this entirely in the ABL.  Again, it is in some ways not very illustrative of OERA layering since, as you have described it, there isn't really any server-side BL, unless one is formatting the data for printing instead of merely passing it along to the client.  Really, all you need is a DL component to fetch the data and aggregate a result table .... just exactly what you would have done if you hadn't been thinking of any layering whatsoever.  So, you end up with the same parameter screen in the client which talks to an interface component on the server which creates a BL component which asks a DL component for the data.  Then the BL component either passes the data back to the interface or formats the report.  Back at the client, the presentation component becomes a little more complex because one has a client-side DL to receive the data from the server, possibly a BL component to handle formatting, scrolling, whatever, and a presentation component to actually draw things on the screen.  That last bit is MVC or MVP structured.

There are almost as many words in the above paragraph as there is code in the entire 10-year program example.

What I would like to see from the OERA / BL / DL / MVC / MVP advocates (assuming that there is some business logic in 10YearReportProc.p) is what you would need to do in order to create the same results using OERA and BL and DL etc etc.

One might wonder why the question is on the GUI for .NET forum since it is basically an architecture question.

It's in .NET because that's where I started thinking about this exercise - I want a .NET, RIA and ABL frontend. What I was debating was the ROI on doing the OERA thing, as my time is precious.

Posted by Mike Ormerod on 18-Mar-2010 12:16

OK, as the person responsible for the OERA, and therefore its current custodian, let me clarify at least one point.

At the end of the day a layered architecture is about separation of concerns and about choosing the right level of separation for your individual circumstance.  So, within that framework, is it legitimate to simply separate the UI from the business logic so you can at least support your .NET, RIA... clients? Absolutely.  Are there situations and circumstances where it makes no logical sense to separate the business logic down into distinct BL and DL layers? Of course, just as there are situations where it makes sense to do the separation.  There seems to be a real hang-up, or perception, that if I'm not doing all the levels of separation as shown in the OERA then I'm somehow committing a heinous crime!  That's not the case, and certainly never a position that we have advocated.  It's all about choice and consequence; the aim of the OERA is to present a set of choices, and yes, there are consequences to those choices, but after reading the OERA materials you're hopefully in a more informed position to make the best choice for you.

OK, so maybe that was also more words than it takes to code the solution, so I'll get off my soap box now.

Posted by jmls on 18-Mar-2010 12:24

Thanks Mike,

I was just about to comment that if I were to commit my company to using the Progress database for the next 10 years, is there any point in having a DL layer at all?

Posted by Thomas Mercer-Hursh on 18-Mar-2010 12:39

First point was that this is in itself a simplistic view

Understood, but one that is simple enough that some of the pieces come close to disappearing, especially if one were to not include an option for a printed report done on the server.

One of the problems with the standard OERA diagram is that little box in the upper left which is the client UI really needs to include the entire structure all over again, at least for a fat client, which I think is what you are talking about here.  I.e., the client itself will have a presentation, business, and data layer.  The data layer gets its data from the interface, not the DB, but once it has it, it acts as a local data source for the client.  If you think in terms of MVC or MVP patterns, that really covers both the presentation and BL aspects of what is happening here, so you have the MVP components and a data component which is responsible for transmitting a data request to the server and receiving back the results and making them available to the Model.
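The client-side structure described here can be sketched the same way (Python rather than ABL; every name is invented, and `FakeServer` stands in for the AppServer interface component):

```python
# A client-side DL gets data from the server interface (not the DB) and feeds
# the Model; the Presenter uses the Model to drive the View (MVP style).

class ClientDataSource:
    def __init__(self, server):              # 'server' stands in for the
        self.server = server                 # AppServer interface component

    def fetch(self, params):
        return self.server.request(params)

class Model:
    def __init__(self, source):
        self.source, self.rows = source, []

    def load(self, params):
        self.rows = self.source.fetch(params)

class Presenter:
    def __init__(self, model, view):
        self.model, self.view = model, view

    def run_report(self, params):
        self.model.load(params)
        self.view.render(self.model.rows)

class ConsoleView:
    def __init__(self):
        self.rendered = None

    def render(self, rows):
        self.rendered = rows

class FakeServer:                            # canned stand-in for the AppServer
    def request(self, params):
        return [("c1", 150.0)]

view = ConsoleView()
presenter = Presenter(Model(ClientDataSource(FakeServer())), view)
presenter.run_report({"group": "g1"})
# view.rendered now holds the rows fetched through the client-side DL
```

Swapping `FakeServer` for the real interface object, or `ConsoleView` for a GUI view, touches exactly one class each, which is the point of the pattern.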

Where does the BL sit? In its own layer?

One of the problems with the standard OERA diagram is that it tends to get people thinking too specifically in terms of layers as if they were physically separate things.  Except for the client/server split and/or anything going out across the bus, this stuff is, of course, all happening in one [unfortunately] single-threaded AVM session.  So, a layer isn't a physical thing, but a conceptual or logical thing.  The main point here is that objects within a layer are allowed to be more tightly coupled than objects between layers.  So, think of DL and BL connections as something in which you want to have minimum dependencies and shared knowledge.

In the server-side layers in your example, without a printed report option, the function of the BL is merely to pass through a data request from the interface to the DL.  One makes the interface object separate from the BL object here because multiple sources could use the same BL object, e.g., a request across the bus or a different interface object which dealt with a different kind of client.  That BL object does nothing more than accept the parameters and make the request of the DL object.

My current thinking is that any kind of basic data linking, aggregation, summarization, etc. which is a fixed part of the data request happens in the DL, where it is close to the DB.  In your description, that doesn't leave the BL anything to do except to notice, when it gets the data back, that the parameters specify a printed report, and to route the data to the printing object and then route a status on the printing back to the interface.  There may be cases where some server-side logic beyond that would go into the server-side BL object, but note that things like presenting the data in multiple sort orders and such are actually a client function here.

There are almost as many words in the above paragraph as there is code in the entire 10-year program example.

Consider the source!  Actually, there isn't going to be much difference overall in the amount of code except for some additional wrappers in order to package it into separate objects and some parameter passing that isn't required if the data is local.  The benefit is simplicity and re-use.

I want a .net, RIA and ABL frontend. What I was debating was the ROI on  doing the OERA thing, as my time is precious

The potential for different front ends illustrates the advantage.  Your server-side BL requestor, BL printed report, and DL data accumulator objects will be exactly the same for all clients.  If the .NET is ABL GUI for .NET, then that and the pure ABL client will use the same interface object, otherwise there are three different interface objects.  The RIA one might have to have a nominal bit more intelligence ... certainly would if it was dumb WUI instead of RIA, but otherwise all three interface objects have the same interface to the BL component and a unique interface specific to the client type.  The client pieces are all different, of course, although ABL GUI for .NET and legacy ABL (LABL?) could share a client side DL component.

Understand that no one claims that doing OERA or OOP is less work for the first pass, particularly on something simple that is not likely to require much debugging.  The benefit comes in reuse and in the ease of maintenance that is associated with smaller, more cohesive, single-purpose objects.

Posted by Mike Ormerod on 18-Mar-2010 12:39

Not wanting to appear wishy-washy: it depends.

If anything I'd ask a slightly different question: is there a business benefit in separating my physical storage layout from my business logic?  For example, is it easier to write my business logic against a logical view of my data vs. the way it's physically stored in the db?  Is it simpler for my business logic to act on a single de-normalized view of customer called contact, as opposed to coding to a contact table, an address table, a state table, etc.?  Am I likely to want to make changes to my physical db structure because it's an old db that's been around for years and we've added extension fields of chr1, chr2, chr99 that don't actually look like they mean anything when you look at the db, but actually store real data, so that at some point you'd like to change their names to reflect the real data?  Well, if you code to a logical view and maintain the physical knowledge in a DL, you change the DL and the code in the layers above stays constant!

As you said, your time is precious, and time is money.  So minimizing the number of places you have to maintain and make such changes has a benefit.
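Mike's chr1/chr2 point can be reduced to a tiny sketch: the DL is the only place that knows the physical field names, so renaming them in the DB later touches one mapping and nothing above it (Python rather than ABL; field and class names are invented):

```python
# An ugly physical record, the way it sits in the legacy db.
PHYSICAL_ROW = {"chr1": "ACME Corp", "chr2": "+44 1234 567890"}

class ContactDataAccess:
    # The only place that knows chr1 is really 'name' and chr2 is 'phone'.
    MAPPING = {"name": "chr1", "phone": "chr2"}

    def read(self, row):
        return {logical: row[physical]
                for logical, physical in self.MAPPING.items()}

dl = ContactDataAccess()
contact = dl.read(PHYSICAL_ROW)
# BL and UI only ever see {'name': ..., 'phone': ...}; renaming chr1 in the
# DB later means changing MAPPING, and the code in the layers above stays put.
```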

Posted by Thomas Mercer-Hursh on 18-Mar-2010 12:45

I can't agree with advocating lazy layering.  You might get the current task done slightly faster that way, but you won't get re-use and you will have a more complex component to maintain.  If you get oriented toward the layering mentality, the extra work required is trivial.  The hard part is figuring out the patterns up front.

I know it is politically correct to let people do what they want, especially since they are likely to anyway and one doesn't want to alienate them, but I also think it is important to stand up and say "this is better" and to figure out ways to help them get there, rather than just muddling along the way they have been.

Posted by Thomas Mercer-Hursh on 18-Mar-2010 12:46

Perhaps ABLy OOP instead!

Posted by Mike Ormerod on 18-Mar-2010 12:55

It's nothing to do with being politically correct, it's to do with making the appropriate architectural choices based upon your business strategy.  If someone came to me and said look, our app is a character app, it's used by guys working in a warehouse on VT-terminals, the last thing they want to do is pick up a mouse when it's throwing it down with rain outside as they are trying to book goods into the facility, so we're going to stay with a character hosted solution that doesn't follow the OERA, who am I to say well actually we know better!  It's about choice, and making an informed choice based upon your business & technical strategies.

Posted by Thomas Mercer-Hursh on 18-Mar-2010 13:02

if I were to commit my company to using the Progress database for the next 10 years, is there any point in having a DL layer at all?

Yes, because swapping databases is only one, and one of the least likely, of the reasons for having a DL.

The primary reason is to create a separation between the data as used in the application and the data as stored.  Not only is this providing the OR mapping, but it means that you can change the definitions on either side without altering the other side, as long as the same data is there.  Consider a couple of examples.

Suppose you have a legacy ABL application that has been around for many years and someone, not you of course, has evolved parts of that application by adding field after field after field to some table in order to enhance the functionality.  In the process of implementing OO components on this table you realize that these additional fields actually come in mutually exclusive clusters, i.e., they really represent subtypes.  Good RDBMS design dictates that subtype information should be in a separate table because, depending on type, some of those fields are meaningless, and that is a Bad Thing.  But, you have a ton of code accessing these tables, so you can't just do The Right Thing and restructure them.  But, you can take your new OO code and create a generalization with subtypes so that the data is properly represented in the objects, if not in the DB.  The DL provides this mapping to/from subtypes.  Then, some glorious future day you finally have converted all of the code which accesses this data to OO, all using the same DL components, of course, so that you only had to handle the mapping problem once and you only had to create the BEs once.  On that day, you then fix the DB and make it Right again, make small changes in the DL component, and you are again hozro.(1)

Or, suppose that you have all of your data in one database at the start and you are working on some kind of order processing function and need access to item information, and so write yourself an item DL component to get it.  Down the line, you move everything to an ESB and decide that the inventory data really should be on a machine in the warehouse(s) instead of on the machine where order processing is done.  No problem: you convert the OP item component to be a facade object that interacts with the real data source on the warehouse machine and touch nothing else in OP.

(1)HOZRO: the Navajo word meaning to be in harmony with one’s environment, at  peace with one’s circumstances, free from anger or anxieties, generally

Posted by jmls on 18-Mar-2010 13:15

Yippee! In that case, I've been writing BL and DL stuff for years.  As an example, we recently moved from storing telephone numbers in an array [5] (home, work, fax, mobile, other) to a table-based format where we can now store 0..n numbers.  We didn't need to change much to make the app work as before.  So, if that's OERA, I'm now going to claim patent rights, citing prior art ...
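Julian's phone-number migration is a concrete case of the same idea. A sketch of how a DL-style helper could keep presenting the old five-slot view over the new table (Python rather than ABL; all names invented):

```python
# Storage moved from a fixed array field to a child table, but the DL keeps
# offering the old five-slot view so existing callers don't change.

SLOTS = ["home", "work", "fax", "mobile", "other"]

# New storage: 0..n rows per customer.
phone_rows = [
    {"cust": "c1", "kind": "home", "number": "555-0001"},
    {"cust": "c1", "kind": "mobile", "number": "555-0002"},
]

def phones_for(cust):
    """Old-style 5-element view built from the new table, so callers that
    still expect phone[1..5] keep working."""
    by_kind = {r["kind"]: r["number"] for r in phone_rows if r["cust"] == cust}
    return [by_kind.get(kind, "") for kind in SLOTS]
```

New code can read `phone_rows` directly and store as many numbers as it likes; old code calls `phones_for` and never notices the schema changed.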

Posted by Thomas Mercer-Hursh on 18-Mar-2010 13:20

You are mixing domains.  The choice of GUI over ChUI is not a question of a better programming paradigm; it is a choice of UI device.  Yes, there are places where ChUI is not only just fine, it can be more productive than the average GUI (a reflection that the average GUI is often not that well designed for usability (Hello Arthur)).  Yeah, those guys in the warehouse don't want to fiddle with a mouse ... but, depending on the kinds of tasks, it might be they would be well served by ruggedized touch screens or hand held devices or any one of a number of other technologies, including perhaps voice response.

To be sure, I advocate being pragmatic.  If someone asks me to write a new CRUD screen for a new table, I'm not going to tell them that I have to transform the entire application to SOA on ESB with OERA layers before I can do that one job.  But, within the technology available, that doesn't mean that I can't create nice components to solve the new problem so that there is a small, incremental improvement in one corner of the application.

The world is full of good, better, best choices that we make all the time.  Sometimes, good is good enough because of cost or time, but I still think it is useful to clearly recognize good, better, and best so that one can make an informed choice because, sometimes, best is within reach and why not do it right if you can.

I'm not advocating scolding people who don't immediately choose best.  I am well aware of the issues of limited resources and spending the minimum necessary to get the immediate problem addressed.  But, I see no point in pretending that the minimum necessary is best.  That is one of the reasons there are so many frightful legacy systems out there ... every year minimum investment was made in urgent enhancements and fixes and no effort was put into modernization.  Well, now the application is terribly dated, hard to maintain, inflexible in response to changing business conditions, etc.  The price for that meagerness is potentially high and I think we have a duty to try to educate people about that.

PSC didn't invent OERA, they just put their own name on something a lot of other people have been doing for a long time.  It is an idea that needs fleshing out ... maybe we are ready for OERA 2.0 .... but either it is something we believe is better and we should be trying to guide people in that direction or we should be putting up other models for people to choose from because we think it doesn't matter.

Posted by Thomas Mercer-Hursh on 18-Mar-2010 13:28

It's good design, whether one is using OO or not ... and it has been around for a good 20 years, if not 30 in some circles, so good luck with that patent.

The one difference you might see when you get farther and farther into doing it in an OO way is that you will find ways to create an even greater separation of concern.  Maybe not, since lots of people don't get that rigorous, but OO principles tend to reinforce thinking in those directions.

Posted by Admin on 18-Mar-2010 19:24

Excellent example, Mike O.!

Posted by Admin on 18-Mar-2010 19:34

Consider the source! Actually, there isn't going to be much difference overall in the amount of code except for some additional wrappers in order to package it into separate objects and some parameter passing that isn't required if the data is local.

The proof is still pending...

Posted by Thomas Mercer-Hursh on 19-Mar-2010 11:19

Not wanting to appear wishy-washy, it depends

The problem with going down this road is that it is all too easy for someone to see no apparent need *today* and thus to decide that the effort is not worthwhile.  Then, 1 year, 2 years, 5 years down the road when the need arises, it hasn't been done.  Or, next year, when you could reuse the data component, it isn't there available for reuse and needs to get written all over again.

Frankly, I think the incremental effort of making the separation is inconsequential.  It is just getting used to a particular style of programming.  Yes, it means two units of code instead of one, but they are smaller, more single purpose units and thus easier to verify, debug, and maintain.

Posted by Thomas Mercer-Hursh on 19-Mar-2010 11:22

What proof?  We aren't talking about M-S-E versus PABLO here, we are just talking about taking cohesive units of code and putting them in their own container.  This has been best practice since god was a young woman.

Posted by Tim Kuehn on 19-Mar-2010 11:36

I tried doing the "separation" thing a few times, and found it to be great when dealing with small amounts of data. It was an absolute dog performance-wise when it came to updating large amounts of data, as each record had to be hit twice - once to read it into the TT and update it, and once to write it back out again.
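The double hit Tim describes can be counted directly. A toy sketch (Python, with an `ops` counter standing in for DB record operations) shows the temp-table round trip doing twice the record traffic of a direct update:

```python
# With a temp-table round trip, each record is read from the DB into the TT
# and later written back; a direct update touches each record once.

def tt_round_trip(records):
    ops = 0
    tt = []
    for rec in records:          # read DB -> TT
        tt.append(dict(rec))
        ops += 1
    for rec in tt:               # update in the TT (no DB traffic)
        rec["qty"] += 1
    for rec in tt:               # write TT -> DB
        ops += 1
    return ops

def direct_update(records):
    ops = 0
    for rec in records:          # read and update in one pass
        rec["qty"] += 1
        ops += 1
    return ops

recs = [{"qty": 0} for _ in range(1000)]
# tt_round_trip performs twice the DB record operations of direct_update.
```

This only models record traffic, not index or buffer behaviour, but it captures why the separated version felt like "a lot more busyness" on large updates.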

Posted by Wouter Dupré on 19-Mar-2010 11:39

Hi, I'm out of the office for business. During my absence I will have no or limited access to my e-mail.

For immediate assistance please call our office at +32 (0) 15 30 77 00.

Best regards,

Wouter.

--

Wouter Dupré

Senior Solution Consultant

Progress Software NV

Stocletlaan 202 B| B-2570 Duffel | Belgium Direct Line +32 (0) 15 30 77 00 Fax +32 (0) 15 32 12 60 Mobile +32 (0) 478 50 00 49 wdupre@progress.com

Posted by Thomas Mercer-Hursh on 19-Mar-2010 12:55

Tim, are you talking about a mass update rather than the kind of mass access and summarization/sorting/calculation thing one associates with a report?

If so, I don't suppose that anyone doubts that the peak performance is going to come from going into the editor and doing a FOR EACH directly on the table.  The questions one has to ask oneself are:

1) How often does this need to happen;

2) How performant does it need to be, i.e., are we talking about batch or is there some need for real time responsiveness over a large batch of records; and

3) How willing are you to do this outside of the context of the normal business logic.

One of the use cases I think of is receiving a shipment for an item which has a large number of back orders.  There is a need to allocate that new stock against open orders, typically according to some rules like customer priority and original order date.  Doing this in the context of the full business logic is definitely going to be slower, even dramatically slower, than a tight, coded-for-the-purpose loop, but at the same time it is the kind of process where I am going to fire off a report-like function to do the work and want a report out at the end of what it did.  There is no real need for it to happen in real time while I am sitting at a screen.  So, in a context like that, the performance hit matters a lot less than keeping a clean structure so that I know I am always applying the same logic in all situations.  If there is a real-time issue, i.e., I want to grab one off the loading dock to complete an order which is currently being picked, then that is better handled through a one-by-one process which will be perfectly fast enough.

Posted by Tim Kuehn on 19-Mar-2010 13:38

tamhas wrote:

Tim, are you talking about a mass update rather than the kind of mass access and summarization/sorting/calculation thing one associates with a report?

The situation in question had the system updating a "relatively" large number of records. I wondered why it took as long as it did, and examining the amount of activity going on showed a lot more "busyness" than I expected. This "busyness" was directly attributable to reading a record into a TT, updating it, and then writing it back to the DB. Updating the DB directly solved that problem.
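[Editor's note] The overhead being described — copy each record into a temp-table, update the copy, write it back — versus a direct in-place update can be mimicked in a small language-neutral sketch. This is a hypothetical Python analogy (a dict standing in for the DB), meant only to make the extra copy steps visible, not to reproduce ABL temp-table semantics:

```python
# a toy "database": record id -> record fields
db = {1: {"qty": 5}, 2: {"qty": 7}}

def update_via_tt(db: dict, key: int, delta: int) -> None:
    # temp-table style: read a copy out, modify the copy, write it back
    tt_row = dict(db[key])      # read the record into a "temp-table" buffer
    tt_row["qty"] += delta      # update the buffer copy
    db[key] = tt_row            # write the whole copy back to the "DB"

def update_direct(db: dict, key: int, delta: int) -> None:
    # direct style: one in-place update, no intermediate copy
    db[key]["qty"] += delta

update_via_tt(db, 1, 3)
update_direct(db, 2, -2)
```

Both paths produce the same result; the first simply does more work per record, which is the "busyness" that showed up when the record count got large.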

Your comment about batch jobs, etc. is well taken - however, there are times when even that's not appropriate. I've got some BL in one system which gets fed an "event", which can then result in (repeated) adjustments to an indeterminate number of records. The underlying BL is so convoluted there's no way I'd want to try to separate it from the database.

Now - if one considered this BL to be "part of" the DA, then the separation is still being done.

Question - is there a place where the different layers are clearly defined so I know that what I'm writing about and what others are hearing is the same thing?

Posted by Thomas Mercer-Hursh on 19-Mar-2010 14:25

Tim, as noted, you have to figure that a direct update is going to be faster.  That isn't a surprise, and there isn't really much to be done about it.  It is a question of what your requirements are and what the cost is of meeting those requirements.  If you create special code to update against the DB directly, separate from the BL associated with that data elsewhere, you get the performance at the expense of maintainability, since you now have BL in at least two separate places.  I know that most of us with legacy systems are thinking ... well, hey, I have BL scattered all over the place, so what's the big deal?  In that context, what's one more piece of separated BL?  Well, truthfully, not much, but if one is trying to move to a more maintainable system, then maybe it isn't the best idea to keep doing the things one did in the past to create all that spaghetti.  Can you imagine how nice it would be to go to one file and have all of the business logic that pertained to one entity in one place?  Think how much easier that would be to understand, especially if you were new to the system.

So, it is a choice one has to make.  There are a lot of choices considered best practice in OO and layered architectures which do negatively impact performance.  They have to.  But the benefits of encapsulation and separation of concerns are considered valuable enough over the life of the system to pay that performance penalty.  Do people ever "cheat" when they run into a particular requirement?  Of course they do, but the question is how easily you let yourself get talked into cheating.  If it is too easily, you might as well not bother trying.

As to layer definitions, the only ones available are necessarily vague.  This is perhaps particularly true in ABL, because one is likely to have three layers within a single AVM ... and it isn't even multithreaded!  Moreover, a diagram like the usual OERA diagrams isn't an endpoint, but rather a starting point to get people thinking.  In real N-tier thinking, there are layers within layers, and one might have six layers in one place where there are three in another.  It is all a question of separation of concerns and defining a clear, cohesive responsibility for each component.

Posted by Tim Kuehn on 19-Mar-2010 15:22

tamhas wrote:

Can you imagine how nice it would be to go to one file and have all of the business logic that pertained to one entity in one place?  Think how much easier that would be to understand, especially if you were new to the system.

Ummm.... I don't have to imagine - I've already accomplished this with my "managed procedures" system. No need to splatter the same BL all over the place - each BL "component" (or what-have-you) is in one spot. Just link the appropriate SP(s) to the current code block, call the appropriate API(s) in those SPs, and that's it!

Posted by Thomas Mercer-Hursh on 19-Mar-2010 16:47

Until you write the direct to DB update....

Posted by jquerijero on 24-Mar-2010 11:13

Your frustrations are valid. Your post shows the thought and implementation process that goes into tackling the problem, which is sometimes what makes OO's separation of domains hard to deal with. One glaring thing I can recognize in your post is the omission of one of the most important parts of group development: discussion of an architectural pattern (Model-View-Controller (MVC), Model-Set-Entity (MSE)). This is the bridge between what you refer to as the "old", "new", and "future" developers. The pattern controls what kinds of objects are created, what they do, and where they fit in the process.

Your problem is, by the way, a good example of where MVC shines.

MODEL: Report Class

VIEW: CRUD Window

CONTROLLER: Scheduler Class

The MODEL can hold both the business logic and the data layer for a small project, or you can use MSE to further separate the data layer from the business logic.
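[Editor's note] The MVC mapping above can be sketched as three small classes. The names below (`ReportModel`, `ReportView`, `ReportController`) are hypothetical stand-ins for the Report Class / CRUD Window / Scheduler Class roles, shown in Python rather than ABL purely for illustration:

```python
class ReportModel:
    """MODEL: owns the business logic (and, in a small project, data access)."""
    def run(self, customer_group: str, date_range: str) -> dict:
        # stand-in for the real aggregation of sales by customer and group
        return {"group": customer_group, "range": date_range, "total": 0}

class ReportView:
    """VIEW: the CRUD window; only renders what the model produced."""
    def render(self, result: dict) -> str:
        return f"{result['group']} / {result['range']}: {result['total']}"

class ReportController:
    """CONTROLLER: the scheduler; wires parameters to the model, model to view."""
    def __init__(self, model: ReportModel, view: ReportView):
        self.model, self.view = model, view

    def run_scheduled(self, group: str, date_range: str) -> str:
        return self.view.render(self.model.run(group, date_range))

out = ReportController(ReportModel(), ReportView()).run_scheduled(
    "Retail", "Last Quarter")
```

Because the controller is the only piece that knows about scheduling, the same model can serve both the scheduled run and the ad-hoc run from the window — which is exactly the separation the original report requirement asks for.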

This thread is closed