Trying to find an "in" to OO

Posted by ojfoggin on 26-Feb-2010 12:49

I come from a background of OO programming.  I started with Java and more recently I've done some iPhone stuff with Objective-C.

However, I'm the only one in our dev team who has used OO programming before.  One or two other people have seen it and some have a rough idea of how it works.

I've been trying to think of some way to apply OO principles to an area of our software that I can explain to people and that they'll like.  Whenever I have tried in the past the examples haven't been very good and people have said "Yes, but we can already do that" etc...

I came up with an idea today when trying to sift through a single report that doesn't return the right data.  It spanned 7 .p files and well over 20 include files.  It got to the point where I could no longer follow the logic of the code and decided that on Monday I'm going to rewrite it in one file and with a fraction of the code.  Upon starting to try and fix it I was even warned by my colleagues not to bother too much as this particular report is "beyond repair".

This is the second time this has happened to me this week and I've just had to "give up" with trying to fix something and start again from scratch.

Anyway, my idea was to find an OOP way of creating reports.  The problem I have is I don't know too much about what is possible with OpenEdge.

The rough idea of my class model is this...

Abstract class 1. Communication - contains methods for emailing, sending to printers, saving to files, etc...

Class 1. XMLDocument - contains a string (or something more suitable) that contains XML code.  Also contains methods for adding elements and formatting etc... that redirect to our SAX writer.

Class 2. Report - inherits from the Communication class and contains its own XMLDocument object.  Maybe some methods to do some extra report type stuff.

Class 3. ItemReport - is a child of Report.  Contains a method for getting the item data and adding it as elements to its XMLDocument object.

Class 4. CustomerReport - again is a child of Report.  Contains method for getting relevant customer data and adding it to its XMLDocument object.

By using this we can quite easily change our XML writer or email settings or temp folder etc... just by changing the parent classes.  We could even create a method for Report that logs the report creation to a file each time it runs.  The biggest selling point would be not having to recompile everything once this is done.
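Roughly, and I'm only guessing at the OOABL syntax here so please treat this as pseudo-code, I imagine something like the following (all names made up, each class in its own .cls file):

CLASS Communication ABSTRACT:
    /* shared output plumbing, written once and reused by every report */
    METHOD PUBLIC VOID EmailTo (INPUT pcAddress AS CHARACTER):
        /* hand off to our existing e-mail routine */
    END METHOD.

    METHOD PUBLIC VOID SaveToFile (INPUT pcFileName AS CHARACTER):
        /* hand off to our existing file-writing routine */
    END METHOD.
END CLASS.

CLASS Report INHERITS Communication ABSTRACT:
    DEFINE PROTECTED VARIABLE moDocument AS CLASS XMLDocument NO-UNDO.

    CONSTRUCTOR PUBLIC Report ():
        moDocument = NEW XMLDocument().
    END CONSTRUCTOR.

    /* each concrete report knows how to fill in its own data */
    METHOD PUBLIC ABSTRACT VOID BuildDocument ().
END CLASS.

CLASS ItemReport INHERITS Report:
    METHOD PUBLIC OVERRIDE VOID BuildDocument ():
        /* get the item data and add it as elements to moDocument */
    END METHOD.
END CLASS.

CustomerReport would be the same shape as ItemReport, just pulling customer data instead.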

Is this kind of model possible with OpenEdge?

Am I thinking about it in completely the wrong way?

Any tips are welcome!

BTW, I've got the gsoop.pdf document and I'm going to have a skim of it this weekend.

Thanks!

All Replies

Posted by Thomas Mercer-Hursh on 26-Feb-2010 13:35

Am I thinking about it in completely the wrong way?

Possibly, but we don't necessarily have enough information.

I confess that I haven't thought a lot about OO and reporting since I mostly moved to using third party reporting tools a number of years ago, but ...

I am curious about the idea of an abstract class for communication that is implemented by the report.  Isn't most of the communication standard, packaged code that gets written once and is then re-used?  If not, why not?

I'm not sure what the role of the XML is here.  Do you select the data for the report and then package it as XML and then send it off to various consumers?  If so, I would take a step back and note that the XML is a form of message data packet, so it should be something produced from the internal representation, not *be* the internal representation.

I'm not seeing any separation here of data, business logic, and presentation.

The data layer should be reusing or creating reusable components for access to the data.  The business logic layer should be doing the work of creating the report and you should be thinking about possible re-usable components there.  The presentation layer should be taking the results of that work and packaging it for communication to the user.

Posted by ojfoggin on 26-Feb-2010 13:55

The problem I am facing is that I am coming from "proper" OO languages and don't know how to relate my knowledge to what OE can do.

My model was more of an example than a real data model as I just wanted to test the water as to what was possible.  I have never done any OO beyond small to medium sized programs and so I'm looking for somewhere to start where I can be familiar with what I'm working with.

The program we have at the moment uses somewhere towards 1 million lines of code and things like reusability and sensibility are a distant dream at the moment.

It is improving but it is still not ideal.

How would OO typically be used in OE?

Posted by ojfoggin on 26-Feb-2010 14:26

Let me try to clarify what I am trying to say.

At the moment the software works through a UI on the client PC.

The user presses a button or chooses a program or enters some info.

This triggers an internal procedure in the program.

Upon doing this the program will run a persistent procedure (or non-persistent in some newer cases) and get the info (or validate info, save info, etc...).

Then the program will end the persistent procedure if needed.

The UI then uses the info and displays it on the screen somehow.

That is the general model by which 90% of the work in the program is done.

The rest of the work is done producing reports.  The system uses the AppServer to create a queue request with info about what the request is.  This is then picked up and run by a report queue.  The report queue then runs whichever procedure is requested with the relevant parameters.  Upon doing this you tend to get an unintelligible snake moving in and out of innumerable files until the temp-tables have been produced and the files are written (or whatever) and the resulting file is either sent to a printer or emailed to a user as a CSV or XML or PDF etc...

I am trying to find somewhere in all of this that I can try to work out a way of getting some OOP in there.

This is where I am struggling.

I'm not intending on designing anything properly or starting any coding.  In fact I have no doubt that, when the time comes, we will be getting someone in to help with designing something.

I'd just like to get my head around how OOP can be used.

Posted by Thomas Mercer-Hursh on 26-Feb-2010 14:33

How would OO typically be used in OE?

I am inclined to answer that it would be used the same way in ABL as it is in any other language.  I have written a number of times that OO best practice in ABL might be different in some ways than OO best practice in a 3GL since ABL is, after all, a 4GL, but the more I consider the issue the more inclined I am to think that there isn't as much difference as some people would like to think.  To be sure, there are some implementation differences.  E.g., right now, the only apparent mechanism we have for implementing a collection or map class is a temp-table, which is a rather heavy-weight artifact compared to a Java collection class.  But, wrapped up in reusable generic code ( http://www.cintegrity.com/content/Collection-and-Map-Classes-OOABL ) it is going to get used pretty much the same way as a collection class is in Java and for the same reasons and in the same contexts.
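Just to make the temp-table idea concrete, the heart of such a map class boils down to something like this (a bare-bones sketch, not the code from that link, and the names are made up):

CLASS ObjectMap:

    /* the map's storage is simply a temp-table of key/object pairs */
    DEFINE PRIVATE TEMP-TABLE ttEntry NO-UNDO
        FIELD cKey   AS CHARACTER
        FIELD oValue AS Progress.Lang.Object
        INDEX idxKey IS PRIMARY UNIQUE cKey.

    METHOD PUBLIC VOID Put (INPUT pcKey AS CHARACTER, INPUT poValue AS Progress.Lang.Object):
        FIND ttEntry WHERE ttEntry.cKey = pcKey NO-ERROR.
        IF NOT AVAILABLE ttEntry THEN
        DO:
            CREATE ttEntry.
            ttEntry.cKey = pcKey.
        END.
        ttEntry.oValue = poValue.
    END METHOD.

    METHOD PUBLIC Progress.Lang.Object Get (INPUT pcKey AS CHARACTER):
        FIND ttEntry WHERE ttEntry.cKey = pcKey NO-ERROR.
        IF AVAILABLE ttEntry THEN RETURN ttEntry.oValue.
        RETURN ?.
    END METHOD.

END CLASS.

From the caller's side, oMap:Put(...) and oMap:Get(...) read just like a Java map would; the temp-table underneath is purely an implementation detail.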

Now, there are some people who think that ABL's greatest virtue is the ease and power of handling relational data.  One can see why they would think that.  So, those people tend to think that one of the ways OOABL should be different is that we should preserve the relational structure of data in the business logic, typically in the form of a PDS.  See http://www.cintegrity.com/content/Patterns-Managing-Relational-Data-OOABL for some discussion of this and keep an eye out for my forthcoming whitepapers which will cover this in greater detail.  If you agree with this position, you will want to consider one of the models presented for the purpose, especially the Model-Set-Entity pattern, which is discussed on this forum, since it manages to present a very OO-like appearance of the business entities and entity sets, aka collections, even though the underlying implementation uses a PDS.

But, if you think that is not very OO-like ... and it isn't in a number of ways ... then you will want to think in terms of PABLOs, Plain ABL Objects and those should be pretty close to your OO experience.

I should probably note that there is a lot of nominal OO in the world which isn't necessarily very good OO, not just from programmers fresh out of school with a couple of courses under their belt, but from big Authorities.  I was re-reading parts of Martin Fowler's book on Refactoring lately and I was struck not only by the rather cavalier attitude about slapping something together and then fixing it as time allowed, but also by the fact that his refactorings are never tied back to the underlying OO modeling of the problem space.  I.e., he talks about moving around data or methods in order to limit coupling or to simplify something complex ... all good stuff ... but never talks about noticing that the cohesiveness of an object is something that should be based on the cohesiveness of entities in the problem space.  I.e., if the problem space had been decomposed properly in the first place, the data and methods would have been in the right place from the start and the refactoring is just fixing the fact that this was not done.

I do hope that you are saying that there are a million lines of code in the application and not in the one report!

If you can get some interest from the company in the potential of OO as a way to improve your application going forward, I would suggest bringing in a consultant (hint, hint) to do a bit of orienting and help get you off the ground on the right foot.

Posted by ojfoggin on 26-Feb-2010 14:43

Thanks!

I think I'm definitely going to have to do some more reading.

I'll def be following your link and having a good read.

Posted by Thomas Mercer-Hursh on 26-Feb-2010 15:44

The answer is, the same, only different ...

You can approach this from either end.  From the bottom, take the existing program(s) and start identifying self-contained, cohesive units of work.  Those are candidates for being turned into classes.  Move the logic for one into class form, change the calls, and you are back to a functional program again.  Do it again with another piece.  Each step should make the result easier to understand because each unit of functionality will be wrapped in a box where you can forget about how it does it and just pay attention to what it does.
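To make that concrete with a trivial, entirely made-up example, a unit of work that today is called as

RUN calc-discount.p (INPUT iCustNum, OUTPUT dDiscount).

becomes, once that logic is wrapped in a class,

DEFINE VARIABLE oCalc AS CLASS DiscountCalculator NO-UNDO.

oCalc = NEW DiscountCalculator().
dDiscount = oCalc:CalculateFor(iCustNum).
DELETE OBJECT oCalc.

The caller no longer knows or cares how the discount gets worked out; that knowledge now lives in one box.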

Or, start at the top.  Think how you would do it in Java or whatever (omitting the libraries you would have there, but will have to create here).  If you can write a good OO Java application, then the OOABL one won't be substantially different.

Think in terms of the OERA layer separation.

Think about what is good to do on the server versus what is good to do on the client.

Slice up the functionality into coherent, cohesive units with minimal coupling.  Turn each unit into a class and hook them together.  Presto!

Really, all of this is a good way to be thinking about .p structure too.  If you look at the patterns and principles white papers on my website, you should be able to think how to use most of the same ideas applied to .p code.

Posted by cwills on 01-Mar-2010 07:51

Attached is a class wrapper for the SAX writer. You might find it helpful with implementing 'class 1'.

A small warning though, it's not completely tested.
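For anyone who can't get at the attachment, the general shape of it is roughly this (a cut-down sketch, not the attached code itself):

CLASS XMLDocument:

    DEFINE PRIVATE VARIABLE hWriter AS HANDLE NO-UNDO.

    CONSTRUCTOR PUBLIC XMLDocument (INPUT pcFileName AS CHARACTER):
        CREATE SAX-WRITER hWriter.
        hWriter:SET-OUTPUT-DESTINATION("FILE", pcFileName).
        hWriter:START-DOCUMENT().
    END CONSTRUCTOR.

    /* writes <name>value</name> */
    METHOD PUBLIC VOID AddElement (INPUT pcName AS CHARACTER, INPUT pcValue AS CHARACTER):
        hWriter:START-ELEMENT(pcName).
        hWriter:WRITE-CHARACTERS(pcValue).
        hWriter:END-ELEMENT(pcName).
    END METHOD.

    METHOD PUBLIC VOID EndDocument ():
        hWriter:END-DOCUMENT().
    END METHOD.

    DESTRUCTOR PUBLIC XMLDocument ():
        IF VALID-HANDLE(hWriter) THEN DELETE OBJECT hWriter.
    END DESTRUCTOR.

END CLASS.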

Cheers,

Posted by Peter Judge on 01-Mar-2010 08:24

The problem I am facing is that I am coming from "proper" OO languages and don't know how to relate my knowledge to what OE can do.

I would suggest using OOABL in the same way you've used other OO languages. There will certainly be stuff that's different (temp-tables etc), and some stuff that's missing or not working as you'd expect (and I'd ask you to report this to Tech Support), but it's still OO, and the fundamentals still apply. What's important is the application design, and less so the language in which it's implemented (if you look at Fowler's patterns online or even at coding concepts on Wikipedia, notice that they're all shown in more than one language).

-- peter

Posted by Phillip Magnay on 02-Mar-2010 07:13

tamhas wrote:

Now, there are some people who think that ABL's greatest virtue is the ease and power of handling relational data.  One can see why they would think that.  So, those people tend to think that one of the ways OOABL should be different is that we should preserve the relational structure of data in the business logic, typically in the form of a PDS.  See http://www.cintegrity.com/content/Patterns-Managing-Relational-Data-OOABL for some discussion of this and keep an eye out for my forthcoming whitepapers which will cover this in greater detail.  If you agree with this position, you will want to consider one of the models presented for the purpose, especially the Model-Set-Entity pattern, which is discussed on this forum, since it manages to present a very OO-like appearance of the business entities and entity sets, aka collections, even though the underlying implementation uses a PDS.

But, if you think that is not very OO-like ... and it isn't in a number of ways ... then you will want to think in terms of PABLOs, Plain ABL Objects and those should be pretty close to your OO experience.

The CloudPoint ORM (developed by Progress Professional Services) that utilizes the aforementioned MSE pattern is 100% adherent to SOLID OO principles. It's not just very OO-like - it is completely OO. It definitively demonstrates that one can leverage PDSs while adhering to generally accepted OO principles and practices (i.e., SOLID).

On the other hand, putting aside the fact that simply adopting PABLO does not necessarily mean adherence to OO principles, testing has shown that this approach is at the very least 50 times slower and 3 to 5 times more resource intensive than using a PDS approach.

Phillip Magnay

Senior Principal Architect, NA Professional Services

Progress Software

Posted by Peter Judge on 02-Mar-2010 07:52

Phil,

On the other hand, putting aside the fact that simply adopting PABLO does not necessarily mean adherence to OO principles, testing has shown that this approach is at the very least 50 times slower and 3 to 5 times more resource intensive than using a PDS approach.

Can you clarify what "this approach" means? And is a "PDS approach" something like M-S-E? Or more traditional PDS-using-procedures?

Thanks,

-- peter

Posted by guilmori on 02-Mar-2010 08:02

pmagnay wrote:


On the other hand, putting aside the fact that simply adopting PABLO does not necessarily mean adherence to OO principles, testing has shown that this approach is at the very least 50 times slower and 3 to 5 times more resource intensive than using a PDS approach.

Yes, that is the kind of unacceptable performance we are seeing.

There is no performance problem in other OO languages.

When is this gonna be fixed?

Posted by Phillip Magnay on 02-Mar-2010 08:04

"This approach" means PABLO: where plain ABL objects are created and inserted into collections, and data from the database is moved into and out of these objects in those collections.

By "PDS approach", I was referring to CloudPoint's ORM M-S-E, not non-OO PDS approaches. However, non-OO PDS approaches will also be at least 50 times faster than the PABLO approach.

Phil

Posted by Phillip Magnay on 02-Mar-2010 08:23

Everyone that I am aware of who has tried the PABLO approach has experienced the same unacceptable performance.

As for supposedly "fixing" it, why is any such "fix" necessary?  As we have shown through developing M-S-E, PDSs can be utilized while remaining completely adherent to OO principles.

Phil

Posted by guilmori on 02-Mar-2010 08:46

pmagnay wrote:

Everyone that I am aware of who has tried the PABLO approach has experienced the same unacceptable performance.

As for supposedly "fixing" it, why is any such "fix" necessary?  As we have shown through developing M-S-E, PDSs can be utilized while remaining completely adherent to OO principles.

Phil

I am still eagerly waiting for the innards of M-S-E, so I may be making wrong assumptions here.

It seems to me that M-S-E adheres to OO principles, but only from the consumer point of view, i.e., until we go into the private definition of the business model, which I think is using mainly relational concepts.

Some say that since it's private, it's not important how the data is stored in memory (PDS). But I do not agree.

It is not a "do and forget" kind of class; there will probably be a lot of maintenance/improvement to be done inside these classes, and I don't like having to flip the OO/relational switch all the time.

Why can't we live in a full OO world?

But nonetheless, even if M-S-E fixes the performance problem, that is not a reason to discard the simplicity of the PABLO approach, which is widely used in other OO languages.

Posted by Phillip Magnay on 02-Mar-2010 10:11

guilmori wrote:

I am still eagerly waiting for the innards of M-S-E, so I may be making wrong assumptions here.

It seems to me that M-S-E adheres to OO principles, but only from the consumer point of view, i.e., until we go into the private definition of the business model, which I think is using mainly relational concepts.

Some say that since it's private, it's not important how the data is stored in memory (PDS). But I do not agree.

It is not a "do and forget" kind of class; there will probably be a lot of maintenance/improvement to be done inside these classes, and I don't like having to flip the OO/relational switch all the time.

Why can't we live in a full OO world?

Such an expectation is unrealistic.  Every ORM implementation must deal with relational structures at some point. That's why they're referred to as Object-Relational Mapping. The PDS in M-S-E is a private internal relational structure which is completely encapsulated. This and every other class in the pattern is fully OO-adherent.

But nonetheless, even if M-S-E fixes the performance problem, that is not a reason to discard the simplicity of the PABLO approach, which is widely used in other OO languages.

Sticking with PABLO while conceding that its performance is unacceptable when there is a significantly better performing, OO-adherent alternative is your choice. But then you must accept the adverse consequences of that choice.

Posted by guilmori on 02-Mar-2010 10:40

pmagnay wrote:

Every ORM implementation must deal with relational structures at some point. That's why they're referred to as Object-Relational Mapping.

I completely agree with this, but I think the ORM is best isolated in the data access layer, where the 4GL nature of ABL (compile-time access to data sources & ProDataSets) really shines.

The business layer should focus on a pure OO model, which in my opinion is better for modeling complex business logic and keeping its complexity from leaking into the code.

The PDS in M-S-E is a private internal relational structure which is completely encapsulated.

Yes, but it resides in the business layer... So you have another mapping to do to transform PDS records into class instances, no?

Anyway, I may be arguing with very foggy glasses, since I don't know all the details. I'll continue to wait.

Posted by Phillip Magnay on 02-Mar-2010 11:33

guilmori wrote:

I completely agree with this, but I think the ORM is best isolated in the data access layer, where the 4GL nature of ABL (compile-time access to data sources & ProDataSets) really shines.

The business layer should focus on a pure OO model, which in my opinion is better for modeling complex business logic and keeping its complexity from leaking into the code.

The PDS in M-S-E is a private internal relational structure which is completely encapsulated.

Yes, but it resides in the business layer... So you have another mapping to do to transform PDS records into class instances, no?

Anyway, I may be arguing with very foggy glasses, since I don't know all the details. I'll continue to wait.

You either accept the principle of encapsulation. Or you do not.  The private internal data structure of a class is not any other class's business.

Hardly justification for choosing the PABLO approach which you clearly conceded has unacceptable performance.

Posted by Thomas Mercer-Hursh on 02-Mar-2010 11:41

Reasonable people can differ about what they consider good OO.  Phil and I disagree on a number of points.  The CloudPoint M-S-E pattern is clearly a very thoroughly thought through pattern and should be looked at closely by anyone who is considering a PDS-based solution.  But, I do feel that there are aspects of it which are bothersome if one is trying to adhere strongly to OO principles.  For starters, having a business entity which encapsulates behavior, but derives its data from another object is disturbing given the basic expectation that entity objects will encapsulate both data and behavior.  The BE *appears* to hold the data from the perspective of a client object, which is good, but the data is actually somewhere else.  There are other issues I will cover in my full review.  They may or may not concern you.  There is certainly a lot about M-S-E which is interesting and well-considered.

On the other hand, putting aside the fact that simply adopting PABLO  does not necessarily mean adherence to OO principles

Of course not.  Neither does using M-S-E.  Within the M-S-E pattern there are clearly choices one can make and those choices can be either better or worse according to one's own OO principles.  We have established that we don't agree on all of those choices and others will have to make their own choices.

testing has shown that this approach is at the very least 50  times slower and 3 to 5 times more  resource intensive than using a PDS approach.

If such test data exists, I would very, very strongly encourage you to publish it so that we can evaluate the test, both to determine whether or not it represents the actual pattern one proposes to use and so we can evaluate whether the difference is meaningful.  I can readily think of several versions of how such a test might be done which would tend in this direction, but I can also think of other versions where I don't currently expect it.  I will eventually be doing some of my own testing, but if you've already done some, let's bootstrap the process and get the info out there.

I don't know if most readers are aware that there is much in good OO practice which negatively impacts performance.  E.g., working to loosely couple two subsystems or layers is likely to produce an interface that is less efficient than, for example, passing something structured like a temp-table.  But, loose coupling is good for maintenance, so there is a trade-off choice to make.

Posted by Thomas Mercer-Hursh on 02-Mar-2010 11:50

I have a whole family of "approaches" which might all be described as PABLO, some of which I would expect to perform less well than others, so it is important to clarify what is being tested.

I, and I expect others, would be very interested also in hearing your explanations for a 50X difference.  Take, for example, a task like reading an order and its lines from the database and repricing the lines using a new discount structure.  However this task is broken into layers and packaged, at some point in the process one is going to create one BE per line, execute a method, persist the data, and delete the objects.  The same amount of data will get read from the database.  The same number of BEs will get created (assuming M-S-E as the exemplar for the PDS version) and deleted.  The same methods will get run.  The same data will get persisted.  So, where exactly in the cycle does one get a 50X penalty or advantage?  Now, if you use a one-line TT in every BE as some have suggested in order to take advantage of the serialization methods, then I can see that creating all those TTs would have a big performance impact.  But that isn't my idea of a PABLO.
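To spell the cycle out (hypothetical class and method names), the per-line work looks like this, and it is the same cycle whichever pattern sits underneath:

DEFINE VARIABLE iOrderNum       AS INTEGER NO-UNDO.
DEFINE VARIABLE dNewDiscountPct AS DECIMAL NO-UNDO.
DEFINE VARIABLE oLine           AS CLASS OrderLineBE NO-UNDO.

FOR EACH OrderLine NO-LOCK WHERE OrderLine.Ordernum = iOrderNum:
    oLine = NEW OrderLineBE().                /* same number of objects created */
    oLine:LoadFrom(BUFFER OrderLine:HANDLE).  /* same amount of data read       */
    oLine:Reprice(dNewDiscountPct).           /* same method executed           */
    oLine:Persist().                          /* same data persisted            */
    DELETE OBJECT oLine.                      /* same number of objects deleted */
END.

So wherever a 50X penalty comes from, it has to be hiding in how one of those steps is implemented.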

Posted by Thomas Mercer-Hursh on 02-Mar-2010 11:55

In order to know what needs fixing and whether or not it needs fixing, we need to know where the performance hit is.  If there really is a 50X difference, that strongly suggests that something is wrong and needs fixing ... even if you feel that you have managed to program around it.

Posted by Thomas Mercer-Hursh on 02-Mar-2010 12:10

Every ORM implementation must deal with relational structures at some  point.

Every OO application which stores its data in an RDBMS has to deal with ORM, but every other OO language does so deep in the data layer.  M-S-E is persisting the relational model deeply into the business logic layer.  No matter how much you wrap that to make it appear like objects from the outside, it is not something that one would do in any other OO language.  That you have made it work well for you doesn't mean that everyone else should have to make the same choice.

This and every other class in the pattern is fully OO-adherent.

We know that you believe this, Phil, and I respect the strength of your convictions, but not everyone else sees things the same way you do, as evidenced by the difference in your and my view of Decorator.  Surely, it isn't hard to understand that someone might find the idea of an intensely relational structure firmly implanted in, and core to the functioning of, the BL layer disturbing from a traditional OO perspective.  Likewise, the idea of a "collection" which contains nothing and whose key implementation is in another object is not exactly traditional OO thinking, no matter how much it behaves like a traditional OO collection from the outside.  There needs to be room for other versions of "right", especially since only a tiny bit of M-S-E has been exposed to the public.

Imagine, if you will, one of the authors of AutoEdge singing the praises of what a wonderful exemplar of OERA it was, but the only thing published was one UML diagram and a few discussions ... no download and documentation in which we can judge for ourselves how good or bad an exemplar it is.

Posted by Phillip Magnay on 02-Mar-2010 12:24

tamhas wrote:

This and every other class in the pattern is fully OO-adherent.

We know that you believe this, Phil, and I respect the strength of your convictions, but not everyone else sees things the same way you do, as evidenced by the difference in your and my view of Decorator.  Surely, it isn't hard to understand that someone might find the idea of an intensely relational structure firmly implanted in, and core to the functioning of, the BL layer disturbing from a traditional OO perspective.  Likewise, the idea of a "collection" which contains nothing and whose key implementation is in another object is not exactly traditional OO thinking, no matter how much it behaves like a traditional OO collection from the outside.  There needs to be room for other versions of "right", especially since only a tiny bit of M-S-E has been exposed to the public.

Never have I claimed to have the one right approach. But not once (after countless requests) have you indicated a single OO principle that M-S-E is violating. Yet you continue to characterize it as not following OO and on the other hand claim that PABLO is. No matter how many times you repeat something, it doesn't make it true. 

You are entitled to your opinions but people are also entitled to know why you need to portray every other solution except yours as somehow lacking in OO.

Posted by Thomas Mercer-Hursh on 02-Mar-2010 12:25

Sticking with PABLO while conceding that its performance is unacceptable  when there is a significantly better performing, OO-adherent  alternative is your choice. But then you must accept the adverse  consequences of that choice.

We may be stuck with the adverse consequences, but there is no reason to accept it as necessary.  For starters, we need to know what it is that produces that difference.  There might be lots of other ways to solve the problem which would be more aesthetically pleasing to some of us.

The real key issue here is understanding where the performance hit is coming from, whether or not it is understandable, and whether or not there are any reasonable alternatives.  Think of the problem Tim Kuehn was having with too many temp-tables.  Once that problem got properly exposed, two things became apparent.  One was that the immediate cause of the problem was a result of the way in which TTs were used in the application, something unlikely to be experienced by most other people and something for which there were not only alternatives, but one could argue the alternatives were a better design.  The second was that options were revealed by which substantial tuning could occur which dramatically improved the situation.

We need a similar discovery process here.  50X is simply not reasonable given that one is basically creating the same number of objects with the same amount of data.  Where is the performance difference coming from then?

E.g., if it were unusually expensive to create an object, I can see that a PDS approach which simply operated directly on the lines of the PDS might be a lot faster than one that needed to create and delete an object for every line.  Were that the case, then I would say that PSC hadn't actually yet given us an effective OO implementation.  But, that can't be the case with M-S-E because you are creating the same number of line objects.  So, where is the hit?

Note that I can imagine there is a performance issue with an application which needs a large number of collections since right now that implies a large number of TTs, if one is going to have a real collection and not a pointer.  That might create a good argument for enabling PLO fields in work-tables to lessen the overhead when the collection was a simple one.  That is something we can understand, test, and lobby for.

Simply claiming M-S-E is 50X faster than PABLO isn't sufficient.

Posted by Phillip Magnay on 02-Mar-2010 12:29

It's what we found.  Apparently Guillaume also found it unacceptable.  So have others.

Perhaps you could send me your PABLO solution and we'll redo the testing on it.

Posted by Thomas Mercer-Hursh on 02-Mar-2010 12:36

You either accept the principle of encapsulation. Or you do not.  The  private internal data structure of a class is not any other class's  business.

Encapsulation is an appropriate property of an OO solution.  It does not, however, justify putting anything you want inside an object and deciding that it is OK because it is encapsulated.  E.g., it clearly would not be good OO to put direct database reads inside an object in the BL no matter how invisible that read was to anything outside the object.

Conversely, one can't help but observe that an M-S-E BE does *not* encapsulate the data and behavior of a BE as one would expect it to from traditional OO design, but rather the data remains in the Model.  You make it appear that the data is in the BE to any client of the BE, which is good, but the data isn't really there.

Posted by Thomas Mercer-Hursh on 02-Mar-2010 12:47

Never have I claimed to have the one right approach. But not once (after  countless requests) have you indicated a single OO principle that M-S-E  is violating. Yet you continue to characterize it as not following OO  and on the other hand claim that PABLO is. No matter how many times you  repeat something, it doesn't make it true. 

I am sorry that it is taking me so long to publish my analysis.  It seems pretty clear, however, that when I do, you are unlikely to agree with it and so I am not sure that the situation will have changed much.  This seems evident from our continuing disagreement over what the GoF Decorator pattern is supposed to be used for, regardless of any question of the use of that pattern or construct in M-S-E.  I have, in fact, made some of my points in this very discussion, including the relational structure in the BL, the data not being in the BE, and the core implementation of the ES being in the Model instead of the ES.  I believe I have previously remarked on the lack of normalization in the use of Model accessors in the pre-10.2B implementation, which you have now improved.  At this point, I don't expect you to agree with any of these points.  That doesn't make it wrong for me to point them out.

For other people, one of the biggest problems is that they have next to no information on M-S-E.  I only have a smidgen because of our off-list discussions.  I will be trying to document that for others when I can get to my whitepapers, but really, if M-S-E is to be held up as an exemplar for how to do OO in ABL effectively, it needs to get documented and disseminated.  How is Guillaume, for example, to know whether he likes it or not?  All he has is one UML diagram, two threads on PSDN, and your assertion that it is OO squeaky clean and 50X faster.  That is a lot to swallow without more details.

Posted by Thomas Mercer-Hursh on 02-Mar-2010 12:48

Without so much as a description of the test, how would I send you anything?

Posted by Phillip Magnay on 02-Mar-2010 13:09

tamhas wrote:

Without so much as a description of the test, how would I send you anything?

Let's start with something simple. Just send your current PABLO BE implementations for Customer (incl. 2 or more sub-classes), Order (incl. 2 or more sub-classes), and OrderLine (incl. 2 or more sub-classes) that map to the tables of the same name in the sports2000 database.

The test that will be re-run using your implementations (instead of ours) will consist of

1) Processing all the 1122 BEs corresponding to records in the Customer table.

2) For every existing Order BE associated with each Customer BE, a new OrderLine BE will be added (random item, qty, discount), an existing OrderLine BE will have an additional 5% discount added and its extended price recalculated, and another existing OrderLine BE will be deleted.

3) For every Customer BE, a new Order BE (random subclass) will be created with 3 OrderLines BE (different sub-classes, random item, qty, discount).

4) Each Customer BE will have its Balance updated according to the changes in the Order and OrderLine data.

5) All changes to the BEs to be persisted in the database.

When can you send this out?

Posted by Thomas Mercer-Hursh on 02-Mar-2010 13:43

There are quite a few things unspecified here.

What is the basis for the subclasses in each case?  I haven't looked at the sports2000 schema in a long time, since it is so painfully simplistic, but presumably to be any kind of reasonable test, we should be using the same subtyping model.

Second, it isn't actually very meaningful for me to send you just BEs.  At a minimum I am going to have to build data access components, a messaging system, a BL factory for instantiating the BEs, and a client to do the processing.  At last count I think I had 11 different ideas about how to do the messaging, so I have a fair amount of experimenting to do before I am ready to publish.  I'm also going to need a few more details.

I also have some questions about the test.

It seems as if this is a test in which a relatively large number of BEs are created that are not accessed or used in any way.  This, of course, could have performance implications and could be an indicator for a lazy load pattern.  Did you try that in your testing?

I presume that you are using sports because it is ubiquitous, but I have a problem about tests based on sports because they can't possibly reflect what happens in real applications.  I have been working on building some models corresponding to the kind of complexity represented in my Integrity/Solutions application and they are just massively more complex than anything one could do in sports.

Likewise, I would assume that you are doing all orders for all customers in order to create a substantial body of work so there is a meaningful volume to measure, but as a result the test is about something that doesn't correspond to what happens in real world applications, at least not typically.  In particular, I would say that typical operational patterns were more about reading an order and doing a bunch of work on it and then putting it back rather than reading large numbers of orders and doing a very small amount of work on each one.  Yes, it happens, but it isn't typical of day to day performance.

In particular, were you to create a PDS with customers, orders, and lines all in the same dataset, fill that from the database in a single operation, and then process on that data set, I would expect that to be significantly more efficient than creating a separate object for everything, most of which receive no processing and even the few which do receive very little.  I would expect, for example, that if I created a data layer which had the same comprehensive PDS and filled that in a single operation and then built the BEs from that, it would be more performant than separate reads for every BE.  That is a data layer optimization which one might decide to include in an application if there are sufficiently frequent contexts for doing such a mass update.  The combination of a single FILL for the PDS in the DL and lazy load for instantiation of BEs which were actually going to be processed is likely to have a big impact.

Conversely, if the typical application task was a single order, its lines, and something about the associated customer, i.e., one line in both the top two tables of the PDS, then the PDS is a rather heavy solution for something which is only going to take a handful of objects to represent.

So, not only is this not a question of my just whipping up a couple of classes, but I have some fairly significant questions about the significance of the test.

Posted by Phillip Magnay on 02-Mar-2010 14:05

Then define the test however you would like.  As long as it is a reasonably realistic scenario, exercises all the relevant concepts (BEs, sub-classes, BEs in association, collections/sets of BEs, processing using polymorphism, O-R mapping, data persistence, protection of referential integrity), uses an adequately large volume of data, and can be easily repeated for the different options.

Posted by Thomas Mercer-Hursh on 02-Mar-2010 14:26

I have been talking for a couple of months about defining such a thing, starting with a reasonably enterprise-class model of an Order and Customer.  That would provide a foundation for at least being able to compare modeling approaches, e.g., your layering approach in M-S-E versus generalization and delegation.  Then, for testing, what I would like is not one big stand-alone end-to-end test, but a series of tests which each relate to some particular type of operation which one might encounter in a real application.  Then, one would discover if there were significant task-to-task differences in the performance impact of various approaches.  I would also do a lot of micro tests, i.e., very focused tests on alternative approaches to doing one specific thing such as the DL to BL interface.  This would allow choosing a strategy based on a combination of performance metrics and the OO appeal of the particular approach.  I wouldn't advocate any approach blindly based only on "purity".

But, this is clearly not something I can whip up over the weekend.

Posted by Admin on 02-Mar-2010 16:12

Why can't we live in a full OO world?

The ABL is highly specialized in working with relational data. I see it as a large advantage - rather than a disadvantage. There are other programming languages available for those that want to get rid of relational structures in code.

Posted by Thomas Mercer-Hursh on 02-Mar-2010 16:21

So, we should just forget all this OO stuff, except as needed to deal with .NET?

Posted by cwills on 02-Mar-2010 16:56

I think OOABL has its uses, particularly in the User Interface and Business Logic Layers.

But I do not think it's particularly good at modeling/encapsulating business entities.

I've tried to implement OOABL business entities and I concluded that it's kind of a square peg, round hole problem.

Mike is right, we should embrace the relational data features of ABL rather than rejecting them.

OOABL is not Java/C#...

Posted by Admin on 02-Mar-2010 17:05

tamhas schrieb:

So, we should just forget all this OO stuff, except as needed to deal with .NET?

Not at all! Even though I'll never get used to the "new" name of the language, I like the fact that the B stands for business as in business logic. The ProDataSet (and other relational structures like the FOR EACH etc.) are a fundamental part of the language. I'd never disallow them from the business logic in this language.

Posted by Thomas Mercer-Hursh on 02-Mar-2010 17:10

Neither would I ... in their place.  But, I'm also not going to let them dictate the whole usage pattern.

Posted by Phillip Magnay on 02-Mar-2010 17:15

cwills wrote:

I think OOABL has its uses, particularly in the User Interface and Business Logic Layers.

But I do not think it's particularly good at modeling/encapsulating business entities.

I've tried to implement OOABL business entities and I concluded that it's kind of a square peg, round hole problem.

Mike is right, we should embrace the relational data features of ABL rather than rejecting them.

OOABL is not Java/C#...

One of the primary motivating factors behind developing M-S-E was to debunk this false either-or choice between traditional procedural/relational ABL and the purist OOABL that rejects any relational elements. We tried multiple different approaches and we simply found that the purist OOABL approach is not viable in terms of performance or resource utilization. However, in the process we did determine that it is possible to leverage relational structures such as PDS while maintaining strong OO principles. 

So I definitely agree with you that OOABL is not Java or C# and therefore we should not reject these very powerful relational features. That said, it is definitely possible to develop OOABL business entities which leverage these features and adhere to accepted OO principles.

Posted by Phillip Magnay on 02-Mar-2010 17:16

OK. Let us know when you are ready.

Posted by Thomas Mercer-Hursh on 02-Mar-2010 17:38

And I have to agree with Phil here that M-S-E does a very good job of presenting BEs and ESs (collections) which look OO-like to their client objects.  They have done that part really well, better I think than any of the other PDS-based patterns.  This is obviously a pattern built by someone who cares about and values OO for its virtues.  They have worked very hard at improving the design and the most recent version has eliminated one of my earlier objections.

Posted by guilmori on 02-Mar-2010 17:41

mikefe wrote:

The ABL is highly specialized in working with relational data. I see it as a large advantage - rather than a disadvantage. There are other programming languages available for those that want to get rid of relational structures in code.

Fine. Then PSC should make this clear, and stop telling us to use OOABL in the same way we use other OO languages.

Posted by Phillip Magnay on 02-Mar-2010 17:51

Who told you to unthinkingly use OOABL exactly the same way as other OO languages? And even if this were the case, why blindly follow any such direction? I'm sure you are capable of independent thought. And responsible for your own decisions.

Posted by guilmori on 02-Mar-2010 18:00

pmagnay wrote:


We tried multiple different approaches and we simply found that the purist OOABL approach is not viable in terms of performance or resource utilization.

I'm still wondering why we can't expect improvement in the language, instead of *patching* (no offense Phil) around the problem?

I admit to being a purist OOABLer, but I really don't like not being able to implement this purism in my code.

My point is I do not want to argue over relational vs. OO in the business layer; I should be able to choose the way I prefer to model.

In C#, I have the option to use a relational or OO model, with no perceivable performance overhead on a reasonable amount of data.

Posted by cwills on 02-Mar-2010 18:17

Just as a side note - there is a noticeable lack of example code on this forum.

A lot of discussion/babble, but not much code. Particularly from the most frequent poster/s?

“Example isn't another way to teach, it is the only way to teach”

- Albert Einstein

Posted by Thomas Mercer-Hursh on 02-Mar-2010 18:18

And, more to the point in the interim, I should be able to identify exactly what the problem is so that, if I have to compromise my pure design, I can do it targeted to the specific problem and change that specific workaround when and if it is no longer a problem.

From your description of the test, it sounds like both M-S-E and PABLO need to create one each of all customer objects, one each of all order objects, one new line object per order, and instantiate two existing line objects per order, one to delete and the other to modify.  So, other than the Model which M-S-E adds to this, it seems like both are creating the same number of objects.  If so, one would assume that object creation per se is not the problem.  If you are creating every line for every order in the PABLO version, then the issue isn't PABLO performance, but a non-comparable test.  If the problem is that your PABLO objects were using a one-row TT, then we need to rerun the test with PABLO objects using properties.
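For the record, the two PABLO flavors I am distinguishing look like this (illustrative only):

/* Flavor A: state held in a one-row temp-table; every NEW pays for a temp-table */
CLASS OrderLineTTStyle:
    DEFINE PRIVATE TEMP-TABLE ttLine NO-UNDO
        FIELD Ordernum AS INTEGER
        FIELD Linenum  AS INTEGER
        FIELD Price    AS DECIMAL.
    /* accessors read and write the single ttLine row */
END CLASS.

/* Flavor B: state held in plain properties; no temp-table involved */
CLASS OrderLinePropStyle:
    DEFINE PUBLIC PROPERTY Ordernum AS INTEGER NO-UNDO GET. SET.
    DEFINE PUBLIC PROPERTY Linenum  AS INTEGER NO-UNDO GET. SET.
    DEFINE PUBLIC PROPERTY Price    AS DECIMAL NO-UNDO GET. SET.
END CLASS.

If the tested PABLO objects were Flavor A, the per-instance cost of the temp-table could easily dominate the result; Flavor B is the version the test ought to be rerun against.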

Likewise, if you are reading all the data for M-S-E with a single FILL() and the PABLO version is reading each record with a separate read, then the problem is that the test is not comparable.

With more information, we can decide whether we have ideas for "fixing" the problem without going the PDS in BL route.

Posted by Phillip Magnay on 02-Mar-2010 18:20

You just missed a perfect opportunity to lead by example.

Posted by guilmori on 02-Mar-2010 18:30

I do not like the adjectives you added to my sentence, but this has been mentioned in this thread.

Please Phil, tell me what makes OOABL so different from other OO languages, apart from missing functionality and poor performance?

I do not want to start a war, but I do not see why the OOABL should be treated so differently.

Posted by Thomas Mercer-Hursh on 02-Mar-2010 18:32

I agree that sample code is needed.  Indeed, one of my (probably vain) hopes is that if I defined a good solid enterprise-class type example, multiple people would implement it so that we could do real head-to-head comparisons.  Unfortunately, I'm not there yet and I admit I have a lot of testing and experimentation to go through before I get to a position where I am sure specifically what I want to recommend.  But, I will be sharing the testing and ideas as I go along.  If I had a specific customer to do the work for, it would naturally get prioritized, but I'm trying to do a bunch of things at the same time at the moment.

Posted by Phillip Magnay on 02-Mar-2010 18:39

Java is different from C#... which is different from C++... which is different from SmallTalk... which is different from .


Each has its respective strengths and weaknesses; very real differences.

So why should OOABL be expected to have absolutely no differences with these other languages?

Posted by cwills on 02-Mar-2010 18:42

I will definitely post example code if I have something to offer.

In fact, if you scroll to the bottom of this thread, you will find a small example that I posted in reply to Oliver Foggin's original post before this thread was hijacked.

I just think some example code would better present your different ideas and help remove any ambiguity/confusion.

Particularly in regard to this ongoing ORM discussion.

Posted by Thomas Mercer-Hursh on 02-Mar-2010 18:53

Each is different, but there is also a lot of similarity.  None of them is going to do relational stuff in the BL.  None is going to encapsulate logic in one place and have it access data in another object.

I have said repeatedly that the best OOABL is probably going to have its own flavor, just as each of these languages has its own flavor.  What Guillaume seems to want is the ability to program in OOABL the same way he could in one of these other languages ... not exactly, of course, but equivalent.  On top of that base, he could then find ABL-specific patterns to make it better.  I can understand why he doesn't like the idea of having to go to another pattern from the start because the pattern he wants to use doesn't perform.

Posted by Phillip Magnay on 02-Mar-2010 18:55

Great. But just keep in mind: an open public forum like this can and will go anywhere. And the beauty is: if you don't like where it's going, you can move on and start another thread.

Posted by guilmori on 02-Mar-2010 19:03

Sure, there are many differences: syntax, IDE, static vs. dynamic typing, object vs. primitive types, value vs. reference, multiple inheritance, JIT compiler, countless libraries so you don't reinvent the wheel, ...

But not being able to instantiate a reasonable number of instances because of performance issues is not a difference OOABL should be known for.

Posted by Phillip Magnay on 02-Mar-2010 19:09

If the ideal that some people want is not there, then there is always the option to go over to the ERS and submit enhancement requests and wait.

Some people have the luxury to wait for their ideal to come along. I do not. I have to deal with the imperfect in the here and now.

Would I prefer if OOABL has some built-in ORM capabilities that performed as well as existing relational elements? Sure - who wouldn't. But such features do not exist and are not likely to in the short term.

So one can complain and wait. Or use the ERS and wait. Or simply deal with an imperfect world and get on with things.  I'm sure you can guess what I prefer to do.

Posted by guilmori on 02-Mar-2010 19:26

pmagnay wrote:

If the ideal that some people want is not there, then there is always the option to go over to the ERS and submit enhancement requests and wait.

Some people have the luxury to wait for their ideal to come along. I do not. I have to deal with the imperfect in the here and now.

Would I prefer if OOABL has some built-in ORM capabilities that performed as well as existing relational elements? Sure - who wouldn't. But such features do not exist and are not likely to in the short term.

So one can complain and wait. Or use the ERS and wait. Or simply deal with an imperfect world and get on with things.  I'm sure you can guess what I prefer to do.

We did *complain* about the performance issue and other OO features, directly in person to Salvador Viñals and Ken Wilner at PSDN Live in Boston, and also sent many performance test samples to Tech Support. It's been almost 2 years now, and there is nothing in the roadmap...

We are really wondering where is OOABL in the priorities, except for the new .Net UI.

Posted by Phillip Magnay on 02-Mar-2010 20:07

Priorities are just that: priorities.  If what you have requested has not been forthcoming, then the people who decide these things didn't see it as a high priority. You either didn't make your case effectively or what you requested simply didn't have widespread support.

You must deal with that reality.  You can just continue to complain and wait for something that may never come. Or you can move on and work with the functionality that is currently there.  I personally see no point in just complaining and waiting.

Posted by Thomas Mercer-Hursh on 02-Mar-2010 21:36

Like you, I advocate making the best of what is available right now.  But, you seem to have more faith in the system for getting customer input into priorities than anyone outside of PSC has.  I know that PSC keeps telling us that they pay attention to the ERS, but there are many examples right there within the ERS where it is clear that an idea has been dismissed without understanding, not after deep study, but just based on confidence in the status quo.  The reality is that much of the innovation is driven by a combination of input from development and a few major partners.  Yes, it is possible for individuals or small groups to make better business cases than they often manage to make ... take the ChUI browse thing for example ... but how many people out there in the trenches know about the politics of building a strong business case?  To them, the idea is obvious and they can't understand why everyone doesn't get it ... especially since there are a lot of other people who do get it, just not the people who matter.

This is a very complex area.  Tim Kuehn is absolutely convinced that deferred instantiation of TT should be a high priority because he has experienced a big problem with it.  Other people think that the architecture which caused it is the primary problem and, while it might be a nice idea, it isn't that important.

If most of the people trying to use OO are doing some flavor of PDS-based patterns, then not many people are trying to use a PABLO approach.  So, if there is an inappropriate performance problem getting in the way of the PABLO approach, most people aren't going to notice or care.  But, one should, at least, identify the problem.

Clearly, using one row TTs as the base for a simple object is a problem because a TT is too heavyweight a construct.  Appealing in a way, but inappropriate.  We have figured that out.  Maybe work-tables would be better if they supported PLOs and/or had WRITE-XML functionality ... maybe not.  It is an answerable question.  Maybe WRITE-XML on object properties would address the issue?

Is there a performance issue about instantiating a lot of objects?  If so, surely that is a problem for everybody at some point?  Is it intrinsic, like the TT thing, or something unexpected compared to .ps?  Is it something that different designs can minimize?  Is there some coding practice which makes or breaks the problem?  These are answerable and interesting questions, but it can be very hard to get attention on them.

Posted by Admin on 02-Mar-2010 23:57

Fine. Then PSC should make this clear, and stop telling us to use OOABL in the same way we use other OO languages.

I think that's ridiculous to say. OOABL is different and needs to be used differently.

Some key differences:

- The gaps in the language in some areas

- The fact that widgets and database objects aren't real objects and can't be inherited from

- The whole procedural legacy

Posted by ojfoggin on 03-Mar-2010 03:48

Thanks for that!

I'll have a read through!

Posted by guilmori on 03-Mar-2010 07:23

And these are differences we should be proud and happy to work with?

Maybe you could try to list strengths that ABL has in its OO implementation versus others?

Is replacing a collection of instances with a single instance containing temp-table rows of data a new revolution in the OO world?

Posted by Admin on 03-Mar-2010 07:42

And these are differences we should be proud and happy to work with?

I was speaking of differences one needs to be aware of. Differences that will probably still be there for a while. Differences that make it questionable for me to adapt ANY OO design pattern to OOABL designs.

Maybe you could try to list strengths that ABL has in its OO implementation versus others?

I don't see OOABL on its own. I see OOABL in the context of the whole ABL - including the procedural stuff. And the strength of the ABL is the ease and efficiency of working with relational data. I personally don't want to miss that. At any layer.

I accept many potential weaknesses in other fields because of that.

Posted by Thomas Mercer-Hursh on 03-Mar-2010 11:07

- The gaps in the language in some areas

This was a lot more of a consideration in 10.2A than it is now.  Yes, there are still things on my wish list, but, by the same token, there are things in Java 5 that were not in Java 1, but it didn't stop people from writing applications back then.

- The fact that widgets and database objects aren't real objects and can't be inherited from

That is unfortunate and something I have been regretting for 15 years, if only because of the keyword pollution it has produced.  But, I think it is mostly an issue with the UI and I think we have lots of choices for that now.  Are many people really going to be writing new applications with the ABL GUI?

- The whole procedural legacy

Given an existing application, yes, one can't push a button and make it suddenly OO any more than one can make any other major architectural transition.  But, there is no need for it to contaminate new code.

Posted by Thomas Mercer-Hursh on 03-Mar-2010 11:23

Most 3GL OO applications store their data in relational databases and yet find it beneficial to map that to objects for doing actual processing.  What is it about OOABL that is different which means we shouldn't be doing the same thing?

Now, I understand that there are areas like reporting where one wants to massage a large body of data at one time and a TT or PDS is a good vehicle for that.  I have talked in the past about a possible need for "set" objects containing such data.  This is not unlike what one might do in another OO language with an array of struct or a matrix.  But, these seem to me to be cases where the data in the TT or PDS is not a holder for data which will be used in BEs, but rather where the object containing the TT or PDS is itself the lowest reducible entity.

Some PDS-based patterns operate directly on the data in the PDS.  M-S-E virtuously avoids doing this by only operating on the data in the context of a BE, thus more closely conforming to the OO approach even though the BE is referencing back to the data in the PDS under the skin.  Are you suggesting that this effort at OO conformance is misplaced?

Posted by Shelley Chase on 03-Mar-2010 14:17

Hi Oliver,

I am the Architect that designed OOABL so hopefully I can give you some answers. It seems that your original question has sparked discussion in many different directions, but I will focus on your first question.

I come from a purely OO background (C++, Java, C# mainly). When I joined OpenEdge 14 years ago, the language was procedural and late-bound and many of our customers wrote modular code with proper reuse and encapsulation. While the late-bound nature of the language allowed extreme flexibility it also had the potential for many runtime errors. Customers also reported that the procedure super stack was often hard to debug and at times resulted in unexpected behavior.

Along comes OOABL. The key goal of OOABL was to provide standard OO syntax and behavior that could be "easily" used by object-oriented developers and would work alongside procedural ABL. OOABL was purposely not made as a replacement for procedural ABL but as a technology that could be used with procedures. We are fully committed to both OOABL and ABL. There are still some key items on our OO roadmap that have not been completed and we are prioritizing those along with all other new ABL features.

ABL has had its success through a powerful language which can be used inside of procedures and classes. The data-centric nature of the language, its built-in objects such as ProDataSets and automatic transactional scoping are some of ABL's key strengths. One line of ABL code results in many actions under the covers. Language features such as the ability to FOR EACH through data have been in ABL for many years and are among the true strengths of the ABL. Nothing will be faster than using those built-in objects directly. The model you are describing is certainly achievable in OOABL.

As far as performance, we are always looking for ways to improve the performance of all aspects of our language and have engineers dedicated to performance. We have information from our own performance testing as well as customer reported issues such as some of the ones mentioned in this thread.

If you have any further questions, feel free to ask.

Thank you.

-Shelley Chase

Posted by Thomas Mercer-Hursh on 03-Mar-2010 14:48

There are still some key items on our OO roadmap that have not been  completed and we are prioritizing those along with all other new ABL  features.

I don't suppose you would like to share some of what we can look forward to in 11.0A?

As far as performance, we are always looking for ways to improve the  performance of all aspects of our language and have engineers dedicated  to performance. We have information from our own performance testing as  well as customer reported issues such as some of the ones mentioned in  this thread.

It would be very interesting if we could know what performance problems PSC has identified so that we know at least that you have noticed.  If these are associated with qualifications and remedies, like the TMTT problem, that would also be good to document.  Likewise, knowing that you were working on a particular problem and had some idea how to fix it would be very valuable information.

Posted by cwills on 03-Mar-2010 17:22

>Who told you to unthinkingly use OOABL exactly the same way as other OO languages? And even if this were the case, why blindly follow any such direction? I'm sure you are capable of independent thought. And responsible for your own decisions.

I think Guillaume is referring to using OO design patterns - which are supposed to be general enough to be independent of the implementation language.

As someone mentioned earlier, when you search for a particular OO design pattern on Wikipedia you will usually see numerous example implementations of the pattern in different OO languages.

This also applies to Data Access & ORM design patterns (Active Record, Repository, etc.).

Some of these design patterns do not apply very well to OOABL, either due to limitations in OOABL syntax or its performance.

Posted by Phillip Magnay on 03-Mar-2010 17:52

As you mention, these design patterns are independent of implementation (or at least should be). These implementations may - and probably will - vary from language to language including the OOABL. Sometimes that variation may be quite minor between two given languages for one design pattern. Perhaps that variation will be more significant in another case.

The key step is figuring out the specific OOABL implementations for these design patterns, and identifying which are troublesome or force significant variation. Some of these OOABL implementations will be straightforward and will look very similar to the respective implementations in other languages. Others may not be so straightforward for reasons such as syntax limitations and/or efficiency/performance as you point out. This is the reality. And the exact reason why I pointed out that you shouldn't unthinkingly use OOABL the same way as other OO languages. I just think we need to recognize and accept this reality, and formulate OOABL-specific solutions rather than just wishing that OOABL was simply the same as other OO languages. It's not. So let's move on.

Posted by Thomas Mercer-Hursh on 03-Mar-2010 18:40

I just think we need to recognize and accept this reality, and  formulate OOABL-specific solutions rather than just wishing that OOABL  was simply the same as other OO languages. It's not. So let's move on.

I'm all for doing what we can with what we have.  I'm even in favor of figuring out a work-around and getting on with the work (e.g., my 10.1A fake for a singleton).  I am especially in favor of identifying ways in which we can do better than other languages because of ABL features.

But, I also believe that we should recognize when something isn't right and prioritize fixing it.  This can mean different things.

E.g., for the TMTT problem, I think there is now a pretty good understanding about what causes it, what one can do to make it better, and what kind of limits are relevant for it becoming a problem.  That's all good.  There *might* be an avenue to improve things through lazy instantiation, but I don't think that will help most people since I think they are probably using most tables they define.  There might be an avenue to improve things with some new parameters, e.g., preallocating DBI space.  And, there might be a possibility of revitalizing work-tables for some purposes, recognizing that not everything needs an index.

If there are language limitations which are preventing people from using traditional OO patterns, let's figure out what those are and give people an opportunity to voice their interest.  E.g., I'd vote for true generics in the OOABL because I think they can be very handy for frameworks, although I would tend to discourage them in regular code.  Mostly, I think that language features have been coming along at a reasonable pace and a lot of what I once asked for I now have.  Collections and maps seem like a prime area, although it might be that simply adding PLO support in work-tables is all that we really need.  Then we could use a TT when order mattered or a WT when it didn't and have something lighter weight.

If there are performance problems that are preventing people from using traditional OO patterns, let's figure out why those problems exist.  Is it something fundamental which we are likely not to overcome or is it something that just needs paying attention to?  It doesn't seem like it should be that difficult to do a little testing and discover what those problems are.  E.g., is it significantly more expensive to instantiate a class than a .p?  If so, then I would say there was something wrong that needed fixing.

Posted by Shelley Chase on 03-Mar-2010 18:54

We are still in the process of prioritizing features for 11.0. At the top of the OOABL list are interface inheritance and dynamic invocation of properties. Other items on the roadmap (in no particular order) are full reflection, running a .cls file on startup, remote objects, and shadowing of data members. These are being prioritized along with multi-tenancy and RIA features.

As far as the performance issues, I do not have the details but I do know that we have engineers dedicated to performance benchmarking and improvements.

-Shelley

Posted by Admin on 03-Mar-2010 23:32

Some of these design patterns do not apply very well to OOABL, either due to limitations in OOABL syntax or its performance.

... or because they seem to limit the access to a lot of the power of the language. The ABL is a language to work with relational structures. And it does that very, very well.

Posted by Phillip Magnay on 03-Mar-2010 23:37

Which specific design patterns have you determined to have this issue?

Sent from a traveling Blackberry...

Posted by Admin on 03-Mar-2010 23:37

I just think we need to recognize and accept this reality, and formulate OOABL-specific solutions rather than just wishing that OOABL was simply the same as other OO languages. It's not. So let's move on.

And I think it's also good to recognize some of these differences (certainly not the performance issues or the language limitations) as something very, very powerful that would get lost if blindly applying every OO pattern that fits other languages well.

Posted by Admin on 03-Mar-2010 23:47

Which specific design patterns have you determined to have this issue?

Obviously all those where using a ProDataset and other relational constructs like Query against temp-tables in the business logic seems to become a red flag for some people around here.

Posted by guilmori on 04-Mar-2010 07:36

pmagnay wrote:

Which specific design patterns have you determined to have this issue?

Martin Fowler's Domain Model(116), in conjunction with Data Mapper(165).

The M-S-E model seems closer to the Table Module(125) approach.

Posted by Phillip Magnay on 04-Mar-2010 07:42

M-S-E provides a complete domain model of business entities and sets of business entities, separate from yet mapped to the underlying data model. It is not anywhere close to Table Module.

Sent from a traveling Blackberry...

Posted by guilmori on 04-Mar-2010 07:44

Hello Shelley,

No built-in library of lightweight collection classes in the pipe ?

Do you expect us to use temp-tables to hold a list of instances?

Posted by guilmori on 04-Mar-2010 07:55

pmagnay wrote:

M-S-E provides a complete domain model of businesses entities and sets of business entities, separate to yet mapped from the underlying data model. It is not anywhere close to Table Module.

Ok great.

Now, seriously, can't you give us an idea when it's going to be published ?

Posted by Phillip Magnay on 04-Mar-2010 08:12

We are currently using it on a number of consulting projects. It will not be freely available until after those projects are completed.

Sent from a traveling Blackberry...

Posted by Phillip Magnay on 04-Mar-2010 08:12

And this is part of my larger point.

Both OO skeptics and the OO purists are only using half the language.

The OO skeptics are rejecting the power of the OO side of the language because they believe that using OO means they'll lose the power of the relational part of the language. It doesn't need to mean that at all.

The OO purists are rejecting the power of the relational side of the language because it doesn't pass their personal OO purity tests. OOABL is not Java or C#, and nor does it need to be.

I want to use the power of both OO and relational sides of the language without compromising the power of one side over the other. We've shown that this combination is not only possible but greater than the sum of the two sides.

Sent from a traveling Blackberry...

Posted by Admin on 04-Mar-2010 08:27

Phil - I'm 100% with you!

I'm not an OO sceptic. I'm heavily using OO myself for almost every new bit of code. I'm just sceptical of OO purism.

Posted by Mike Ormerod on 04-Mar-2010 08:29

I think it's important not to lose sight of what it is people are trying to achieve, which at the end of the day is a business solution to a business problem.  As already mentioned a couple of times during this thread, the ABL is one of the few, if not the only, languages that allow you to mix & match the worlds of procedural & OO, therefore allowing you to choose the appropriate approach & techniques for the business problem in hand.  I find it interesting that people refer to OOABL; technically there is no such thing, there is just the ABL which, as I say, allows you the choice of an approach.  So whilst it's interesting to have discussions that can sometimes border on the theoretical, the main point is to solve the business problem in the best & most efficient way, be that via OO techniques, procedural ones, or a mix.

In relation to the comments about samples/examples etc, I wanted to let you know that we are currently working on a project to bring some new OO-based reference materials that follow the OERA and that will be accompanied by an extension to the existing AutoEdge story.  The plan is to introduce the Vendor/Supplier side of the business case both as a standalone deliverable and as something that can be easily integrated with the existing Test Drive use-cases.  We're close to wrapping up some of the initial design work, so as soon as we have it in a state we can share, I'll let you know.  But the aim of the project, just as with the original AutoEdge, is that all the design work, code, etc. will be fully available for dissection, discussion, (hopefully) use, and no doubt the occasional point of disagreement.

Posted by Thomas Mercer-Hursh on 04-Mar-2010 11:20

What OO design pattern limits one's ability to work with data in any way that is appropriate?

You make it sound as if you think that relational is the natural form of data and that one is twisting it to represent it in an OO way.  Consider a collection of entities which are of two or more subtypes.  To properly represent that relationally, i.e., in a normalized fashion, requires one subtable per subtype that has its own properties and a type indicator to tell which one to use (one might avoid the type indicator, but it is almost always used).  Compare that to a collection in which each entity simply *is* its subtype.  The relational model is more natural?

We ABLers are just used to looking at things in relational terms because RDBMS have established themselves as the de facto standard for *storing* data, not because it is the best way to *process* data.

Posted by Thomas Mercer-Hursh on 04-Mar-2010 11:43

And I think it's also good to recognize some of these differences (certainly not the performance issues or the language limitations) as something very, very powerful that would get lost if blindly applying every OO pattern that fits other languages well.

Blindly applying is not something one should ever do with any pattern.  The whole idea is contrary to the entire pattern movement.

Let's try to illuminate these supposed advantages.  I'll start out by agreeing that there are some very cool things that one can do in ABL at the point of interface to the database.  I am gleeful not to have to do all that in SQL, although there are also times when I wish there was a SQL connection object that one could use for certain kinds of queries so that one could get the benefit of the optimizer.  Better yet, give us the run time optimization and table scans in ABL and I might not even want that any more.

But, let's consider data in the BL.  First, a large amount of BL data consists of single entities, not sets.  I see no way that one can claim a relational advantage for a single entity.  Of the BL data which does consist of sets of multiple entities, it seems likely to me that these can be divided into a couple of different types based on usage.

One type is reference data, e.g., a list of the valid country codes.  It seems reasonable to me to cache these in a TT in a validation object.  That seems to me to be a case where one can use a relational implementation which has no relational appearance from the outside because one is never dealing with any of the contents as an object.  This is really just a sort of enum which comes from a persistent store.
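
A rough sketch of such a validation object, for illustration (the class, table and field names are assumptions, as is the Country database table; this is not code from the thread):

CLASS CountryCodeValidator:

    DEFINE PRIVATE TEMP-TABLE ttCountry NO-UNDO
        FIELD CountryCode AS CHARACTER
        INDEX idxCode IS PRIMARY UNIQUE CountryCode.

    CONSTRUCTOR PUBLIC CountryCodeValidator ():
        /* Cache the reference data once, at construction time. */
        FOR EACH Country NO-LOCK:
            CREATE ttCountry.
            ttCountry.CountryCode = Country.CountryCode.
        END.
    END CONSTRUCTOR.

    METHOD PUBLIC LOGICAL IsValid (pcCode AS CHARACTER):
        /* No database access here - just the cached temp-table. */
        RETURN CAN-FIND(FIRST ttCountry WHERE ttCountry.CountryCode = pcCode).
    END METHOD.

END CLASS.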

Another type is a set on which one is going to perform some work, i.e., each entity is going to be treated as an object in the processing.  If we put that data into a TT, then we have some choices.  One choice is that we leave it inside that object and provide the object with behavior which relates to the instance, i.e., do something like point to the "current" line.  That's not an approach I like very much because it doesn't present an OO-like interface to the rest of the system.  I think it also leads to mushing together of instance and set logic.  Another choice is to have the data in the TT, but to create BE instances when one is working on the line.  M-S-E does this.  But, one has a dilemma here.  If one actually moves the data into the BE, then the data in the BE and the data in the TT can be out of sync.  If one leaves the data in the TT, then one has to give the BE a way to get at the data.  Solutions have been buffers (which have the risk of navigating off the intended record), accessor methods (which violate normalization), and the new delegate object approach used in the latest version of M-S-E (which seems the best of the lot).  But in all cases we have the data and logic in separate places and the solutions get more complex when we need multiple tables to support subtypes.

Obviously, there are workable solutions here because people are creating production systems, but is it really an advantage?   It might be a plus to a traditional ABLer because it is more familiar, but that doesn't make it better.  We have claims of performance advantages, but so far I'm not getting much illumination about what that performance advantage is actually about.

Posted by Phillip Magnay on 04-Mar-2010 11:50

tamhas wrote:

You make it sound as if you think that relational is the natural form of data and that one is twisting it to represent it in an OO way.  Consider a collection of entities which are of two or more subtypes.  To properly represent that relationally, i.e., in a normalized fashion, requires one subtable per subtype that has its own properties and a type indicator to tell which one to use (one might avoid the type indicator, but it is almost always used).  Compare that to a collection in which each entity simply *is* its subtype.  The relational model is more natural?

We ABLers are just used to looking at things in relational terms because RDBMS have established themselves as the de facto standard for *storing* data, not because it is the best way to *process* data.

And ABLers are simply not going to leave behind the relational side of the language without very clear and solid proof that this purist OO approach is indeed a better way to process data. If you found a pure OO approach that is a better way to process data, then show it.  Until then, you're not going to persuade anybody.

Posted by Thomas Mercer-Hursh on 04-Mar-2010 11:50

Guillaume, can you clarify what the problem is here?  I.e., it tends to go without saying that using RDB structures in the BL is likely to be more like table module, especially if one takes the DB schema as given.  But, what actual problems do you have with doing something else?

Posted by Thomas Mercer-Hursh on 04-Mar-2010 11:57

Possibly, we should pull a collection object discussion out into another thread, such as my "in search of" thread here http://communities.progress.com/pcom/thread/20693?tstart=0

But, briefly, I don't know that we need PSC to give us collections per se.  Collection classes in Java are just Java code, not some special built-in.  I think there are only two problems with us writing and using our own collection classes.  One is the lack of ABL generics which means that we have to create a lot of type-specific code.  The other is that TTs are kind of heavyweight entities for anything other than cases where one needs a map class.  For simpler collections, I think a work-table might just do the trick, but we need support for PLO fields.

Posted by Admin on 04-Mar-2010 12:07

What OO design pattern limits one's ability to work with data in any way that is appropriate?


I don't remember that I've ever said that any OO design pattern prevents anything!

You make it sound as if you think that relational is the natural form of data and that one is twisting it to represent it in an OO way.

I've always said that in the ABL it's far superior to do that!

Prove me wrong (by showing me code), if you want:

How efficient is it to write a GetOrderTotal() method in the Business Logic with no access to temp-tables and without getting back to the DB - because all the data is in memory already, or because it is not yet ready for persistence?

Can you beat the FOR EACH?

Even with your sub-classing scenario, the joined FOR EACH would still be faster. And in the Business Logic Layer the ProDataset could still be denormalized if that makes things easier.
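
For concreteness, the kind of method being defended here might look like this; a sketch only, as a fragment of a business-logic class, with the temp-table and all names assumed:

DEFINE TEMP-TABLE ttOrderLine NO-UNDO
    FIELD OrderNum AS INTEGER
    FIELD Amount   AS DECIMAL
    INDEX idxOrder OrderNum.

METHOD PUBLIC DECIMAL GetOrderTotal (piOrderNum AS INTEGER):
    DEFINE VARIABLE dTotal AS DECIMAL NO-UNDO.
    /* All the data is already in memory - no trip back to the database. */
    FOR EACH ttOrderLine WHERE ttOrderLine.OrderNum = piOrderNum:
        dTotal = dTotal + ttOrderLine.Amount.
    END.
    RETURN dTotal.
END METHOD.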

How about checking whether a record / object matching certain criteria already exists (when validating the creation of a different record / object)?

Can you beat the CAN-FIND?

And at some point all data in the Business Logic Layer needs to be presented to a user. Very often in a Browser or Grid Control. For most ABL applications that's still the normal use case.

Browsers and Grids need something you can define a Query against.

How do you define a Query widget against a collection (?) of objects?

not because it is the best way to process data.

So far nobody has even tried to convince me (by showing me code) that in the ABL the object-oriented approaches are more efficient in development effort and performance for storing, processing and transporting data.

Posted by Admin on 04-Mar-2010 12:11

pmagnay wrote:

tamhas wrote:

You make it sound as if you think that relational is the natural form of data and that one is twisting it to represent it in an OO way.  Consider a collection of entities which are of two or more subtypes.  To properly represent that relationally, i.e., in a normalized fashion, requires one subtable per subtype that has its own properties and a type indicator to tell which one to use (one might avoid the type indicator, but it is almost always used).  Compare that to a collection in which each entity simply *is* its subtype.  The relational model is more natural?

We ABLers are just used to looking at things in relational terms because RDBMS have established themselves as the de facto standard for *storing* data, not because it is the best way to *process* data.

And ABLers are simply not going to leave behind the relational side of the language without very clear and solid proof that this purist OO approach is indeed a better way to process data. If you found a pure OO approach that is a better way to process data, then show it.  Until then, you're not going to persuade anybody.

I couldn't have said that better!

Posted by guilmori on 04-Mar-2010 12:14

Sorry Thomas, I am not sure I get the question.

You ask what problems I have with M-S-E? With Table Module? With implementing Domain Model in ABL? With using something other than Domain Model?

Posted by Thomas Mercer-Hursh on 04-Mar-2010 12:15

I'm sure that there are OO skeptics ... although they probably aren't following this forum.  I'm not sure that I have encountered any OO purists who suggest that anyone should ignore any strength of the language.  The question is, where is any given approach the best approach?  There is a necessary relational component in the OR mapping layer, so no one is likely to object to using relational aspects of the language there.  Where there is a difference of opinion is on the use of relational elements in the BL.  Even there, you will note that I have advocated a number of uses where I find things like TTs to be perfectly appropriate and useful, so it isn't as if I in particular am trying to deny the existence of that part of the language.  My proposed SuperMap class is an example of going beyond the norm of traditional OO to do something which takes advantage of the power of the ABL.

So, I think it is inappropriate to cast the differences of opinion which are being expressed here in terms of simple skepticism or purity ... both have an unfortunate pejorative tone which we don't need and which are off the mark.  Indeed, it is clear that you are as purist as I in terms of how you want the BE and ES to interface to client objects in a very OO way.  We just differ in the implementation which lies behind that.

Posted by Thomas Mercer-Hursh on 04-Mar-2010 12:25

the ABL is one of a few, if not the only, language that allows you to mix & match the worlds of procedural & OO

Actually, Mike, I would disagree with this a bit.  It is quite possible to write procedural Java.  Just being structured into classes doesn't guarantee a really OO mindset.  This same risk exists in OOABL, i.e., one can write in terms of classes, gain some benefit of compile-time checking, but still be writing quite procedural, RDB-mentality code.

Understand that I think it was a good, albeit necessary, thing for PSC to make it possible to do both.  For transitioning legacy systems or even just adding a few new bits to legacy systems, it is essential.  That doesn't mean that it is necessarily a virtue to flop back and forth on a whim.

Posted by Mike Ormerod on 04-Mar-2010 12:42

OOABL, what's that??

Posted by guilmori on 04-Mar-2010 12:52

I agree with you Mike. This indeed shows that OO is lacking in many ABL areas. Oh wait, were you trying to show the power of relational ?

So we seem to have 2 opinions here:

1) OO is lacking in many areas, it should be improved --> Me

2) OO is lacking in many areas, relational is a better solution in those areas --> every other ABLer?

Posted by Thomas Mercer-Hursh on 04-Mar-2010 12:53

Can you beat the FOR EACH?

Maybe, maybe not.  What if the line publishes a new amount every time it changes and the order subscribes to that event so that it is continuously updated with the current total?  Then I never do a for each.  I might use more cycles than the for each, but I have gained continuous correctness.
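
In OOABL that idea maps naturally onto class events; a minimal sketch, with all names assumed and the two classes shown together although each would live in its own .cls file:

CLASS OrderLine:

    DEFINE PUBLIC EVENT AmountChanged SIGNATURE VOID
        (INPUT pdOld AS DECIMAL, INPUT pdNew AS DECIMAL).

    DEFINE PRIVATE VARIABLE dAmount AS DECIMAL NO-UNDO.

    METHOD PUBLIC VOID SetAmount (pdNew AS DECIMAL):
        DEFINE VARIABLE dOld AS DECIMAL NO-UNDO.
        dOld    = dAmount.
        dAmount = pdNew.
        AmountChanged:Publish(dOld, pdNew).  /* tell whoever is interested */
    END METHOD.

END CLASS.

CLASS Order:

    DEFINE PRIVATE VARIABLE dTotal AS DECIMAL NO-UNDO.

    METHOD PUBLIC VOID AddLine (poLine AS OrderLine):
        poLine:AmountChanged:Subscribe(LineAmountChanged).
    END METHOD.

    METHOD PRIVATE VOID LineAmountChanged (INPUT pdOld AS DECIMAL, INPUT pdNew AS DECIMAL):
        /* back the old value out, add the new one in - the total stays current */
        dTotal = dTotal - pdOld + pdNew.
    END METHOD.

END CLASS.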

But, the real point is not to compare a single operation and put a stopwatch to it.  If it becomes a performance problem, one may have to apply the clock and consider additional options, but one really has to look at these things in context.  I don't think a lot of people realize that there is a great deal of good OO design practice which is inherently going to reduce performance.  E.g., loose coupling of systems means that you have to encode and decode messages instead of just reaching across the boundary and doing something.  But, unless the performance hit is unacceptable a loosely coupled interface like this has on-going benefits in maintenance and ease of evolution of the systems.

Can you beat the CAN-FIND?

Meaning that you advocate putting can-find, a direct database access, in the middle of the BL?  Not very OERA of you.

But, yes, I can beat it in many cases.  E.g., for validation cases like validating a country code, one caches the legal values in a validation object and references the validation object instead of the database.  Bound to be faster and opens up the possibility that the codes actually come from somewhere else on the network in a foreign database.

I am paying very little attention to UI.  Whatever happens there will depend on the technology of the UI and an ABL client hasn't been my first choice in some time.  But, hey, if you need one there, you need one there.  I don't mind as long as it is actually the best way to do the UI.


So far nobody has even tried to convince me (by showing me code) that in the ABL the object-oriented approaches are more efficient in development effort and performance for storing, processing and transporting data.

Proof will have to come with experience.  But, a pretty dominant slice of the development world figured out that OO was a better way to do things quite a long time ago.  Not because it was more performant, not because it was faster to write, but because the resulting systems were stable, maintainable, and nimbly evolvable because of the natural relationship between the code and the problem space and the clear isolation of responsibilities.  As noted previously, that sometimes comes at the price of performance.

Thinking in OO terms isn't much different than thinking in SOA terms.  Indeed, one can make the argument that there is nothing new about SOA, it is just good OO spread across a network, something people have been doing long before anyone ever developed the SOA term.  Is SOA more performant than a monolithic architecture?  Well, it might be in some cases because of the ability to distribute loads and possibly provide load balancing across multiple providers, but in general for any smaller system one can expect to get better performance from monolithic code.  Does that mean that is how we should code?

Posted by Tim Kuehn on 04-Mar-2010 12:53

Tim Kuehn is absolutely convinced that deferred instantiation of TT should be a high priority because he has experienced a big problem with it.  Other people think that the architecture which caused it is the primary problem and, while it might be a nice idea, it isn't that important.

Let's be clear about this - I think LI would benefit any application (OO or otherwise) that defines TTs that aren't used, which is common in include-file type structures. If there are other things PSC can do to accomplish the same goal, all the better!

At the absolute very least, this behavior must be clearly documented so other people working under the same assumption I did don't run into this problem. I'm not the only one who's run into this problem, and pointing the finger at the developer(s) for "not knowing" this behavior existed is not the right answer.

Now, back to your regularly scheduled religious war.

Posted by Phillip Magnay on 04-Mar-2010 12:57

You must have missed my point where I am advocating retaining both the power of relational elements and leveraging the power of OO. If you conclude that this stance is somehow purist, then the term has lost all meaning.

If you represent every other approach but yours as "not OO", or "only OO-like" or "not traditional OO", then people can be forgiven for calling you an OO purist.

But all this is beside the point. You claim that your OO approach is better and all the others are somehow lacking. Then stop talking about it and prove it.

Posted by Thomas Mercer-Hursh on 04-Mar-2010 13:01

Guillaume, I'm afraid that we are having difficulties due to the forum software since it is making the threads so unclear and all mushed together now that they are so deeply nested.

What I was trying to ask about was this:

Phil said "Which specific design patterns have you determined to have this issue?"

I think you answered him "Martin Fowler's Domain Model(116), in conjunction with Data Mapper(165).  The M-S-E model seems closer to the Table Module(125) approach."  Perhaps you were responding to a different issue?

At any rate, what I was trying to determine was what problems you were experiencing in your own work.

Posted by guilmori on 04-Mar-2010 13:10

tamhas wrote:

Guillaume, I'm afraid that we are having difficulties due to the forum software since it is making the threads so unclear and all mushed together now that they are so deeply nested.

What I was trying to ask about was this:

Phil said "Which specific design patterns have you determined to have this issue?"

I think you answered him "Martin Fowler's Domain Model(116), in conjunction with Data Mapper(165).  The M-S-E model seems closer to the Table Module(125) approach."  Perhaps you were responding to a different issue?

At any rate, what I was trying to determine was what problems you were experiencing in your own work.

PERFORMANCE (with super extra capitalization).

Posted by Admin on 04-Mar-2010 13:27

What if the line publishes a new amount every time it changes and the order subscribes to that event so that it is continuously updated with the current total?

And you truly believe that this is as maintainable as the FOR EACH?

Meaning that you advocate putting can-find, a direct database access, in the middle of the BL?  Not very OERA of you.

You're good at trying to make others look like an idiot or beginner, right? But that won't work here. Never used CAN-FIND against a TEMP-TABLE?

Let's use the classic Order and OrderLine dataset. Temp-tables, as you probably know, not database tables - so for many of us compatible with the BL layer. Let's say an operation that inserts a new OrderLine element (record or object) requires knowing whether an element with certain criteria (let's say the same Item code) already exists. Because in that case you don't create a new OrderLine element, you increase the quantity of the element that's already there.

You could make Item Code the key of that collection. But what if you need to search by other fields as well? Let's say OrderLine Number? Or the VAT code of the OrderLine? Will you create as many keys in that collection as fields in my temp-table?

And maybe going through 10 instances of OrderLine objects sequentially is fast enough at runtime. But is that code as simple as the CAN-FIND?

Quite a common business rule, I'd say. And has nothing to do with simple master or system data validation.
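
A sketch of that rule the temp-table way (field, method and table names are assumptions, and a ttOrderLine temp-table of the shape described above is taken as given):

METHOD PUBLIC VOID AddOrderLine (piItemNum AS INTEGER, pdQty AS DECIMAL):
    /* the existence check is the CAN-FIND part - one indexed lookup */
    IF CAN-FIND(FIRST ttOrderLine WHERE ttOrderLine.ItemNum = piItemNum) THEN DO:
        FIND FIRST ttOrderLine WHERE ttOrderLine.ItemNum = piItemNum.
        ttOrderLine.Qty = ttOrderLine.Qty + pdQty.  /* same item: bump the quantity */
    END.
    ELSE DO:
        CREATE ttOrderLine.
        ASSIGN ttOrderLine.ItemNum = piItemNum
               ttOrderLine.Qty     = pdQty.
    END.
END METHOD.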


I am paying very little attention to UI. 

Others do (not only myself - otherwise I wouldn't have been in good business for the last couple of years).

Proof will have to come with experience.  But, a pretty dominant slice of the development world figured out that OO was a better way to do things quite a long time ago.  Not because it was more performant, not because it was faster to write, but because the resulting systems were stable, maintainable, and nimbly evolvable because of the natural relationship between the code and the problem space and the clear isolation of responsibilities.  As noted previously, that sometimes comes at the price of performance.


Are you questioning the future-proofness of existing ABL systems because they are not OO?

Posted by Thomas Mercer-Hursh on 04-Mar-2010 14:11

If you conclude that this stance is somehow purist, then the term has  lost all meaning.

You seem to have missed that the remark was intended as a compliment.   I was commenting on the care and concern which you have exhibited toward providing good OO standards in M-S-E, especially in the way in which other clients interact with the BEs and ESs, but also recently in the move from accessor methods to the delegate object.

If you represent every other approach but yours as "not OO", or "only  OO-like" or "not traditional OO", then people can be forgiven for  calling you an OO purist.

In no way am I attempting to condemn or dismiss any of the possible patterns based on my perception of their adherence to OO principles.  I am interested in informing people of issues that they might want to consider.  E.g., in the earlier M-S-E pattern I was bothered by the violation of normalization which resulted in the use of accessor methods on the Model to get data into the BE.  You felt that it wasn't a serious violation because of the need for an identifier, but in the 10.2B revision, you and your team came up with the delegate approach which I think we both agree provides superior characteristics.  All I am trying to do is to inform people about these issues so that they can make informed choices.

In parallel, I am working on developing a PABLO approach.  There are lots of issues and options with that which I am still exploring.  I will be doing performance testing.  Maybe I'll find a problem I won't be able to solve, but I think it is worth the effort because I think it can be a better approach.  Some people might agree, others undoubtedly won't.  The more informed those choices are, the better off we all will be for it.

This is not a purity contest.

Posted by guilmori on 04-Mar-2010 14:17

mikefe wrote:

Let's use the classic Order and OrderLine dataset. Temp-tables, as you probably know, not database tables - so for many of us compatible with the BL layer. Let's say an operation that inserts a new OrderLine element (record or object) requires knowing whether an element with certain criteria (let's say the same Item code) already exists. Because in that case you don't create a new OrderLine element, you increase the quantity of the element that's already there.

You could make Item Code the key of that collection. But what if you need to search by other fields as well? Let's say OrderLine Number? Or the VAT code of the OrderLine? Will you create as many keys in that collection as fields in my temp-table?

And maybe going through 10 instances of OrderLine objects sequentially is fast enough at runtime. But is that code as simple as the CAN-FIND?

Quite a common business rule, I'd say. And has nothing to do with simple master or system data validation.

But Mike, what if we had something similar to .NET "LINQ to Objects" in the ABL? That allows you to query over a collection in any imaginable way, efficiently.

The problem I see with temp-tables is that your OrderLine rows must be homogeneous. So if you have different OrderLine types, every one of them must share the same data definition, which means some fields may not be applicable/assigned in some cases.

Classes allow us to define subtypes, and still query over a base interface, and all this with the data definitions clearly separated into where they belong.

Moreover, how do you associate methods or private data with a temp-table row?

How do you make a method behave differently depending on the type of the temp-table row (e.g. the criteria you mentioned are different depending on the OrderLine type)?

Posted by Thomas Mercer-Hursh on 04-Mar-2010 14:25

OK, but performance of what exactly?  Does it take longer to instantiate a class than an equivalent .p?  What specifically is the problem?

Posted by Shelley Chase on 04-Mar-2010 14:34

We have certainly talked about ABL collections as well as generics but at this point it does not look like they will make the cut. Yes, the current solution is to use temp-tables for collections.

-Shelley

Posted by Tim Kuehn on 04-Mar-2010 14:35

Yes, the current solution is to use temp-tables for collections.

does that mean PSC is going to do something about the TMTT problem?

Posted by Thomas Mercer-Hursh on 04-Mar-2010 14:45

And you truly believe that this is as maintainable as the FOR EACH?

Why not?  It is just one idea, but it is an idea which separates the total in the order from the amount in the line.  Thus, I can change the rules of how the amount in the line is computed and I don't have to touch the order.  Using a for each, the implementations are joined.  Moreover, the FOR EACH has to be run over and over and over again to be current.  In my own non-OO software, at the start of every line edit I back out the line from the total and then put it back in again when the line is complete.  Today, I would do that with a single event which provided the before and after values.  Note too that the same event could be used to trigger other behaviors.

You're good at trying to make others look like an idiot or beginner,  right?

If so, it is unintentional.

But that won't work here. Never used CAN-FIND against a TEMP-TABLE?

Isn't that what I described for my validation object?

Will you create as many keys in that collection as fields in my  temp-table?

This is the use case for my SuperMap proposal.  The other approach is that one creates one Map class for each property on which you need to access by key.  I find that unaesthetic, which is why I have proposed SuperMap, which is, btw, an example of my deviating from typical 3GL practice to take advantage of an ABL capability.
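
For reference, the per-property Map idea is small enough to sketch (names are assumptions; note that callers still have to CAST whatever Get() returns, which is exactly where generics would help):

CLASS Map:

    DEFINE PRIVATE TEMP-TABLE ttEntry NO-UNDO
        FIELD MapKey   AS CHARACTER
        FIELD MapValue AS Progress.Lang.Object
        INDEX idxKey IS PRIMARY UNIQUE MapKey.

    METHOD PUBLIC VOID Put (pcKey AS CHARACTER, poValue AS Progress.Lang.Object):
        FIND FIRST ttEntry WHERE ttEntry.MapKey = pcKey NO-ERROR.
        IF NOT AVAILABLE ttEntry THEN DO:
            CREATE ttEntry.
            ttEntry.MapKey = pcKey.
        END.
        ttEntry.MapValue = poValue.
    END METHOD.

    METHOD PUBLIC Progress.Lang.Object Get (pcKey AS CHARACTER):
        FIND FIRST ttEntry WHERE ttEntry.MapKey = pcKey NO-ERROR.
        IF AVAILABLE ttEntry THEN
            RETURN ttEntry.MapValue.
        RETURN ?.  /* not found */
    END METHOD.

END CLASS.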

BTW, I'm not sure that I would encounter your specific use case.  It is not uncommon to create orders with the same item on more than one line, e.g., when one is doing phased shipments.  But, consider the case of grouping lines by product type.  With a TT, you would have an index on product type and read the table matching on a value of that index to get the lines of that product type.  With a collection by product type, the collection would contain only the relevant lines and you wouldn't need an index.  With a SuperMap, access would be essentially identical to a TT.

Are you questioning the future-proofness of existing ABL systems because they are not OO?

I think OO provides a better development paradigm in providing stable, maintainable, nimble systems.  That doesn't mean that someone can't continue to provide entirely procedural applications until the end of time ... look at all the COBOL apps still running in the world.  It does mean that I think there is a better way going forward.

Posted by Phillip Magnay on 04-Mar-2010 15:00

tamhas wrote:

You seem to have missed that the remark was intended as a compliment.   I was commenting on the care and concern which you have exhibited toward providing good OO standards in M-S-E, especially in the way in which other clients interact with the BEs and ESs, but also recently in the move from accessor methods to the delegate object.

It was rather difficult to read it as a compliment when there is an objection to the use of the term as pejorative in the previous sentence.

But again, really beside the point. The original context concerned the rejection of OO in favor of relational coding versus the rejection of relational coding in favor of OO.  I'm clearly not a purist wrt this question.

tamhas wrote:

In no way am I attempting to condemn or dismiss any of the possible patterns based on my perception of their adherence to OO principles. 

People could be forgiven for concluding otherwise.

tamhas wrote:

All I am trying to do is to inform people about these issues so that they can make informed choices.

Actually, you're trying to persuade people to your view, not inform.  Nothing wrong with that.  But let's at least be honest about it.

Posted by Thomas Mercer-Hursh on 04-Mar-2010 15:03

I wondered how long it would take before you brought up LINQ!

The problem I see with temp-tables is that your OrderLine rows must be homogeneous.

One can, of course, have a PDS with additional tables for the subtype data.

The bigger problem is sorting out the subtype behavior.  I think there are two basic approaches to this with some subflavors.  One approach is that you end up with a single object which is a mixture of order and order line and all the logic for all subtypes mushed in together, i.e., something that is not really any different than one would expect in a legacy .p.  Familiar yes, but certainly not taking advantage of OO generalization and delegation to sort things out into simpler, encapsulated units.  The other approach is to actually create a BE for any row that you want to work on.  If you shove the data into the BE, then you have the potential of the BE and the TT being out of sync, which seems ugly.  So, the other idea is to leave the data in the TT and give the BE a way to reach that data.  Ideas which have been tried include giving the BE a buffer pointing to the row (doesn't seem like good encapsulation and has the danger that the buffer can be navigated off the row), providing the TT container with accessor methods (violates normalization since the same data is accessible from the BE accessors), and the delegate approach now used by M-S-E.  The delegate is a generic object which takes in a table handle and a rowid and builds a dynamic buffer to the indicated row.  It has accessor methods for each data type by which the BE can access the data, e.g., something like GetString("Name") to return the Name field from the buffer.  This protects the buffer from the BE programmer since the delegate is completely generic, so there is no fear of navigating off the intended record as there is no logic in the delegate to do that.  It does mean possible run-time errors for requesting non-existent fields or fields of the wrong datatype.  I guess the performance must be OK or they wouldn't be doing that.  It is the cleanest version I have seen of solving this problem, but still has its disturbing aspects, starting with the data not being in the BE in the first place.
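
From that description, such a delegate might look roughly like this; a sketch based on the description above, not the actual M-S-E code, with all names assumed:

CLASS RowDelegate:

    DEFINE PRIVATE VARIABLE hBuffer AS HANDLE NO-UNDO.

    CONSTRUCTOR PUBLIC RowDelegate (phTable AS HANDLE, prRow AS ROWID):
        /* a dynamic buffer positioned on the one row this BE represents */
        CREATE BUFFER hBuffer FOR TABLE phTable.
        hBuffer:FIND-BY-ROWID(prRow).
    END CONSTRUCTOR.

    METHOD PUBLIC CHARACTER GetString (pcField AS CHARACTER):
        /* run-time error if the field does not exist or is not CHARACTER */
        RETURN hBuffer:BUFFER-FIELD(pcField):BUFFER-VALUE.
    END METHOD.

    METHOD PUBLIC VOID SetString (pcField AS CHARACTER, pcValue AS CHARACTER):
        hBuffer:BUFFER-FIELD(pcField):BUFFER-VALUE = pcValue.
    END METHOD.

    DESTRUCTOR PUBLIC RowDelegate ():
        IF VALID-HANDLE(hBuffer) THEN
            DELETE OBJECT hBuffer.
    END DESTRUCTOR.

END CLASS.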

If the BE is generated by the container for the TT (Model in M-S-E), then the Model can test any type indicators and generate a BE of an appropriate subtype and provide it with delegates for the record in the main TT and any subtables which might be appropriate.  Common practice in M-S-E is often to achieve the subtypes by creating a BE of the base class and then wrapping it with one or more classes which provide the subtype data and behavior, somewhat in the fashion of the GoF Decorator pattern.

Did I get all that right, Phil?

Posted by Thomas Mercer-Hursh on 04-Mar-2010 15:09

I'm putting in my vote to up the priority of both.  I suppose generics aren't simple, but you are at least half way there in 10.2B and I think they would be very useful for writing frameworks ... although I have some fear of the abuse they would be put to.

Collections I think you should think about very seriously since we know that it is a Bad Thing to proliferate too many temp-tables and there are a lot of collections which don't need indexing.  It seems like you could implement PLO support as a field type in work-tables pretty easily and that would give us a way to have collections with no order or simple order only, i.e., everything under the Collection hierarchy of Java and leave TTs for Maps.  That just seems like an easy thing to do and it could be very valuable.

Posted by Thomas Mercer-Hursh on 04-Mar-2010 15:12

Lazy instantiation would only help people who define collections and don't use them.  Seems unlikely.  I think the TMTT problem in OO is more a question of actually needing a lot of them in use at the same time and most of those uses are for something that doesn't need most of what a TT is good at.  I.e., most collections at most need to be read in the order in which they were created, no index needed, and they are small so they don't need slopping to disk, and they have short lifespans so they should be cheap to create and destroy.

Posted by Tim Kuehn on 04-Mar-2010 15:37

tamhas wrote:

Lazy instantiation would only help people who define collections and don't use them.  Seems unlikely.  I think the TMTT problem in OO is more a question of actually needing a lot of them in use at the same time and most of those uses are for something that doesn't need most of what a TT is good at.  I.e., most collections at most need to be read in the order in which they were created, no index needed, and they are small so they don't need slopping to disk, and they have short lifespans so they should be cheap to create and destroy.

You've got your head stuck in OO land - there's a lot of procedural code out there, and any program that includes a set of TT definitions and doesn't use all of those TT's still winds up instantiating TT's that won't be used.  LI would eliminate that wasted overhead. (Or - optimize the needless TT instantiation out at the compiler level).

In terms of OO coding the TMTT problem has already happened due to design implementations by people who have no idea about this problem until they're way down the road to implementing their design. And this problem will continue to happen - protestations about encapsulation, alternate architectures, etc. notwithstanding.

If PSC implements some way to do collections without risking running into the TMTT problem - that'll be outstanding. Until then, it's a danger that must be communicated to the developer community so they can plan, design, and implement accordingly.

Posted by Thomas Mercer-Hursh on 04-Mar-2010 16:24

I'm clearly not a purist wrt this question.

Not stated in those terms, no, but I don't think there is much question that the quality of the M-S-E implementation derives in large part from paying a lot of attention to OO principles and standards and, in that sense, you are more rigorous than some of the others who are advocating PDS-based patterns.

Actually, you're trying to persuade people to your view, not inform.  Nothing wrong with that.  But let's at least be honest about it.

I am, actually. If I were solely interested in persuading people to my point of view, I wouldn't be encouraging you to publish your point of view when it differs from mine.  To be sure, there are some realms where I think my view is well thought out and I will argue that position over others, but also rather regularly someone will point out something new that I hadn't considered previously and my view will get altered to accommodate that new point of view.

Many years ago when I was teaching introductory Biological Anthropology for the first time, I tried very hard to present the spectrum of views in the profession on those issues where I knew there was a divergence of opinion.  I did that exactly because I knew that my own position on some of these topics was rather out at the fringe and I didn't think it was appropriate to introduce those new to the field to views I acknowledge were extreme and have them think they were mainstream.  Basically, I failed, and all too many students came out of the class thinking that my views were mainstream, which surprised some of their later professors.  The next time I taught that class I taught it as a history of science, connecting each viewpoint to its place in time, country, religion, economic circumstance, etc.  Sure enough, there were some students who didn't make it much past 1935 in their own opinions ... which is about where the mainstream of thought on fossil man was at the time, but the better students were with me all the way to the end and told me later that they felt prepared for new discoveries in the field.  One became President of the American Association of Physical Anthropologists, so I guess he was prepared, but I can't take all the credit for that.  At the end of the class, the best students surrounded me and demanded that I tell them where I stood personally ... I'd actually gotten through the whole course without them figuring that out!

Posted by Thomas Mercer-Hursh on 04-Mar-2010 16:32

You've got your head stuck in OO land

This *is* a thread on the OO forum ...

Clearly, there are some possible uses for or usage patterns with temp-tables which are impractical.  Lazy instantiation might help some of those, but there are a lot it won't help because the TTs are actually used.  There might be other things, like instance parameters which could improve things, but I think one also has to recognize that a TT is not that lightweight an entity with its support for multiple indexing, slop to disk, serialization, etc.  So, I think we need to look at why people are wanting to use TTs and seeing if there isn't something else that will do the job for at least some cases.  Something like a PLO field in a work-table would handle some -- no index, no slop to disk, no serialization, no before image.  Providing serialization methods for objects might handle another use case.

Posted by Tim Kuehn on 04-Mar-2010 16:40

tamhas wrote:

You've got your head stuck in OO land

This *is* a thread on the OO forum ...

True, however one has to remember there's a LOT of pre-OO code out there that something like LI or compiler optimization could help with.

Posted by Admin on 04-Mar-2010 16:57

But Mike, what if we had something similar to .NET "LINQ to Objects" in the ABL? That allows you to query over a collection in any imaginable way, efficiently.


Subjunctive! That's wishful thinking. My clients request systems that work well today. Have you seen anything like LINQ on Shelley's list of OO features for 11.0? I didn't! The original post here suggested that the original poster was seeking a solution now - not for OE12.

I'll discuss the future of the language with PSC in 4 weeks. This thread is about today.

Future releases may potentially change that.

Posted by Admin on 04-Mar-2010 17:02

Yes, the current solution is to use temp-tables for collections.


does that mean PSC is going to do something about the TMTT problem?

In previous discussions on that subject I have proven that with 10.1C already you can do something about the TMTT issue by using a static temp-table in a collection base class. It will hold an owner reference as well as an instance reference.

It breaks encapsulation at one place in the system for the purpose of solving the TMTT issue.
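
A sketch of that shared-table idea using a static member (all names are assumptions, it presumes a release with static class members, and it does not claim to be the poster's actual implementation):

CLASS CollectionBase:

    /* One temp-table shared by every collection instance; each row is
       tagged with its owning collection so instances stay logically separate. */
    DEFINE PRIVATE STATIC TEMP-TABLE ttItem NO-UNDO
        FIELD OwnerRef AS Progress.Lang.Object
        FIELD ItemRef  AS Progress.Lang.Object.

    METHOD PUBLIC VOID Add (poItem AS Progress.Lang.Object):
        CREATE ttItem.
        ASSIGN ttItem.OwnerRef = THIS-OBJECT
               ttItem.ItemRef  = poItem.
    END METHOD.

    DESTRUCTOR PUBLIC CollectionBase ():
        /* drop only this collection's rows when the collection goes away */
        FOR EACH ttItem WHERE ttItem.OwnerRef = THIS-OBJECT:
            DELETE ttItem.
        END.
    END DESTRUCTOR.

END CLASS.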

Posted by guilmori on 05-Mar-2010 10:46

The performance issues we have are with:
- class instantiation
- method calls and property getters
- the too-many-TT problem

I do not think these are any slower if compared to procedural, i.e. NEW vs. RUN, or method calls vs. internal procedure calls.

It is more about the way it is being used, which I think requires more efficiency than in the past. Things like creating a class instance instead of a TT row, accessing data through a property instead of a TT field, isolating responsibility (delegation, encapsulation) with much smaller classes and smaller methods, and inheritance, which means more than one class to run for a single instance. All this adds up quickly into a very noticeable performance overhead.

We did some comparisons to C#. Some say this is ridiculous, that ABL cannot be compared to anything, but isn't a 400X overhead in ABL ridiculous?
Another comparison, no .Net involved:
  1- A query is encapsulated inside a class. Iterate through the query using a class method. Access buffer fields using a class property.
  2- Iterate through the query with direct access to the query handle. Access buffer fields with direct access to the buffer handle.
#1 was 10X slower.

I can send you my test code offline if you wish.
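
For reference, the direct-handle variant (#2) looks roughly like the sketch below (table and field names are assumptions, not the actual test code); variant #1 wraps each of these handle operations in a method or property call, which is where the overhead accumulates:

DEFINE TEMP-TABLE ttOrderLine NO-UNDO
    FIELD LineNum AS INTEGER
    FIELD Amount  AS DECIMAL.

DEFINE VARIABLE hQuery  AS HANDLE  NO-UNDO.
DEFINE VARIABLE hBuffer AS HANDLE  NO-UNDO.
DEFINE VARIABLE dTotal  AS DECIMAL NO-UNDO.

hBuffer = TEMP-TABLE ttOrderLine:DEFAULT-BUFFER-HANDLE.

CREATE QUERY hQuery.
hQuery:SET-BUFFERS(hBuffer).
hQuery:QUERY-PREPARE("FOR EACH ttOrderLine").
hQuery:QUERY-OPEN().

REPEAT:
    hQuery:GET-NEXT().
    IF hQuery:QUERY-OFF-END THEN LEAVE.
    /* direct buffer-field access, no method or property call in between */
    dTotal = dTotal + hBuffer:BUFFER-FIELD("Amount"):BUFFER-VALUE.
END.

hQuery:QUERY-CLOSE().
DELETE OBJECT hQuery.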

Posted by Thomas Mercer-Hursh on 05-Mar-2010 11:24

Sure, I would be interested in test code.

Some of this sounds like it could be a question of decomposition of the problem space into units that are too small.  Yes, I know that there is a general sense of wanting small objects, but if you make them too small, one ends up with an excessive number of object-to-object interactions.  I think the goal needs to be balance so that as much as possible happens within the object and as little as possible between objects.

As for the comparison to .NET, I don't think that you can expect interclass invocations to be any faster than .p invocations.  If a .p run or IP run is too slow for you, then you are simply in the wrong language.  There are applications for which that would be true, but I think for most business applications it is not true.

What usage pattern is giving you the TMTT problem?

Posted by Admin on 05-Mar-2010 11:32

I can send you my test code offline if you wish.

Yes, please. mike@fechner.de

Posted by Evan Bleicher on 05-Mar-2010 13:38

guilmori wrote:

I can send you my test code offline if you wish.


Hi Guillaume:

     Can you send me your test code?  bleicher@progress.com

Thanks

Evan

Posted by cwills on 06-Mar-2010 04:22

tamhas wrote:

Actually, Mike, I would disagree with this a bit.  It is quite possible to write procedural Java.  

Thomas, would it be fair to say that you like arguing for the sake of arguing? (I expect an argumentative response to this question.)

Not to say that you don't sometimes have valid points, but the frequency, style and length of your posts cheapen them (it's kind of like the boy who cried wolf).

Posted by ojfoggin on 06-Mar-2010 04:27

I definitely think he has a point though about being able to write procedurally in any language.

On another forum of which I am a member there was a request for some help with some Java.  There were a few discussions about the benefit of OOP.

The guy then posted his code for the darts game (a command line game) he was writing.  It included one class called Game, in which there was a constructor method with the entire game played out in it.

It could have been written in BASIC.  It took us a while but we managed to manipulate the code into some kind of OOP structure.

But yes, it is very possible to write procedurally in almost any language.

Posted by Thomas Mercer-Hursh on 06-Mar-2010 11:12

It isn't that I like to argue as much as I like to teach people to think ... that, and that I dislike letting things go which I think might mislead people.

Perhaps what I have to say could be said in fewer words ... but it would take both me and the other side to coordinate that presentation so that it would be efficient.

BTW, one of the reasons the procedural Java thing is very much in my mind is because I have paid a lot of attention over the last couple of years to the four companies that are offering services to translate ABL to Java or C#.  The result of such translations is only nominally packaged in OO form, but is as procedural as the code it was taken from.  Good OO happens because of clear concept, not by using OO forms.

Posted by Admin on 06-Mar-2010 15:47

It isn't that I like to argue as much as I like to teach people to think ...

You think people around here need training in thinking???

Posted by Thomas Mercer-Hursh on 06-Mar-2010 16:00

You think people around here need training in thinking???

Most people do, myself included, especially when they are starting to work in a new area.

Despite having started with OO in a serious way back about 1995, I've had to do a lot of hard thinking the last year or two about how all of it applies in an ABL context and how to do it best.  I've gotten some good mentoring in that from outside the ABL world for which I am very grateful.

This thread is closed