OO Wish List

Posted by Thomas Mercer-Hursh on 17-Oct-2006 11:35

Someone beat me to the first post in the new forum, but let me draw your attention to my OO Wish List at http://oehive.org/OOWishList

All Replies

Posted by gus on 18-Oct-2006 14:09

Nice list. Three comments and two questions:

4) It would be more useful if you prioritised the items.

3) If you could have any 3 items on your list, but only 3, which ones would you choose, and why?

2) Some of the items have nothing to do with oo4gl and should perhaps be on a separate list

1) Some interesting languages to study are: Scheme, Dylan, Erlang, Lua, and Fortress.

0) Why aren't closures on your list?

-gus

Posted by Thomas Mercer-Hursh on 18-Oct-2006 14:20

4) It would be more useful if you prioritised the items.

Eventually, I might, but I was thinking to do that more by building business cases for each item that would convey its importance and then to combine that with its difficulty of implementation to give an overall sense of where it should come in the priority queue.

3) If you could have any 3 items on your list, but only 3, which ones would you choose, and why?

I'm sure this changes depending on what I am working on. E.g., I recently tried to apply a certain beta language feature to my previously published collection classes, expecting that it would help me make them simpler and easier to use, but then realized that the new feature was only a piece of what I needed and that I needed either generics or object versions of primitives in order to really do the kind of transformation I was hoping for. Consequently, right at the moment, those are two hot buttons.

2) Some of the items have nothing to do with oo4gl and should perhaps be on a separate list

Which do you mean? I think that there are some that might appear on more than one list, but they are on this list also because they relate to my OO development work.

0) Why aren't closures on your list?

Please post a comment to add it.

Posted by Admin on 19-Oct-2006 01:35

I'm sure this changes depending on what I am working on. E.g., I recently tried to apply a certain beta language feature to my previously published collection classes,

Since you promoted the hive site very well (I can see a reference in almost every forum), I took a brief look at your collection classes. I don't want to argue about implementation details, but due to the nature of the 4, ehh, ABL, you have implemented the collections via temp-tables. Deleting an item in the "virtual array" means renumbering the underlying temp-table. Performance is not the first priority when prototyping, but do you think this approach is feasible? I mean, you store object references; when will these objects be disposed of, since the ABL has no garbage collection?
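
To make the concern concrete, here is a minimal, hypothetical sketch of such a temp-table-backed sequential collection (the class and member names are placeholders, not the published code); note how removing an element forces a renumbering pass over the temp-table:

/* ListCollection.cls - hypothetical sketch of a "virtual array" backed by a temp-table */
CLASS ListCollection:

    DEFINE PRIVATE TEMP-TABLE ttItem NO-UNDO
        FIELD iSeq AS INTEGER
        FIELD oRef AS Progress.Lang.Object
        INDEX ixSeq IS PRIMARY UNIQUE iSeq.

    DEFINE PRIVATE VARIABLE iLast AS INTEGER NO-UNDO.

    METHOD PUBLIC VOID Add(INPUT poItem AS Progress.Lang.Object):
        iLast = iLast + 1.
        CREATE ttItem.
        ASSIGN ttItem.iSeq = iLast
               ttItem.oRef = poItem.
    END METHOD.

    /* deleting from the middle means renumbering every following row */
    METHOD PUBLIC VOID RemoveAt(INPUT piSeq AS INTEGER):
        FIND ttItem WHERE ttItem.iSeq = piSeq NO-ERROR.
        IF NOT AVAILABLE ttItem THEN RETURN.
        DELETE ttItem.
        FOR EACH ttItem WHERE ttItem.iSeq > piSeq:
            ttItem.iSeq = ttItem.iSeq - 1.
        END.
        iLast = iLast - 1.
    END METHOD.

END CLASS.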

Due to problems like this (the missing GC), I really question whether OOABL is suitable for a model driven architecture: a data driven architecture, where temp-tables flow through objects, seems like a more natural approach for the ABL. Don't get me wrong: with "more natural" I mean the target language, the ABL, not my preference or the architect's preference.

Back to your OO wish list: when you want to say goodbye to temp-tables & datasets and you want to represent entities as class instances, I can understand your wish list. Perhaps a question to Gus: do you think this is the way to go? It would have been nice if we could have asked Mary, our dynamic temp-table, query & ProDataSet hero, as well.

Posted by Thomas Mercer-Hursh on 19-Oct-2006 11:42

I don't want to argue about implementation details,

Feedback is always welcome. That is why this is a .01 release. You will see lots of things in the doc that aren't addressed yet. Not to mention how much prettier one could make them with method overloading and either generics or class equivalents of primitives. I don't pretend that this is anything other than a first cut to get things moving.

Deleting an item in the "virtual array" means renumbering the underlying temp-table.

Only if you are merely keeping a sequential collection. Any time you are using a map, which I expect to be 99% of the cases, then no renumbering is required. I also think that in most cases the collection is the result of a query and one does something with it, but there is no significant amount of addition and deletion of entries. That happens, of course, but mostly in sets that aren't very large.

As to garbage collection, like anything else, the cleanup has to happen in the place where the object is created.
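
For contrast, a minimal, hypothetical sketch of the keyed (map) case, where removal touches only one row and disposal of the stored objects is left to whoever created them (names are placeholders):

/* MapCollection.cls - hypothetical sketch of a keyed collection */
CLASS MapCollection:

    DEFINE PRIVATE TEMP-TABLE ttEntry NO-UNDO
        FIELD cKey AS CHARACTER
        FIELD oRef AS Progress.Lang.Object
        INDEX ixKey IS PRIMARY UNIQUE cKey.

    METHOD PUBLIC VOID Put(INPUT pcKey AS CHARACTER, INPUT poItem AS Progress.Lang.Object):
        FIND ttEntry WHERE ttEntry.cKey = pcKey NO-ERROR.
        IF NOT AVAILABLE ttEntry THEN
        DO:
            CREATE ttEntry.
            ttEntry.cKey = pcKey.
        END.
        ttEntry.oRef = poItem.
    END METHOD.

    METHOD PUBLIC Progress.Lang.Object Get(INPUT pcKey AS CHARACTER):
        DEFINE VARIABLE oResult AS Progress.Lang.Object NO-UNDO.
        FIND ttEntry WHERE ttEntry.cKey = pcKey NO-ERROR.
        IF AVAILABLE ttEntry THEN oResult = ttEntry.oRef.
        RETURN oResult.
    END METHOD.

    /* removing an entry touches only that row; no renumbering */
    METHOD PUBLIC VOID Remove(INPUT pcKey AS CHARACTER):
        FIND ttEntry WHERE ttEntry.cKey = pcKey NO-ERROR.
        IF AVAILABLE ttEntry THEN DELETE ttEntry.
    END METHOD.

    /* no garbage collection in 10.1A/B: whoever created the members must
       delete them; this helper is only for contents the collection owns */
    METHOD PUBLIC VOID DeleteContents():
        FOR EACH ttEntry:
            IF VALID-OBJECT(ttEntry.oRef) THEN DELETE OBJECT ttEntry.oRef.
            DELETE ttEntry.
        END.
    END METHOD.

END CLASS.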

I think you are going to have to explain your point about MDA ... I don't get it.

I am very much oriented toward an OO4GL, not wanting to merely reinvent all the things that happen in an OO3GL. I see no reason to get rid of power.

Posted by Admin on 19-Oct-2006 13:22

I think you are going to have to explain your point about MDA ... I don't get it.

There is the data centric architecture on one side. You define messages, documents, datasets, you name it, on one side and operations on the other side. You create the data structure and hand it over to a method (service) which processes the data. The service delegates its work to other services, passing the data down the line. Basically this is the way we work with ABL (and it's a way to work in .Net using datasets). A 4GL temp-table is disconnected from its behavior by default: you can change a value in a buffer without any logic guarding it.

There is the domain model way of doing things on the other side: you have classes that expose properties and operations. Object relational mapping is popular in this arena, since persistence is considered to be just another aspect of the code. In this environment the "data transfer object" is used to cross network boundaries.

As far as I have interpreted your forum messages, you want to create OOABL classes that behave like the domain model. This means you will hide temp-tables and buffers and you will expose entity class instances instead. So for every order you will have an Order instance that exposes attributes like "OrderId", "OrderDate", etc. It will also expose an order lines collection. Now I'm wondering if the OOABL is suited to work like this...
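
A minimal sketch of the contrast (calcOrderTotal.p and the Order class are hypothetical placeholders):

/* data-centric: the data structure travels, the logic lives in services */
DEFINE TEMP-TABLE ttOrder NO-UNDO
    FIELD OrderNum  AS INTEGER
    FIELD OrderDate AS DATE.

RUN calcOrderTotal.p (INPUT TABLE ttOrder).  /* any caller can change the buffer directly */

/* domain-model: state and behavior live together in an entity instance */
DEFINE VARIABLE oOrder  AS Order   NO-UNDO.
DEFINE VARIABLE deTotal AS DECIMAL NO-UNDO.

oOrder  = NEW Order(10021).     /* assumed to expose OrderNum, OrderDate and an order lines collection */
deTotal = oOrder:GetTotal().    /* the logic is guarded inside the class */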

Posted by Thomas Mercer-Hursh on 19-Oct-2006 15:10

I think you must be reading things into my posts that I am not aware of. To me, if one is working in an OO paradigm, then everything that is not directly interacting with the data source works in terms of objects. What is in that object and how it behaves is according to how one designs it, not how it is stored. Object relational mapping happens only at the interface and then only if the data source is a relational DB (it might be XML over the wire, for example).

Now I'm wondering if the OOABL is suited to work like this...

And, why not? I suppose that the existence of a wish list is an acknowledgement that I would like more complete OO support than I have so far, but that doesn't mean that one can't do real work. So far, I have run into places where I had to do a workaround, e.g., the one I did for singletons, and I have run into places where the implementation was less elegant than one might wish, e.g., collection classes, but not instances where one can't get the job done. I don't pretend to have everything figured out yet, but I'm working on it ...

Posted by Admin on 20-Oct-2006 02:26

I think you must be reading things into my posts that I am not aware of. To me, if one is working in an OO paradigm, then everything that is not directly interacting with the data source works in terms of objects. What is in that object and how it behaves is according to how one designs it, not how it is stored.

In theory this is all nice, but in reality it's much harder. When you reach the UI tier, it's all data again that you bind to. Sure, you can bind to plain .Net objects in .Net 2.0 nowadays, but for an ABL widget it's not possible to bind to an OOABL instance (as far as I can see). And once you're mixing ABL logic with .Net UI, you will be marshalling datasets, so that's disconnected data as well (disconnected from business logic).

Posted by Thomas Mercer-Hursh on 20-Oct-2006 12:06

for an ABL-widget it's not possible to bind to an OOABL-instance

Well, I'm not sure what you are saying here. If the UI is being presented by an ABL client, then the UI is defined within a class for that purpose. If the UI is being presented by something else, then there is an interface to the something else and the nature of that interface will depend on the something else. I'm not sure what you are trying to bind to what and not succeeding ( unsuccessful bondage? ).

Posted by Admin on 20-Oct-2006 12:43

( unsuccessful bondage? ).

I give up... and I go back to the C# world I have been working in for the last couple of years. Bye, bye!

Posted by Thomas Mercer-Hursh on 20-Oct-2006 12:51

Don't leave ... I'm sure that you have a point which is clear to you, it just isn't clear to me.

Posted by Phillip Magnay on 20-Oct-2006 14:38

Don't leave ... I'm sure that you have a point which is clear to you, it just isn't clear to me.

Yes. Please come back whenever you can. We really need more experienced and thoughtful OO people sharing their views here with the community, not fewer.

Phil

Posted by Tim Kuehn on 20-Oct-2006 15:07

for an ABL-widget it's not possible to bind to an OOABL-instance

Well, I'm not sure what you are saying here. If the UI is being presented by an ABL client, then the UI is defined within a class for that purpose. If the UI is being presented by something else, then there is an interface to the something else and the nature of that interface will depend on the something else. I'm not sure what you are trying to bind to what and not succeeding ( unsuccessful bondage? ).

I'm thinking he's saying that when using non-ABL UIs, there are no OOABL objects for the UI to "attach to" and interact with - it's all data being marshalled across the wire.

Effectively, you'll need to either duplicate the OO BL on the other side, or have lots of chat between the UI and the backend as the user does their thing.

Posted by Thomas Mercer-Hursh on 20-Oct-2006 15:28

I'm thinking he's saying that when using non-ABL UIs, there are no OOABL objects for the UI to "attach to" and interact with - it's all data being marshalled across the wire.

While there is certainly an issue about what can be sent across the wire, the specific piece I was responding to was "for an ABL-widget it's not possible to bind to an OOABL-instance". To me, that sounds like an ABL widget, presumably in the context of an ABL session, and an ABL domain object instance ... i.e., no non-ABL components involved.

It seems to me that we have several possible variations here and I'm not sure what the issue is that is being raised.

If we have an all ABL solution, then the ABL widgets are within a class, but one would expect this class to be distinct from the domain class per MVC principles.

If there is a non-ABL UI, then obviously those UI widgets can't bind directly to the ABL domain object, but then one wouldn't want them to per MVC principles. One could send the UI object a copy of the data in the form of a PDS ... and that certainly wouldn't have all of the BL associated with the domain object, but I thought we agreed that this separation is what one should do per MVC principles.
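
A minimal sketch of that hand-off, assuming a hypothetical ui/customerView.p presenter:

DEFINE TEMP-TABLE ttCustomer NO-UNDO
    FIELD CustNum AS INTEGER
    FIELD Name    AS CHARACTER.
DEFINE DATASET dsCustomer FOR ttCustomer.

/* the UI layer gets a copy of the data, not the domain object itself */
RUN ui/customerView.p (INPUT DATASET dsCustomer).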

See http://www.psdn.com/library/thread.jspa?threadID=2211&tstart=0 for a discussion of MVC, including both the basic version and the higher level of separation which I was advocating there.

I.e., OK, so either one can or one can't bind the widget to the domain object, but why is this a problem, since one shouldn't?

Effectively, you'll need to either duplicate the OO BL on the other side, or have lots of chat between the UI and the backend as the user does their thing.

Some BL probably does belong in the UI, e.g., stuff at the level of "this field must contain a valid integer". I don't see how one can completely avoid this without either violating MVC or having a really annoying and deadly slow interface. But, the "interesting" parts of the BL don't belong in the UI. Some of them require interaction with other domain objects and some possibly can't be really validated without reference to the database. One certainly doesn't want that in the UI! So, one strikes a balance between ensuring that one has a superficially valid set of data at the UI level and relying on the back end as to whether this is completely acceptable or not.

A simple classic example is multiple order entry sessions which obtain information that a limited number of a certain item are still available. But, both sessions are working independently and neither has committed its work. Then, one session commits and now those items are no longer available. If we had inter-session pub-sub, one could imagine the domain object signalling the second session of the need for a refresh, but even then there is a certain risk of the second session attempting to commit, and only then is it discovered that the basis on which that commit seemed reasonable was stale and the commit must be rejected.

Posted by john on 25-Oct-2006 15:58

To me, if one is working in an OO paradigm, then everything that is not directly interacting with the data source works in terms of objects. What is in that object and how it behaves is according to how one designs it, not how it is stored. Object relational mapping happens only at the interface and then only if the data source is a relational DB (it might be XML over the wire, for example).

There are a number of interesting and different points being discussed in this thread, but let me try to put in a thought on one or two of them. One is that, at the risk of being simplistic, one of the benefits of the class-based extensions to ABL that we have been striving for is to let you be "as OO as you wanna be". You don't need to be having a spiritual object-oriented experience to be taking advantage of at least some of the benefits. As Thomas has been pointing out in several of his posts, in one way or another, a lot of the mundane issues are there whether you're trying to express things in object-oriented terms or not, like what parts of the BL can reasonably be executed on the client and what you do about it, or how you bind data values to the UI in reasonable ways.

The specific point I quoted above is a key one, though, and one where there is ample room for debate. If I understand the point, I think I disagree about everything above the data source level being in terms of objects. I think one of the continuing benefits of working in ABL is that you can treat data in a fairly object-oriented way where you really want to, but you're not forced to. If you want to encapsulate the (originally relational) data very tightly in objects, you can certainly do that, but there are many times when it's still greatly advantageous to use good old FOR EACH loops and direct field references and so on in your logic, and as long as you control at an appropriate level who has access to the bare tables and DataSets in that direct manner, then you should be able to provide the level of control that is part of the OO paradigm. So I can provide limited or no direct access to my temp-tables etc. from outside a business entity (to use that as an example), but within the b.e. (and away from the data source) it can still be a great advantage to express business logic in the same way as we have always done, which is definitely relational -- fully OO languages don't really enable that. It's true that I can't pass an object across the wire yet in ABL, but then, given the percentage of the BL that represents the supporting logic for that object that I can't execute on the client anyway, that isn't necessarily a hindrance. I can encapsulate a relational DataSet in an object on the server-side, and then pass just the data across the wire to a separate object on the client side with very different responsibilities.
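
A minimal, hypothetical sketch of that arrangement: classic relational code inside the entity, a controlled object surface outside (all names are placeholders):

/* OrderEntity.cls */
CLASS OrderEntity:

    /* the 'raw' relational data is private to the entity */
    DEFINE PRIVATE TEMP-TABLE ttOrderLine NO-UNDO
        FIELD LineNum  AS INTEGER
        FIELD Quantity AS INTEGER
        FIELD Price    AS DECIMAL
        INDEX ixLine IS PRIMARY UNIQUE LineNum.

    /* trusted internal logic still reads like classic 4GL */
    METHOD PUBLIC DECIMAL GetOrderTotal():
        DEFINE VARIABLE deTotal AS DECIMAL NO-UNDO.
        FOR EACH ttOrderLine:
            deTotal = deTotal + ttOrderLine.Quantity * ttOrderLine.Price.
        END.
        RETURN deTotal.
    END METHOD.

    /* less privileged consumers get controlled operations, not the buffers */
    METHOD PUBLIC VOID AddLine(INPUT piQty AS INTEGER, INPUT pdePrice AS DECIMAL):
        DEFINE VARIABLE iNext AS INTEGER NO-UNDO.
        FIND LAST ttOrderLine NO-ERROR.
        IF AVAILABLE ttOrderLine THEN iNext = ttOrderLine.LineNum.
        CREATE ttOrderLine.
        ASSIGN ttOrderLine.LineNum  = iNext + 1
               ttOrderLine.Quantity = piQty
               ttOrderLine.Price    = pdePrice.
    END METHOD.

END CLASS.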

In general, you can take advantage of various benefits of the class-based language, including strong typing and some measure of inheritance and so on, and not be very truly object-oriented at all. Or you can be much more thorough in how much you treat things as objects and not just as code expressed as classes and methods. It's up to you; lots of rope to hang yourself with, of course, so good development models and guides are needed, but the flexibility is there.

Posted by Thomas Mercer-Hursh on 25-Oct-2006 16:24

I think I disagree about everything above the data source level being in terms of objects.

I hope that you are suggesting that can be in terms of temp-tables and ProDataSets, rather than something else.

If so, I have to wonder what possible advantage there is to not wrapping these in an object. If one passes temp-tables raw, then one has the ugly necessity of having temp-table definitions at both ends. Wrap that temp-table in an object and this problem disappears; one just uses the methods of the object to manipulate and access the data in the temp-table.

there are many times when it's still greatly advantageous to use good old FOR EACH loops

Nothing about using objects requires the for each loop to disappear. I am sure that, if you look in my collection classes you will find several of them. All I am wanting to do is to encapsulate them in objects so the code is re-used and centralized.

and direct field references

I hope you don't mean database fields somewhere above the data access layer?

Using properties, is there much difference between myTempTable.myField and myObject:myProperty?

it can still be a great advantage to express business logic in the same way as we have always done, which is definitely relational -- fully OO languages don't really enable that.

I would be interested in some examples. It seems to me that one can put just about any logic inside an object that one could put in a procedure. Encapsulating that logic in an object is little different than encapsulating it in a PP or SP so that it becomes a service rather than in-line code.

I can encapsulate a relational DataSet in an object on the server-side, and then pass just the data across the wire to a separate object on the client side with very different responsibilities.

Which is one of the reasons I'm not currently that concerned about not being able to send the actual object. Sending just the data means that at the destination one can instantiate either an identical clone or it can be a related object with different properties according to its role in that context, e.g., it might be partially or fully read only.

but the flexibility is there.

Obviously, PSC has to provide a flexible tool because it has to support all the existing code and all the existing methodologies in existing shops. That has always been one of PSC's great strengths, although the current price of that strength is also high, i.e., the large number of people running on old versions.

But, I think people need to be very aware that there are choices which they should be making very consciously. Wandering back and forth between objects and procedures willy-nilly probably is not going to produce the best possible code. It certainly is going to complicate any effort to use formal modeling. Adopting OO is a discipline. Reaching out to grab a global variable when one can't figure out how to do something in the right OO way is not going to result in great code.

Posted by john on 26-Oct-2006 09:54

I hope that you are suggesting that can be in terms of temp-tables and ProDataSets, rather than something else.

Definitely, yes. Having a consistent in-memory logical representation of the data to pass around and work with is fundamental.

If so, I have to wonder what possible advantage there is to not wrapping these in an object.

I guess what I'm trying to say is that, yes, on the one hand, you should definitely wrap that data in an object -- call it a Business Entity if that's what it is, or call it something else. But as with my later comment about FOR EACH's, it still seems appropriate to treat that data in a relational way within the DataSet, in those places within the object where the code is privileged enough to be allowed to work with 'raw' temp-tables, and in a way that is close to the way developers have coded in ABL all along (even when we called it the Progress 4GL...). Less privileged consumers see the data as an object and have more limited and controlled access to it. If the alternative is treating the data in a more object-oriented way even internal to the heart of the business logic, for example, treating fields as properties and putting a collection layer in between the temp-tables and the treatment of them, or whatever is involved, then that seems like more work that may not be strictly necessary and will get further away from the traditional 4GL/ABL value that is still very relevant.

Nothing about using objects requires the for each loop to disappear. I am sure that, if you look in my collection classes you will find several of them. All I am wanting to do is to encapsulate them in objects so the code is re-used and centralized.

So does it come down to a coding style within the business logic itself?

and direct field references

I hope you don't mean database fields somewhere above the data access layer?

Absolutely; I'm not advocating any direct physical data references in the business logic, only that within 'trusted' logic inside the object, working with the 'raw' temp-tables and DataSets can be a close match to the way people work with the database data.

Using properties, is there much difference between myTempTable.myField and myObject:myProperty?

Well, maybe the syntax looks pretty close, but the implication is that you've wrapped every field in a property, which is a lot of work, and then provided indirect access to the field through the property. Especially because the place where the property/field value is set or changed is typically not where all the necessary validation can take place, it doesn't seem to add enough value to justify the indirection in most cases. Using the DataSet in the way it's designed, with its before-and-after buffers, and then doing comprehensive validation when you get back to where you have all the access you need, still seems reasonable even in the context of wrapping the data in objects.

It seems to me that one can put just about any logic inside an object that one could put in a procedure.

No doubt, but it seems it could be more work and further from the continuing value of our language. Fowler, for one, in his patterns book, states that maybe a third of the total development effort of building a business app in an object-oriented language is bridging the object-relational gap. Our language provides the flexibility to treat data relationally but still wrap it in objects when you want that level of indirection.

I can encapsulate a relational DataSet in an object on the server-side, and then pass just the data across the wire to a separate object on the client side with very different responsibilities.

Which is one of the reasons I'm not currently that concerned about not being able to send the actual object. Sending just the data means that at the destination one can instantiate either an identical clone or it can be a related object with different properties according to its role in that context, e.g., it might be partially or fully read only.

Exactly.


But, I think people need to be very aware that there are choices which they should be making very consciously. Wandering back and forth between objects and procedures willy-nilly probably is not going to produce the best possible code.

Very true. We just need to deal with the reality that people can rarely completely rebuild from scratch in one go or completely rearchitect an existing application in one go. So interaction between objects and procedures is a way of bridging the gap between old and new and having a pathway forward.

Posted by Thomas Mercer-Hursh on 26-Oct-2006 11:11

If the alternative is treating the data in a more object-oriented way even internal to the heart of the business logic, for example, treating fields as properties and putting a collection layer in between the temp-tables and the treatment of them, or whatever is involved, then that seems like more work that may not be strictly necessary and will get further away from the traditional 4GL/ABL value that is still very relevant.

To be sure, one of the obvious alternatives to collection classes like my generic ones is to create a collection class per object type containing a traditional temp-table. I can see that this might have some advantages in some cases, e.g., if one has a need for two indexes on the same collection at the same time. But, creating this kind of specialized collection class is actually more work, not less. With generic collection classes the job is already done; only the basic domain class needs to be defined. And, using a domain class keeps one from having to include a temp-table definition everywhere one wants to reference the collection.

Have you looked at my collection classes? They are based on temp-tables exactly because of the power they provide in terms of automatic ordering, not to mention the sophistication of spilling out to disk when they become large. Using temp-tables makes them significantly more sophisticated than their Java counterparts. To me, this is using the power of ABL to do a higher level of OO.

Well, maybe the syntax looks pretty close, but the implication is that you've wrapped every field in a property, which is a lot of work,

Err, here is the set of properties for a sports2000 Customer:

define public property cin_CustNum as integer no-undo get . private set .
define public property cch_Country as character no-undo get . set .
define public property cch_Name as character no-undo get . set .
define public property cch_Address as character no-undo get . set .
define public property cch_Address2 as character no-undo get . set .
define public property cch_City as character no-undo get . set .
define public property cch_State as character no-undo get . set .
define public property cch_PostalCode as character no-undo get . set .
define public property cch_Contact as character no-undo get . set .
define public property cch_Phone as character no-undo get . set .
define public property cch_SalesRep as character no-undo get . set .
define public property cde_CreditLimit as decimal no-undo get . set .
define public property cde_Balance as decimal no-undo get . set .
define public property cch_Terms as character no-undo get . set .
define public property cin_Discount as integer no-undo get . set .
define public property cch_Comments as character no-undo get . set .
define public property cch_Fax as character no-undo get . set .
define public property cch_EmailAddress as character no-undo get . set .

How is that a lot more work than a temp-table with the same fields? And how is this indirect compared to a temp-table? Especially since, if the temp-table is encapsulated in the domain object, one has to provide methods to access the values in the temp-table. With properties, the total additional cost to access a temp-table of Progress.Lang.Object as a collection is one line to cast it to the specific domain object. After that, as my prior example shows, the difference is a colon instead of a period.
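
A minimal usage sketch of that one-line cast (oCustomerMap is a hypothetical collection whose Get method returns Progress.Lang.Object):

DEFINE VARIABLE oCustomer AS Customer NO-UNDO.

/* one CAST, then the properties read almost like temp-table fields */
oCustomer = CAST(oCustomerMap:Get("1001"), Customer).
MESSAGE oCustomer:cch_Name oCustomer:cde_Balance
    VIEW-AS ALERT-BOX.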

I certainly agree that ProDataSets have a lot of promise for the data access layer, where the before and after buffer has some real potential utility. I'm not sure that there is a benefit to flinging them all around though, at least as a general rule.

Fowler, for one, in his patterns book, states that maybe a third of the total development effort of building a business app in an object-oriented language is bridging the object-relational gap.

Given the right toolset, this seems overly pessimistic, especially for a new application where one has control over the design of the database. But, of course, this could also be a reflection that, once one had created a good set of objects, one third of the work was done.

We just need to deal with the reality that people can rarely completely rebuild from scratch in one go or completely rearchitect an existing application in one go. So interaction between objects and procedures is a way of bridging the gap between old and new and having a pathway forward.

Having patterns for the interaction of the two is, of course, not the same thing as having a conceptually blurry line between the two. The former is having a clear design approach for each paradigm as well as a paradigm for the interaction. The latter is not having a clear pattern for any of it.

This is an area where I think PSC should be looking to provide strong models. For example, that ghastly example in chapter 5 of the GSOOP book, once it gets fixed, could easily be extended to show a mixed procedural and object implementation. We need to be identifying issues and creating patterns for how to deal with them, like my paper on substituting an object for a session super.

Posted by john on 06-Nov-2006 14:54

To be sure, one of the obvious alternatives to collection classes like my generic ones is to create a collection class per object type containing a traditional temp-table.

...

Err, here is the set of properties for a sports2000 Customer:

define public property cin_CustNum as integer no-undo get . private set .
define public property cch_Country as character no-undo get . set .
...

Thomas, sorry to be delayed in getting back to you. I've been busy hacking into all the neighborhood electronic voting machines in preparation for tomorrow's elections...

OK, I can see what you're doing, but I'm not yet seeing what the special advantages are. You show how you define each data item as a property (in anticipation of new language syntax in 10.1B). It seems that this has to be done on top of defining temp-tables and a ProDataSet to put them in (since you acknowledge that at least at the Data Access level, the PDS can be a useful feature). In addition, the property values have to map to their respective temp-table fields (the getters in your properties could do this, I suppose; do they in the full version of your code?). So what is the advantage of treating them as properties (other than to make values like CustID read-only, which can be handy)? Putting substantial validation code into the setters wouldn't seem advisable, since one of the ideas behind the property syntax is that there's no clear distinction from the accessor's perspective between a property and an ordinary data member (variable), so setting one shouldn't have surprising consequences (like getting an error). I don't think you'd ordinarily want to make a temp-table public either, in comparison with your public properties. And in today's language, you can pass only the temp-table or PDS as a parameter, not an object that encapsulates it. Can you clarify a bit more?

Posted by Thomas Mercer-Hursh on 06-Nov-2006 15:45

I've been busy hacking into all the neighborhood electronic voting machines in preparation for tomorrow's elections...

Apparently, around here you don't need to hack. There is a little yellow button on the back which, if you press it, will allow one to vote again and again. Unfortunately, it beeps and someone might notice.

OK, I can see what you're doing, but I'm not yet seeing what the special advantages are.

This was a reaction to your claim that providing such properties was a lot of work. With the anticipated property syntax, it is no more work than defining the fields in a temp-table. Not to mention, of course, that one would certainly hope that any basic objects like this should be created with some kind of generator assist.

this has to be done on top of defining temp-tables and a ProdataSet to put them in (since you acknowledge that at least at the Data Access level, the PDS can be a useful feature)

At present, the only temp-tables in my model are in my collection classes. There is no temp-table in the domain class because of the problem we found on the PEG about the impact of having 10s of thousands of temp-tables in a single session. Otherwise, it could be attractive for the ease of XML input and output, although these are not difficult to create without the special methods, i.e., easy to generate. I am reserving judgment on the PDS for the moment. Certainly, it is less attractive than it might be because there is currently no facility for a direct way to create objects instead of temp-table rows. This might cause me to forgo their attractive features in the same way that I am forgoing the attractions of READ-XML and WRITE-XML.

In addition, the property values have to map to their respective temp-table fields (the getters in your properties could do this

No, the property is the value. There is no temp-table in the domain class. Look at the code I posted in the beta forum. The finder and mapper objects there are placeholder skeletons, but the domain object shows the basic structure.

other than to make values like CustID read-only, which can be handy

Well, that is handy, but the main point of the object approach is that all of the properties and logic of the business entity get wrapped in a domain object and then one no longer needs to have a temp-table definition included wherever one wants to access even one of those properties. The business entity is strongly encapsulated and whatever goes on inside of it need not be known by the outside world. It is a great place to hide denormalized fields. The only bad part about properties is that they only work for single values and one has to resort to accessor methods for things like setCoordinatePoint( x, y ).
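
A minimal sketch of that accessor-method fallback, using a hypothetical CoordinatePoint class:

CLASS CoordinatePoint:

    DEFINE PUBLIC PROPERTY X AS DECIMAL NO-UNDO GET . PRIVATE SET .
    DEFINE PUBLIC PROPERTY Y AS DECIMAL NO-UNDO GET . PRIVATE SET .

    /* a property takes one value at a time; a paired update needs a method */
    METHOD PUBLIC VOID SetCoordinatePoint(INPUT pdeX AS DECIMAL, INPUT pdeY AS DECIMAL):
        ASSIGN X = pdeX
               Y = pdeY.
    END METHOD.

END CLASS.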

Putting substantial validation code into the setters wouldn't seem advisable, since one of the ideas behind the property syntax is that there's no clear distinction from the accessor's perspective between a property and an ordinary data member (variable), so setting one shouldn't have surprising consequences (like getting an error).

Well, that could be an argument in favor of using accessors all the time, which does mean more tedious coding (i.e., the need for a generator), but I don't know that I agree with the observation. To me, the virtue of properties is that they are very economical of code in definition while providing a full range of services such as read only or set only properties and any needed logic going in or out. Some people seem to also like the simplicity of referencing them. Public data members are also simple, but simply unacceptable. Properties provide the same simplicity, but safely and with control. Why is getting an error on an assignment so surprising ... haven't we had ASSIGN ... NO-ERROR in the language for a long time?

Can you clarify a bit more?

Have I?

Posted by Admin on 04-Jan-2007 15:27

I've added "dynamic invocation" to the Hive site. Its below just for reference.

I'd like to see dynamic invocation and reflection (as above). These can be confused as the same but are not. Reflection is about querying an object/type for what it is (e.g. methods, parameters and types). Dynamic invocation is about creating an object based on runtime data.

E.g.

def input parameter createClassName as char no-undo.

myCreatedClass = NEW value(createClassName) ().

Muz

Posted by Thomas Mercer-Hursh on 04-Jan-2007 18:48

Of course, dynamic invocation like this tends to remove all of the strong typing advantages of objects ...

What can you achieve with it that you can't achieve with something like a case statement? Or, in many cases, by overloading the methods to do appropriately different things with different data types?

Posted by Tim Kuehn on 04-Jan-2007 20:42

What can you achieve with it that you can't achieve with something like a case statement? Or, in many cases, by overloading the methods to do appropriately different things with different data types?

Or by creating a "meta" object which instantiates and calls the appropriate delegate class instance?

Posted by Muz on 07-Jan-2007 15:06

A case statement? No, that is way too much work to maintain. I'm looking for simple here. You could use this to work out what type of object you wanted as well. We could also use it like the old "switch table" DB lookup.

A "meta" class is ok again but it needs to be maintained.

Posted by Tim Kuehn on 07-Jan-2007 15:42

A "meta" class is ok again but it needs to be

maintained.

It doesn't matter what you write, you're still going to need to maintain it - and embedding the class instantiation in a "VALUE()" statement would spread that maintenance all over the application.

At least with a meta instance the developer only has to look in one spot to update things.

Posted by Muz on 07-Jan-2007 15:45

But you could use something like dependency injection (DI) / inversion of control (IoC) (http://en.wikipedia.org/wiki/Dependency_injection) - as in the Spring Framework - and "inject" what you want to run. I was thinking of running an application and looking up a DB to see what type of class I needed to instantiate.

E.g. allow the user to enter a country name, then use this to look up the DB to get the correct "concrete" class type I want to run. It should also come with all the necessary items for that specific country. This country could then look up the type of "markets" that run in it and dynamically instantiate them as well.

OR I could write the code generically and "inject" the country I'm in at startup with all its dependencies etc....

Posted by Thomas Mercer-Hursh on 07-Jan-2007 16:21

I'm going to have to see some real solid concrete cases of where this approach actually achieves something that can't be reasonably achieved in another way before I either wish for it or consider using it. Constructs kindred to RUN VALUE() are among the worst things one can do to an application for making it impossible to analyze and trace, thus greatly complicating maintainability. So what if it lets you write one line instead of five? If five is clear and deterministic, then it should be five.

Posted by Tim Kuehn on 07-Jan-2007 20:25

I'm going to have to see some real solid concrete cases of where this approach actually achieves something that can't be reasonably achieved in another way

The only situation I can think of where going to a DB to figure out what classes to instantiate would make sense is where there were some seriously complicated or non-linear (i.e., not easily codeable, so it has to be table-driven) business rules that can be only, or more easily, tracked and maintained as DB records rather than in code.

Having said that, I would think that whatever these business rules are, they could be resolved to a collection of specific categories of behavior. Those categories of behavior could then be mapped to the appropriate set of statically defined classes / procedure instances as required.

Posted by Admin on 08-Jan-2007 02:17

An example of dependency injection is constructor injection. This means that an object gets all its dependent objects via the constructor. When you combine it with interfaces and an assembler pattern, you create a configurable application. When you abstract the assembler pattern and generalize it, you get a framework like PicoContainer (http://www.picocontainer.org/) or the MicroKernel (http://www.castleproject.org/container/gettingstarted/part1/code.html).

An example of this approach in relation to this thread:

interface ICountryProvider { }

public class Customer
{
    private readonly ICountryProvider countryProvider;

    public Customer(ICountryProvider countryProvider)
    {
        this.countryProvider = countryProvider;
    }

    // ...
}

Here the concrete countryprovider will be created externally from Customer. A CustomerAssembler could select and create a concrete ICountryProvider, a Customer instance and return an initialized Customer. It's the CustomerAssembler's responsibility to assemble a proper Customer instance.

Posted by Thomas Mercer-Hursh on 08-Jan-2007 11:41

This kind of table or logic driven flow of business rules is exactly the kind of thing which I think should often be implemented using ESB business logic tools. But, there is no need in that for the kind of dynamic execution being requested here.

Posted by Thomas Mercer-Hursh on 08-Jan-2007 11:52

What is the purpose of this country provider? What is it supposed to be accomplishing?

I might comment that, while I think there is a lot of potential benefit in looking at what people have done in other OO languages, there are two very good reasons why we need to apply judgment as to whether we want to use what we see.

One is that not everything that is done in the name of OO is good OO. Sometimes the bad OO is a response to a problem encountered in the other OO language which has no really good solution in the language as it stands and sometimes it is just bad OO.

The other is that ABL is a 4GL, not a 3GL. This gives us a somewhat different context, making somewhat different solutions the most appropriate ones. A good example of this is having temp-tables as a language primitive.

Posted by Admin on 08-Jan-2007 12:04

The other is that ABL is a 4GL, not a 3GL. This

gives us a somewhat different context, making

somewhat different solutions the most appropriate

ones. A good example of this is having temp-tables

as a language primitive.

.... says the creator of the 4GL collection class

Posted by Admin on 08-Jan-2007 12:08

I might comment that, while I think there is a lot of potential benefit in looking at what people have done in other OO languages, there are two very good reasons why we need to apply judgment as to whether we want to use what we see.

I was not pushing my opinion, I just gave an example of the mentioned technique to contribute to the discussion.

Posted by Thomas Mercer-Hursh on 08-Jan-2007 12:10

Precisely, since my collection classes are structured quite differently than the Java collection classes, exactly because of the availability of temp-tables. Similarly, I am sure that as OOABL matures, we are going to find many class types which are actually wrappers around a PDS and thus implemented very differently than a corresponding class in an OO3GL.

Posted by Thomas Mercer-Hursh on 08-Jan-2007 12:19

But, what I am still waiting for is an example that shows me why I would want to use this approach rather than another one. Let me illustrate with a little anecdote. When I was working on the Collection Classes, I ran into the problem that there are some limitations in the 10.1A version of OOABL which necessitated having multiple instances of what was highly similar code, something that I consider anathema. While 10.1B helps, this won't really get fixed until we get a few more constructs in the ABL. The obvious alternative is to use include files and preprocessor commands to unify this code, but to me these constructs are also anathema. So, what to do? In the end, I talked myself into using the include/preprocessor route because this was foundation code. I didn't like it, but it seemed to be the most acceptable option. When possible, I will get rid of it.

So, what I am looking for here is a case in point where dynamic invocation is either so stunningly more elegant or, better yet, where it makes something possible that is not otherwise possible. If that example does exist, then I might agree that the tool should be in the language, even if I would recommend that it only be used in foundation code. If that example does not exist, then I would contend that the "evil" aspects of dynamic invocation would argue against it being implemented and that one should use other, possibly slightly less elegant, constructs because of the virtue they have for maintenance, determinability, and clarity.

Posted by Admin on 08-Jan-2007 14:46

But, what I am still waiting for is an example that shows me why I would want to use this approach rather than another one.

In your Finder/Mapper example you have separated concerns:

- finder is one concern

- mapper is another

The mapper could be defined by an interface and you could have concrete mapper implementations XmlMapper, DatabaseMapper and perhaps an ActiveDirectoryMapper for a specific entity. At runtime you may want to configure which mapper the application should use. You can of course put that decision logic in the Finder class, but you can also abstract that logic into an Assembler class. The Assembler is responsible for creating the proper mapper based on some configuration and passing it to the constructor of a Finder. This way the Finder doesn't care about the mapper implementation.

Constructs like this are most likely not easier to understand, but they will add some flexibility to your code. It also isolates the configuration part. This is similar to a Command pattern, where you extract an important piece of code from a class and move it over to a dedicated class. This makes it possible to test that algorithm and potentially change it more easily.
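
A minimal, hypothetical sketch of that arrangement in ABL terms (all names are placeholders; XmlMapper and DatabaseMapper are assumed to implement IMapper):

/* IMapper.cls */
INTERFACE IMapper:
    METHOD PUBLIC VOID FetchCustomer(INPUT piCustNum AS INTEGER).
END INTERFACE.

/* MapperAssembler.cls - decides which concrete mapper gets wired in */
CLASS MapperAssembler:

    METHOD PUBLIC IMapper CreateMapper(INPUT pcConfiguredType AS CHARACTER):
        DEFINE VARIABLE oMapper AS IMapper NO-UNDO.
        CASE pcConfiguredType:
            WHEN "Xml" THEN oMapper = NEW XmlMapper().
            OTHERWISE       oMapper = NEW DatabaseMapper().   /* default implementation */
        END CASE.
        RETURN oMapper.
    END METHOD.

END CLASS.

/* Finder.cls - receives whatever mapper the assembler chose (constructor injection) */
CLASS Finder:

    DEFINE PRIVATE VARIABLE oMapper AS IMapper NO-UNDO.

    CONSTRUCTOR PUBLIC Finder(INPUT poMapper AS IMapper):
        oMapper = poMapper.
    END CONSTRUCTOR.

END CLASS.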

Posted by Admin on 08-Jan-2007 15:00

I'm going to have to see some real solid concrete cases of where this approach actually achieves something that can't be reasonably achieved in another way before I either wish for it or consider using it. Constructs kindred to RUN VALUE() are among the worst things one can do to an application for making it impossible to analyze and trace, thus greatly complicating maintainability. So what if it lets you write one line instead of five? If five is clear and deterministic, then it should be five.

I could send some in, but it's probably easier if you just Google for it. There is lots and lots out there.

Posted by Thomas Mercer-Hursh on 08-Jan-2007 15:14

I think I am a regular contributor at all of the sites where OO ABL is discussed, so I don't know that googling for examples in other languages really is something I am inclined to do. I'm not looking for code, but rather for an explanation which I can evaluate as to whether I think the advantage outweighs the disadvantages.

Posted by Thomas Mercer-Hursh on 08-Jan-2007 15:20

Indeed, an inherent part of my design is that the Finder doesn't care which flavor of Mapper is present, but I wouldn't have a Finder make the decision which kind to instantiate ... that is a configuration issue.

Posted by Muz on 08-Jan-2007 15:29

I think I am a regular contributor at all of the sites where OO ABL is discussed, so I don't know that googling for examples in other languages really is something I am inclined to do. I'm not looking for code, but rather for an explanation which I can evaluate as to whether I think the advantage outweighs the disadvantages.

That's correct. The simplest examples I can come up with are listed below (they are way simple).

1. WebServices - Dynamic Invocation

In this case we use dynamic invocation to call a WebService by parsing the WSDL document that describes the Web Service at run time to generate the corresponding call. The same sort of thing applies in the OO world.

2. Objects

I want to create a new object, let's say a tree. I want to create a new object (that is built on Tree), but which tree I want is in a text field. So I get:

myObject = createNewTree(text).

I don't want to change this method when I add new "trees" to the database; I'll just add the objects in later. So I could do something like this (yeah, it isn't syntactically correct):

def temp-table myTrees no-undo
    field name   as char
    field myTree as TREE.   /* object reference to a Tree instance */

for each tree where tree.treeType = "loosesLeaves" no-lock:
    create myTrees.
    assign
        myTrees.name   = tree.treeName
        myTrees.myTree = createNewTree(tree.treeName).
end.

There are plenty of other (and probably much better) examples out there, but hopefully this helps. I think part of the problem is that there is so much we would like to do (e.g. Aspect Oriented Programming, IoC, DI) and they all add to each other, so it gets confusing which is which.

Muz

PS As an aside:

I would point out that we need to look at other languages in this case as PSC doesn't have anything (except maybe run (value)). Personally I find it much easier to understand a concept if I can see code.

Posted by Admin on 08-Jan-2007 16:14

Other items I'd really like to see included are: Annotations, SSLBs and EJB QL. See this link for a nice simple overview: http://www.javaworld.com/javaworld/jw-08-2004/jw-0809-ejb_p.html

I think it speaks for itself.

Muz

Posted by Thomas Mercer-Hursh on 09-Jan-2007 11:47

Code is certainly useful for making an abstract concept concrete and thus easier to grasp, but I think that the issue we are having here is one level above specific code. The issue is not "here is a way in which this might be used", but rather "here is a place where I must use it" or "here is a way in which this approach is substantially better or faster or something than other approaches".

The reason the question is in this form is because dynamic invocations have at least three nasty features:

1) They break the strong typing advantage of OO;

2) They raise the potential for run-time errors due to no matching target; and

3) They break analysis tools.

To offset this, one needs some compelling advantage, not merely that it is possible to use in this way.

For some dynamic constructs, e.g., dynamic browse, the argument is that there is a reduction in development effort and maintenance because a single generic routine can be used for a number of tables. Personally, I have always felt that generation was a better solution, but that is a somewhat different issue.

It has been suggested that an advantage of dynamic invocation over something like a case statement of specific invocations is that the invoking code doesn't need to be changed when a new target is added. Again, generation can solve this for us, but even without that I think the argument is weak because adding one line to the case is not an arduous task, considering the downsides.

For your web services example, it seems to me that we have a fixed list of potential targets and an input which tells us which target to choose. This can be easily handled by either the case or meta class approach. If we make it dynamic, it is true that we don't need to modify the invocation code to add a new service, but unless we validate the input against a list of acceptable inputs ... i.e., check it with case or lookup or something ... then we run a high risk of runtime error when an unexpected or malformed input is received. But, if we validate it prior to execution, then we might as well have the static execution link.

For your second example it seems to me that most of the example has to do with mechanism which is predefined for the purpose of dynamic invocation rather than any identification of why we want to do any of this in the first place. E.g., suppose that we have a number of related classes which have to do with different types of trees ... why is this not best handled by having a superclass for all trees and using the appropriate decision matrix to invoke the correct type of tree in each instance. In order for this to be useful at all, these subclasses need to have conforming signatures ... so they should be subclasses of the common base class. The only thing we are left with is the decision matrix of what kind of subclass to instantiate and presupposing this to be in a text variable tells us nothing about what that decision matrix is like, nor does it keep us from using a simple case statement to make the choice, thus gaining determinability and protection against bad input.

Posted by Thomas Mercer-Hursh on 09-Jan-2007 12:06

Well, I suppose that depends on who is listening and who is speaking. There is an old observation that a language improvement is much more likely to happen if the request for the feature is important to the tools group than if it comes from a "mere" customer (something that I think was more true in the past than it is now). It is certainly still the case that, if the language group already sees the need for a feature, whether because of market issues or customer input or even just because they think it is important, then that feature is more likely to get on the list for any one release ... that is only natural. For anything not in these categories, there is a need to communicate effectively about what kind of difference a feature would make so that those who make the decisions will understand the issue and internalize it.

I don't think that pointing at a list of "cool things in EJB 3.0" is going to do that communication very effectively, since the current ABL platform is nowhere close to resembling EJB anything. How can it be without multi-threading? Besides, there is a lot about EJB that is all in the package, as it were; i.e., what it does specifically is part of a whole package of interrelated features which can't really be implemented one by one in some entirely different context, which is exactly what one would have to do to add them to ABL any time soon. If you want new features, I would identify them clearly and individually and say why you want them. My original wish list is fairly terse, but that is because I know that the concepts are already under consideration or I have or will be undertaking to elaborate on them separately. I'm not likely to include something that is just a vague pointer in any such elaborations.

Posted by Admin on 09-Jan-2007 12:21

How can it be without multi-threading?

Making asynchronous AppServer calls is a nice way to do parallel tasks....

Posted by Thomas Mercer-Hursh on 09-Jan-2007 12:32

Fine for things where one can stand the overhead of the call, but this is *way* over the kind of overhead of information exchange between multithreaded components. There is a question of degree. For some things, sending a message over an ESB is just fine. For others, it is massively too much.

Posted by Admin on 09-Jan-2007 14:51

I don't think that pointing at a list of "cool things in EJB 3.0" is going to do that communication very effectively, since the current ABL platform is nowhere close to resembling EJB anything.

And therein lies part of the problem. ABL should continue to be a fast and easy way to write business applications. The items I listed in the EJB3 spec will help us do that more effectively. Why can't I do something like this to get a web service and a .NET proxy just by annotating the code?

@ResultSet ClientResults
def temp-table outputTT no-undo
    @UniqueID
    field id as int
    @ClientName
    field name as char
    index id id.

@Stateless
@WebService
@NETProxy
procedure pubGetClient:
    def var searchName as char no-undo.
    /* define some output temp-table of results */
    for each customer where custName matches ("*" + searchName + "*"):
        ...
    end.
end procedure.

On the other stuff, we could use something like EJB QL to auto-generate ProDataSets or hide complex database queries etc.

On the dynamic stuff. Ok - trees wasn't the best example. But there are others. Replace the TREES with MyStaticObjects OR mySingletonObjects and then dynamically instantiate all my singletons.

If you want new features, I would identify them clearly and individually and say why you want them. My original wish list is fairly terse, but that is because I know that the concepts are already under consideration or I have or will be undertaking to elaborate on them separately. I'm not likely to include something that is just a vague pointer in any such elaborations.

Yes, I could, but to me and the guys I showed it to at work it was obvious, so I thought it would be redundant; and then I probably couldn't come up with an example that everyone would agree on anyway.

Posted by Thomas Mercer-Hursh on 09-Jan-2007 16:24

I think that a lot of the problem I am having in turning your request into something that might eventually make itself onto a feature list for a new release stems from two factors. One is that the actual need is too vague and the other is that the brush stroke is too broad. Let me clarify these a little.

I'm sure you have noticed on other threads how Salvador keeps saying "give me the business case". All too often, he gets "it's obvious" or some version of "you owe it to us" or whatever as a response, i.e., no one sits down and actually describes the real impact in development or deployment -- what can and can't be done, what difference in efficiency would be achieved, what facility could be provided to users, etc.

With respect to the dynamic stuff, it is apparent how it can be used, but it is also apparent that it is a construct with some real negative aspects and also with some other ways to accomplish the same effective purpose which are not ultimately all that onerous. If that is all that there is, it seems to me that the evil outweighs the good and we should forget about it because it will just provide another way for people to do ugly programming. To outweigh the evils, we need a good solid example of something that we can only do that way or where the alternatives are really ugly.

With respect to this EJB stuff, it is all just too vague and it isn't at all clear how it applies. This brings us to the broad brush aspect. There are lots of times I have run into something in another language that I like or admire, but which is hard to adapt to ABL because the constructs and context are different. ABL is already way too complex due to the unfortunate history of putting everything into the language instead of putting some of it into foundation libraries. E.g., QL ... we already have a ton of verbs and modifiers for doing queries. Is it perfect? No, there are some very clear and focused extensions that should happen, like no-index scans (especially when they are already in the engine!). But yet another query syntax and structure? Why? You can wish it had been done differently in the first place ... there are a number of things I wish that about ... but they aren't going to get swept away, and it's an ugly solution to keep adding yet more ways to express the same thing. Keep that up and it will be as bad as English!

I also really encourage you to think about program generation. That allows you to do most or all of your work one level up from how it is expressed in the language. Then, while it continues to be nice when the language is elegant and compact, ultimately the question is only "can the functionality be achieved?", not worrying about how it is achieved.

If it is an important solution to a real problem, you should be able to come up with a whole bunch of real world examples. If you can't, you should wonder whether it is a problem in search of a solution. I can't say that "Hello World" exactly stirs my blood in terms of usefulness.

Posted by Admin on 09-Jan-2007 16:27

Ok - let me change tack for a second.

Q: What is the business reason behind wanting dynamic invocation, annotations etc?

A: It's far, FAR more expensive for me to change code than to add records to a database.

1. If I can create a dynamic "run" and use a DB lookup to find what to "run" or "instantiate", then I can add new features much more cheaply. I don't have to change the core program (less testing) and there is no impact on the other functions that it might run. I also need to jump through fewer "hoops" internally and with the clients for such a change. (See the sketch after this list.)

2. Doing the above also enables me to separate parts of the application into controllers and enablers, etc.

3. Creating (and not changing) code involves no upgrade to compiled code, only new code. This changes the way both we and our clients do testing and the "risk" involved from their point of view. Now, this may not seem like a logical idea, but it's some of the "corporate rules" I have to deal with; I don't make them, I just have to live with them.

4. Adding DB records is not considered as risky as changing code.

5. Using "annotations" I can generate my code (e.g. WebServices) rather than having to write and maintain it. This massively reduces the risk of bugs or errors, and it also means that the API is "always" in line with the business logic, since it is regenerated at each release.

6. Generating code also allows me to add annotations for things like @APIVersion and maintain backward compatibility.

7. Having dynamic instantiation allows me to create a holder for all "global" variables and "persistent procedure" objects in one place that is shared. I can have one "create" statement that will create any type of global variable (no matter the type), one to create and set up each persistent procedure, and a final one for temp-tables. The same object can keep track of global variables that need to be refreshed, or listen for JMS messages from something like Sonic and keep itself up to date with database changes (these messages could come from replication trigger updates or possibly from the new auditing changes - but I have not thought this through yet).

8. Dynamic runs also allow me to "replace" items at run time. E.g., if I'm a support user I can dynamically run the support version of the code.

9. Yes, I agree with you that dynamic run is like dynamic temp-tables and dynamic browses, but I use these everywhere and I'd be totally lost without them. I can also see myself using dynamic run a lot in the OO world. Maybe I could even create objects on the fly and then dynamically instantiate them - now that would be cool!!!
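To make item 1 concrete, here is a minimal sketch of the kind of DB-driven dispatch I mean, using today's RUN VALUE for procedures. The "plugin" table and its fields (actionCode, procName) are invented for illustration:

/* Minimal sketch of DB-driven dispatch using today's RUN VALUE for procedures.
   The "plugin" table and its fields (actionCode, procName) are invented. */
define variable cAction as character no-undo initial "PRINT-INVOICE".

find first plugin where plugin.actionCode = cAction no-lock no-error.
if available plugin then
    run value(plugin.procName) (input cAction).   /* runs whatever .p the table points at */
else
    message "No handler registered for" cAction view-as alert-box.

What's missing is the class equivalent: being able to feed a class name from that same table into something like a dynamic NEW.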

Anyway, that's enough for now.

Murray

PS: Yes, it does make traceability much harder, but we can get around that too.

Posted by Tim Kuehn on 09-Jan-2007 16:32

7. Having dynamic instantiation allows me to create a holder for all "global" variables and "persistent procedure" objects in one place that is shared. I can have one "create" statement that will create any type of global variable (no matter the type), one to create and set up each persistent procedure, and a final one for temp-tables. The same object can keep track of global variables that need to be refreshed, or listen for JMS messages from something like Sonic and keep itself up to date with database changes

1) If you're not talking about code generation, then you can code the meta class with a CASE statement and get whatever class instances you want. In this case, all the DB is doing is providing the source-data-to-target-object mapping.

2) The above sounds a lot like what my procedure and session variable managers already do. They're available in the PSDN Code Share area and the PEG utilities page.

Posted by Admin on 09-Jan-2007 16:36

Tim,

Thanks for that, yes I did see your procedure/session manager stuff. We built one of these back when V9 (or was it V8) first came out. I'd like to move it into the OO world, so I will be taking another look once I finally get our app onto V10.

On the CASE, yes I could but I'd much rather do a DB lookup (or did I miss something)??

Murray

Posted by Tim Kuehn on 09-Jan-2007 16:47

Tim,

Thanks for that, yes I did see your procedure/session manager stuff. We built one of these back when V9 (or was it V8) first came out. I'd like to move it into the OO world, so I will be taking another look once I finally get our app onto V10.

Take a closer look at these managers, because they support all kinds of different scoping, sharing, and nesting models - more than I've seen other procedure managers handle. They allow a developer to code SPs to an object-oriented-ish structure while still using the usual procedural techniques.

If there's some custom help you need, I also provide consulting services in this area.

On the CASE, yes I could but I'd much rather do a DB lookup (or did I miss something)??

I think we're missing each other, so I'll try again.

Your application consists of a static set of available classes. A static set of classes means code to instantiate new instances of those classes can be coded to a CASE statement.

There's also the changing set of data or business rules which needs certain classes to do their thing. This data / rules to class mapping is stored in the database.

When it comes time to manipulate the data, something does a lookup to figure out which class is needed. A method of the meta class is called with a request for an instance of "class X." The meta class feeds that class ID to the CASE statement, which has code to create an instance of the required class, and then returns the object's handle.

No need to dynamically instantiate a new class instance.
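Roughly like this - just a sketch, with all the class names invented; the point is that each WHEN branch is ordinary, compile-checked code and the class ID comes from the database lookup:

/* Rough sketch - class names are invented; each WHEN branch is ordinary,
   compile-checked code, and the class id comes from the database lookup. */
class ObjectFactory:

    method public Progress.Lang.Object CreateInstance (input pcClassId as character):
        case pcClassId:
            when "TaxCalcNZ" then return new TaxCalcNZ().
            when "TaxCalcUS" then return new TaxCalcUS().
            /* ... one WHEN per class in the set ... */
            otherwise return ?.   /* or raise an error for an unknown id */
        end case.
    end method.

end class.

The caller would typically CAST the returned reference to whatever common superclass or interface the "class X" family shares.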

Posted by Thomas Mercer-Hursh on 09-Jan-2007 16:51

Only if there are no dynamic invocations! Otherwise it is a lot more risky. If I make a typo in the database, I find out at run time. If I make the typo in code involving a class, then I find out at compile time.

And frightening! In some cases ... but not if you are pulling the values from a database.

Posted by Admin on 09-Jan-2007 17:30

When it comes time to manipulate the data, something does a lookup to figure out which class is needed. A method of the meta class is called with a request for an instance of "class X." The meta class feeds that class ID to the CASE statement, which has code to create an instance of the required class, and then returns the object's handle.

No need to dynamically instantiate a new class instance.

Ahh, not quite. My case statement would, at this stage, have more than 800 entries, so it's not really going to work. Also, it's still going to require me to change the code if I add a new object, which I don't want to do.

On another site, if I do the dynamic run, I can also give the interface to clients and allow them to create their own functions to "plug-in".

Rule1: I don't want to change code to add/expand functions.

Rule2: See Rule 1.

Posted by Admin on 09-Jan-2007 17:32

If I am going to test the target of a dynamic run, I can have a unit test to test the target independent of the context, regardless of how it is invoked, but, if I am going to test it in context, then I have to run through the code which contains the run, dynamic or case. Seems like the same test to me.

Not quite. It's all about "impact". If I have a case with 800 entries and I change it, then I need to test 800 cases. If I have a dynamic run and I add one DB record, I need to test 1.

Posted by Thomas Mercer-Hursh on 09-Jan-2007 17:49

I have some questions about an application design that has a single point of control with 800 different options.

If you have this situation, what are you doing now?

Posted by Tim Kuehn on 09-Jan-2007 18:23

I have some questions about an application design that has a single point of control with 800 different options.

Ditto - that's just nuts. Surely that could be condensed down to a smaller subset of classes / cases.

It sounds like something brought forward from a legacy application which didn't have the ability to do dynamic stuff, and never got retro-fitted.

Posted by Tim Kuehn on 09-Jan-2007 18:25

Ahh, not quite. My case statement would, at this stage, have more than 800 entries, so it's not really going to work. Also, it's still going to require me to change the code if I add a new object, which I don't want to do.

If you've got 800 different classes already, then you've already written the code for those classes.

If you want to dynamically instantiate a class, that's just another couple of lines in a case statement - or a hash to something if need be. 800 WHENs is too much, though. Surely these classes could be aggregated together? 10 sub-meta-classes that handle 80 class types each would make each individual CASE structure more manageable.
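Something like this, as a sketch (all names invented): the top-level factory only decides which sub-factory owns the request, and each sub-factory keeps its own, much smaller CASE.

/* Sketch of the aggregation idea - all names invented.  The master factory
   only decides which sub-factory owns the request; each sub-factory keeps
   its own, much smaller CASE statement. */
class MasterFactory:

    define private variable oTaxFactory     as class TaxFactory     no-undo.
    define private variable oBillingFactory as class BillingFactory no-undo.

    constructor public MasterFactory ():
        oTaxFactory     = new TaxFactory().
        oBillingFactory = new BillingFactory().
    end constructor.

    method public Progress.Lang.Object CreateInstance (input pcClassId as character):
        /* ids are assumed to look like "Tax.CalcNZ", "Billing.Invoice", ... */
        case entry(1, pcClassId, "."):
            when "Tax"     then return oTaxFactory:CreateInstance(pcClassId).
            when "Billing" then return oBillingFactory:CreateInstance(pcClassId).
            otherwise return ?.
        end case.
    end method.

end class.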

On another site, if I do the dynamic run, I can also give the interface to clients and allow them to create their own functions to "plug-in".

In other words, you're writing code on the fly. These aren't instances of statically-written class definitions.

Rule1: I don't want to change code to add/expand functions.

Rule2: See Rule 1.

I think you're already violating both these rules if your other applications are writing code on the fly like this.

But 800 classes? Something's not right here.

Posted by Thomas Mercer-Hursh on 09-Jan-2007 18:35

I'm going to guess there aren't 800 classes or anything like it. I'm going to guess that there are 800 somethings ... and quite possibly it isn't all ABL.

Posted by Tim Kuehn on 09-Jan-2007 18:53

I'm going to guess there aren't 800 classes or anything like it. I'm going to guess that there are 800 somethings ... and quite possibly it isn't all ABL.

Variations on a theme using compile-on-run include definitions? Like the old

run something.p value1 value2 value3.

Posted by Admin on 09-Jan-2007 19:04

I'm going to guess there aren't 800 classes or

anything like it. I'm going to guess that there are

800 somethings ... and quite possibly it isn't all

ABL.

I'm sort of getting into areas I'm not allowed to discuss. But seriously, there are lots of cases where I would need a case statement, and I don't want to go that way. It's way too much work for me, as it's already set up to be dynamic ...

Posted by Admin on 10-Jan-2007 01:49

1) If you're not talking about code generation, then you can code the meta class with a CASE statement and get whatever class instances you want.

I guess some of you miss the point here: this particular part of the discussion is about separation of concerns. When you design a class, you will always ask yourself: "is it the class's responsibility to .....". So when you have a configurable part in your application, either hardcoded, config-file driven or database driven, you can ask yourself who is responsible for applying the configuration.

Some would say that a DiskMonitor class, which monitors available disk space, is responsible for sending out email alerts. It should therefore determine how alerts are published.

Others would say that the DiskMonitor's responsibility is to monitor the disk and that it is an AlertSender's responsibility to deliver an alert. Something else should set up the monitor and the AlertSender(s). You can imagine the second approach when you have several types of alert output (*).

When you apply this same example to an event-based system, you can also ask yourself who will subscribe itself to handle the "low disk space" alerts the DiskMonitor is publishing. Something should wire the DiskMonitor and the EmailAlerter together in this case.

*) You can go even further and decouple the two by adding an alert queue (producer-consumer pattern). The monitor produces its alerts into the queue and the AlertSenders consume from the queue.
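As a sketch of the second approach (type names invented, and each type would of course live in its own .cls file): the monitor only knows the IAlertSender interface, and something else decides which implementation gets wired in.

/* Sketch only - type names are invented and each type would live in its own
   .cls file.  The monitor knows the IAlertSender interface, nothing more. */
interface IAlertSender:
    method public void SendAlert (input pcMessage as character).
end interface.

class EmailAlerter implements IAlertSender:
    method public void SendAlert (input pcMessage as character):
        /* deliver pcMessage by email */
    end method.
end class.

class DiskMonitor:
    define private variable oSender as class IAlertSender no-undo.

    constructor public DiskMonitor (input poSender as class IAlertSender):
        oSender = poSender.
    end constructor.

    method public void CheckDisk ():
        /* when free space drops below the threshold ... */
        oSender:SendAlert("Low disk space").
    end method.
end class.

/* the wiring is done by something other than the monitor or the sender:
   oMonitor = new DiskMonitor(new EmailAlerter()).                       */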

Posted by Admin on 10-Jan-2007 02:09

1. If I can create a dynamic "run" and use a DB lookup to find what to "run" or "instantiate", then I can add new features much more cheaply. I don't have to change the core program (less testing) and there is no impact on the other functions that it might run.

When it's done right, yes. But don't forget that the database contents now become part of your application, so deleting a couple of rows by mistake ruins your application. That's what makes annotations in code attractive (as you explained as well): you're putting the metadata in the code itself.

Posted by Admin on 10-Jan-2007 02:15

I also really encourage you to think about program generation. That allows you to do most or all of your work one level up from how it is expressed in the language.

Code generation is nice, but it has its place. A code generator can sometimes become more complex than the actual code it's generating. And one change to the generator and you have to test the entire application.

And I really don't believe in MDA, which maps the model to a platform-specific, runnable application. I don't know of any major application that has been delivered by MDA. Perhaps you can show us one and be more concrete about it. Sure, it can be done for specific areas of the application. At Microsoft they call this Software Factories, which use Domain-Specific Languages (modeling combined with target converters).

Posted by Thomas Mercer-Hursh on 10-Jan-2007 11:42

But, since we have no dynamic class invocation, it obviously isn't set up with classes today. So, to get it to classes there is going to have to be some redesign and rework ... at least, I would hope you have thought about that. Not to mention that getting a generator to create that case statement for you has to be among the easier jobs I can imagine.

Posted by Thomas Mercer-Hursh on 10-Jan-2007 11:46

I think we can all agree that this is a part of good OOAD ... I just don't see that it has anything to do with the mechanism by which alternate classes are instantiated.

Posted by Thomas Mercer-Hursh on 10-Jan-2007 11:52

Yes, and its place should be central! But, the corollary is that one change to the generator and one can have that change distributed everywhere in the application. To be sure, lots of generation creates a strong motivation for automated test suites, but if I made a change which, for example, added new or changed functionality in every file maintenance function in the application, I would much rather have the problem of testing the generated code than testing every place where a programmer had gone in and made the change by hand. FWIW, I have about a million lines of ABL that came from code generation.

Posted by Admin on 10-Jan-2007 12:14

FWIW, I have about a million lines of ABL that came from code generation.

I can create two million lines with a generator. How complex did the generator get, and how many man-years did you spend on it? It must be very easy to change a switch in the generator and generate a 3-tiered architecture with classes instead of procedures/includes....

Posted by Thomas Mercer-Hursh on 10-Jan-2007 12:39

Was it necessary for me to indicate that this is 2/3 of a production system?

One that was sufficiently functionally rich that it beat SAP, Peoplesoft, and Oracle Financials head to head.

But no, being a technology I created 16 years ago, it isn't quite nimble enough to be giving me an OERA-compliant, ESB-enabled application today. For that I will need to move to new technology.

Posted by Thomas Mercer-Hursh on 12-Jan-2007 18:36

The more I think about this, the more I wonder about the implementation. To be very useful, it seems to me, one needs a set of closely related objects with the same signature, i.e., which have the same superclass or implement the same interface(s). Otherwise, one doesn't merely need to branch in the code for the creation of the object, but one needs to branch in the usage as well. Only when the usage is uniform is there an advantage in invoking multiple different objects as if they were variations on the same thing.

And, of course, saying that makes me really question the idea that there are 800 of the same thing.

But, however many there are, isn't a very sensible way to treat objects of this type to create a factory object which creates all the objects in the set? That, of course, requires the CASE statement or some other kind of logic on the supplied parameters in order to determine what type to create, but it is a single point of maintenance. If you add a new class to the set, you don't really need to retest all 800 variations unless you are being very, very careful ... in which case you probably have an automated testing environment anyway. All you really care about is that it produces a properly formed object of the new class.

If you need to test every place that uses that new class, I suppose you will feel that you need to test it however it is created. But, if this is the 801st class, I don't see why you would need to test all of that code on the first 800 because those have already been tested. And, for that matter, if all you are using is methods of the superclass or interface, that is validated by the compile. If the 801st class is yet another subclass or instance of the interface, that is also tested in the compile.

And, please note, it is checked at compile time because the instantiation is explicit. That checking could not take place if the instantiation were truly dynamic.
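To illustrate that compile-time point with a sketch (names invented, each type in its own .cls file, continuing the factory idea from earlier in the thread): if the factory returns the common interface type, the caller is written once against that type and every call is checked by the compiler; only the factory's CASE grows when the 801st class arrives.

/* Sketch only - names invented.  The factory returns the common interface,
   so the caller is compile-checked against it. */
interface ITaxCalc:
    method public decimal CalcTax (input pdAmount as decimal).
end interface.

class TaxCalcFactory:
    method public ITaxCalc GetCalculator (input pcRegion as character):
        case pcRegion:
            when "NZ" then return new TaxCalcNZ().   /* each class implements ITaxCalc */
            when "US" then return new TaxCalcUS().
            otherwise return ?.
        end case.
    end method.
end class.

/* caller - written once against ITaxCalc; the compiler validates every call */
define variable oFactory as class TaxCalcFactory no-undo.
define variable oCalc    as class ITaxCalc       no-undo.
define variable dTax     as decimal              no-undo.

oFactory = new TaxCalcFactory().
oCalc    = oFactory:GetCalculator("NZ").
if valid-object(oCalc) then
    dTax = oCalc:CalcTax(100.00).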

Posted by Thomas Mercer-Hursh on 30-Mar-2009 12:06

Sure, I can write in a mixed mode ... just like I can write procedural code in Java.  But, there are those of us who see it as a shortcoming when we have to do this.

Bottom line here, I think, is:

1. There is a substantial community of people in the world writing in a variety of languages who are convinced that the OO paradigm is a superior approach because of the clarity, simplicity, and cleanliness that come from encapsulation, not to mention the ease with which such code interfaces to modeling tools.  Some of that community is within the ABL community.  It would be good marketing as well as a benefit to the developer community to fully enable the OO paradigm in ABL.  Great strides have been made, but there are still an undesirable number of places where one is forced to use procedures.

2. There is likely to be a lot of mixed mode code written, if for no other reason than that people have large bodies of existing procedural ABL which they can't convert overnight to OO (although I am working on ways that they might convert it with far less effort).  But, if the goal is to eventually move to OO, it would be a bad idea to compromise the OO pieces that one adds to a procedural code base unless *absolutely* necessary, because that will only mean more cleanup later.

Just because one *can* write mixed mode and it even seems to make things "easy", in the sense that one can continue to use familiar techniques, doesn't mean that mixed mode approaches have greater virtue than biting the bullet and figuring out how to do things in an OO way.  I contend that the OO way is ultimately better, so one might as well come to grips with it and do it right from the beginning (except for those places where the OO implementation in ABL is still incomplete).
