I have a "contact" model object (attributes only, no methods). This object holds the details of a contact (name, phone number, etc.).
I also have a "client" model object, and in this object I have a variable array of contact objects, so I can iterate through all the contacts of the client.
However, given a contact id, I want to find the client / clients that this contact is assigned to. So, it would make sense to create a variable array of client objects within the contact model.
But these client objects have a contact array ...
Given this scenario, what is
a) best practice
b) theoretically sound
c) practical?
Should I create two types of class for clients and contacts (with / without the variable arrays)?
Julian
Could be that you are finding this more puzzling because of your chosen implementation, i.e., internal arrays rather than collections. With an internal array, there is a tendency to think of the contents as part of the object. But, in fact, what you have is a relation between objects: either a relation between a contact and the clients it corresponds to, or between a client and the contacts it has. Collections are merely the plumbing that one uses to express a one-to-many relationship. That relationship may or may not be relevant in a particular context and, when it is not relevant, it is not populated.
So, depending on context, you might have one or more contacts (more than one => another collection). Clients may or may not be of systematic interest. If they are only sometimes of interest, then instantiate the clients and build the collections as needed. In a different context, you might have one or more clients (as above). The contacts may or may not be of systematic interest. When they are, then instantiate the relevant contacts and populate the collection you need.
If you run into a situation where you need both contacts and clients instantiated and need to walk the relationships in both directions, then instantiate the needed objects and collections on both sides. No problem. E.g., it is perfectly reasonable to walk the relationship from a contact to related clients and then to walk the reverse connections to come up with all of the contacts which affect this group of clients. Since collections are just sets of pointers to objects, no object will get instantiated more than once. E.g., if a contact points to two clients and the same two clients also have two other contacts, you will end up with three contacts, two clients, one collection of clients, and two collections of contacts. That actually describes the relationship perfectly.
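A minimal ABL sketch of what that might look like, assuming a generic Collection class; "Collection" and the other names here are illustrative assumptions, not an existing framework:

```abl
/* Hypothetical sketch: the client-to-contact relation held as a
   collection property instead of an internal array.  "Collection"
   is an assumed generic class name. */
CLASS Client:
    DEFINE PUBLIC PROPERTY Name AS CHARACTER NO-UNDO
        GET. SET.

    /* plumbing for the one-to-many relation; may be left unknown (?)
       until the relation is relevant in the current context */
    DEFINE PUBLIC PROPERTY Contacts AS Collection NO-UNDO
        GET. SET.
END CLASS.
```

The point is that the collection is just a reference held by the client; it expresses the relation and is only populated when the context calls for it.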
>> Could be that you are finding this more puzzling because of your chosen implementation,
Possibly, but before I read the entire post, I implemented it this way
so that I could code like
message Customer1:Contact[1]:Name view-as alert-box.
Granted, trivial code example, but could I do that with collections ?
It's also very handy because you can use extent(Customer1:Contact) to
know how many contacts that client has
If the relationship is not relevant, could I not simply have two
methods in the client controller
Get
GetWithContacts (which uses the Get method in the contact controller
(ie no clients))
and in the contact controller
Get
GetWithClients (which uses the Get method in the client controller (ie
no contacts) )
Julian
On 2 January 2011 22:33, Thomas Mercer-Hursh
Given variable arrays, you sort of need to know the count of the set before you can create the array ... or incur a lot of overhead expanding it. You could make the array big enough to handle most cases based on your knowledge of the domain and avoid the expansion problem, but then you couldn't use the extent to find the count. Given that you have to fetch the members of the set to build the collection, however that collection is implemented, then knowing the size is trivial anyway.
My main point is to get you thinking in terms of relations ... however you decide to implement the collections. Collections is the Right OO way to do it, but do what you will.
Rather than thinking of special methods, I would rather see you do a form of lazy instantiation. I.e., instantiate just the base object first and then, if you do anything that needs to access the collection, notice that it is not yet instantiated and go ahead and instantiate it at that point. You could do this pretty easily with Count by using ? for uninitialized and 0=>N once initialized.
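As a sketch, assuming a generic Collection class and a PopulateContacts helper (both names are assumptions for illustration), the lazy getter could look like:

```abl
/* Hypothetical lazy instantiation: the collection is built on first
   access.  Collection and PopulateContacts are assumed names. */
DEFINE PUBLIC PROPERTY Contacts AS Collection NO-UNDO
    GET:
        IF NOT VALID-OBJECT (Contacts) THEN DO:
            Contacts = NEW Collection ().
            PopulateContacts (Contacts).  /* fetch members and add them */
        END.
        RETURN Contacts.
    END GET.
    PRIVATE SET.
```

Any code that reads the property gets a populated collection; nothing is fetched until someone actually asks.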
So what you are trying to batter into my thick little head is that
within the Client object, I have a Contact property (which is a
collections object, not a contact object)
So I can use it like
Client1:Contact:Item(1):Name (for collections)
instead of
Client1:Contact[1]:Name (for variable arrays)
and then use lazy instantiation on :Contact: (within the getter of
:Contact:, check if it's a valid collection object, if not create one
and populate)
That would work for me.
Would you recommend a standard collections object for all cases like
this (casting issues), or a specific collection object for contact,
client etc (coding issues - mitigated by code generators)?
Thanks for the help.
Julian
On 2 January 2011 23:05, Thomas Mercer-Hursh
Would you recommend a standard collections object for all cases like
this (casting issues), or a specific collection object for contact,
client etc (coding issues - mitigated by code generators)?
I personally would prefer the strongly typed solution. If you are not yet using a code generator, consider writing the collection or list class using an include file with the target type/class as an include file parameter - as mentioned in the super hijacked thread on Generics. A very practical solution.
Includes == eeek
Code generators would work a lot better for me.
I must say that I think strong typed makes more sense for me in this situation.
Thanks
Julian
In normal OO usage, collections are almost always generic, exactly because they are a piece of the infrastructure, something the code generator sticks in to realize a one to many relationship in the UML rather than something that is shown in the UML diagram per se. There are, like anything, exceptions where one can benefit from a non-generic solution, but one thinks of generic first. This goes along with reuse, of course.
No, you are not going to directly address a property in an object in a collection. At least, not with the usual usage of collections. Instead, normal practice is to iterate through the members of the collection, retrieving each member to a current object, and then doing whatever operations you need. For UI cases, it can be sensible to use temp-tables to hold this sort of stuff rather than collections, so I am thinking here more of something like "renew all contracts for this person for an additional year" types of operations.
In fact ... while I try to avoid thinking about UI as much as possible ... I'm not sure that for UI purposes I wouldn't just go with a PDS containing tables and join tables and forgo the actual objects themselves. At that point one is trying to provide backing data for a grid or whatever, not do business logic. Mike will probably faint when he reads this!
Usually one navigates a collection with something like GetNext() that returns the next member of the collection and throws an error when there are no more. Yes, not having a valid handle for a collection object is a perfectly reasonable indicator that one needs to go instantiate and populate it. Collections do normally have a Size property.
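For illustration, typical navigation might look like this, assuming GetNext returns the unknown value (?) when the collection is exhausted; an error-throwing variant would use a CATCH block instead. The names are assumptions:

```abl
/* Hypothetical iteration over a generic collection of contacts. */
DEFINE VARIABLE oContact AS Contact NO-UNDO.

oContact = CAST (oClient:Contacts:GetNext (), Contact).
DO WHILE VALID-OBJECT (oContact):
    MESSAGE oContact:Name VIEW-AS ALERT-BOX.
    oContact = CAST (oClient:Contacts:GetNext (), Contact).
END.
```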
This http://www.oehive.org/CollectionClassesForOORelationships covers my most recent thinking about collections. I haven't done the code yet, though.
Strongly typed means a different collection class for every purpose ... each of which does almost exactly the same thing. As Mike says, the empirically practical way to do this is with includes, but I share your disaffection for includes.
Besides, I question the gain. The purpose of the collection is just to manage the relationship. At any one time, you are only going to operate on any one member of that set, e.g., CurrentContract. So, GetNext into an object of the right type and presto, all the methods and properties of that type are immediately available. No different, really, than iterating through a temp-table and operating on the current buffer. This is something that you can do entirely generically and without then having a million type-specific collection classes that all have the same code. Presto, hundreds more objects and failure to re-use, not to mention having to use includes to get there efficiently.
Mit freundlichen Grüßen / Kind regards
Includes == eeek
Old school - but better than copy and paste.
Code generators would work a lot better for me.
"would" sounds a lot like you don't have one yet? Time to get started, then. Or bite the pill with the includes.
Besides, I question the gain. The purpose of the collection is just to manage the relationship. At any one time, you are only going to operate on any one member of that set, e.g., CurrentContract. So, GetNext into an object of the right type and presto, all the methods and properties of that type are immediately available.
Sorry, are you saying a non-typed, non-generic collection would be sufficient? That would require permanent casting and runtime type-checking that both Julian and I want to avoid.
I'd rather create 1,000 (surely no app would require millions) generated or include-file-based typed collections.
Mike will probably faint when he reads this!
Whether one uses includes or one uses a code generator to produce what the include would result in, one still ends up with N objects which have entirely the same logic, but different data names. Not only is this horrible in terms of code reuse, but it implies that one is putting logic in the wrong place. The only reasons I can think of to want type-specific names inside a collection are either to be able to treat it like an array, i.e., the sort of addressing which Julian was indicating in his code fragment, or to provide type-specific behavior as a property of the collection.
I see no reason to try to treat the data like an array except for cases when the data is actually an array and then it is contained in one object. What is the use case in which one wants to go directly to the Nth row/object rather than iterating through the set? And, even if one does, how is this not satisfied by a generic key/value collection? One doesn't even have row/column addressing for temp-tables. One always has to locate the desired row and then access its properties by name. So should it be for objects in collections. Locate the desired object (typically the next) and then access the properties and behavior of that object.
Likewise, type specific behaviors of a collection are also very dubious. For starters, one then not only has N different objects, but each one of those potentially has type specific behavior. Uggh. And, to what advantage? What kind of type specific behavior are you going to put in the collection which doesn't instead belong in the parent object?
I'm saying that a non-typed *generic* collection would be sufficient. The collection stores objects as Progress.Lang.Object. The object using the collection treats them as whatever type they actually are. If you store an object of type X in a PLO, it doesn't stop being an object of type X. There is no big conversion that needs to happen.
The collection stores objects as Progress.Lang.Object.
We agree on that.
The object using the collection treats them as whatever type they actually are. If you store an object of type X in a PLO, it doesn't stop being an object of type X. There is no big conversion that needs to happen.
That's the point that I have a problem with. I see a fundamental difference between a generic list and a typed one: a List of Customer cannot store a Supplier or System.Windows.Forms.Button or Progress.Lang.AppError object. So it makes the interface/contract more robust.
Regarding the implementation and actual code duplication: when using includes or a code generator, that is the only way I am aware of in the ABL (besides stupid hand coding) that would allow the creation of a type-safe List or Collection etc. This does not mean that I actually duplicate the code that manages the set internally. That could be done in a base class working on Progress.Lang.Object. The typed List would just handle the CAST and provide the type-safe interface.
I see a lot of benefit and simplification of use in that.
So my ListOfCustomer would inherit from an abstract GenericList class.
The method GetItem of ListOfCustomer looks like this:
METHOD PUBLIC Customer GetItem (i AS INTEGER):
    RETURN CAST (SUPER:GetItemInternal (i), Customer).
END METHOD.
GetItemInternal of the GenericList class returns Progress.Lang.Object.
Looks much better to me than having the CAST all over the place where I'm using the Collection of List. Plus, I can trust that I get a Customer (or better) when the method returns a Customer. You should not trust anybody (except yourself) returning a Progress.Lang.Object that it returns what you expect.
Whether one uses includes or one uses a code generator to produce what the include would result in, one still ends up with N objects which have entirely the same logic, but different data names.
... not to forget the different interface. That is what makes the point.
Not only is this horrible in terms of code reuse, but it implies that one is putting logic in the wrong place. The only reasons I can think of to want type-specific names inside a collection are either to be able to treat it like an array, i.e., the sort of addressing which Julian was indicating in his code fragment, or to provide type-specific behavior as a property of the collection.
So you are questioning the value of generics in general? Or are we just debating the optimal way of "simulating" them in the ABL?
but each one of those potentially has type specific behavior.
Not the case with generics. They all have the same behavior - at least in .NET, which is my reference for that concept. So I don't see what you're trying to say here.
Is there any OO language which uses type safe collections?
While I generally approve of type-safeness, it seems to me expensive here. I also question the notion of characterizing the generic collection as needing casts all over the place. An Order object might require an OrderLines collection. The Order object knows perfectly well that the only thing it is putting in the collection are OrderLines. There is no need to worry about the collection magically containing customers instead; if it does, one is doing something very wrong.
And, it isn't as if one is going to have casts of OrderLine all over Order, since any sensibly written object is going to have one method which gets the object and types it correctly.
There may be more casts than collections since a different type of object might also use OrderLines ... but not a huge number. And the saving is one or a very few generic collection objects versus hundreds or thousands of type specific ones ... not to mention a huge temptation for the programmer to put type specific logic in the collection instead of in the parent where it belongs.
Oh no, I have been looking for generics for a long time, exactly for collections. My old collection classes actually resorted to includes and preprocessors ... wince ... to get the different types of key in the key/value collections. Generics would eliminate that. But this is just generics to get primitive data types, not different objects.
And, yes, I know that one doesn't have to put type specific logic in the collection ... just that it is going to be awfully tempting for somebody to do that if they have a type specific collection. Generic collections can be treated like black boxes with no source available.
Is there any OO language which uses type safe collections?
The .NET framework is full of them. Many of them are implemented using generic collections. Others not. Some of the overloaded with specific methods, typically for adding, removing and accessing members. You may not like it but it makes using the classes very easy and straight forward. And safe.
An Order object might require an OrderLines collection. The Order object knows perfectly well that the only thing it is putting in the collection are OrderLines. There is no need to worry about the collection magically containing customers instead; if it does, one is doing something very wrong.
That's exactly what I meant earlier. The Order object might know that IT put just OrderLine objects into ITS OrderLines collection. But nobody else should rely on that. That would be absolutely wrong. No other class should blindly rely on the Order class to do its job correctly, if the interface to the OrderLines collection does allow putting System.Windows.Forms.Button or Progress.Lang.AppError objects into the collection.
In dynamic coding with database objects or ProDatasets you would perform sanity checks on the tables you are getting a handle to from somebody else's code, wouldn't you? A type-safe collection is like static database programming. It eliminates the need for a lot of sanity checks. In good old static code like
FOR EACH Customer.
You don't need to test if there's a CustNum field. The compiler does. And that's good.
On a type-safe collection
oOrder:OrderLines[0]:LineNum
does not need any further runtime validation of types (beyond checking the index against Count).
In a non-typed collection the code - from an external object accessing the OrderLine collection - would look like this:
DEFINE VARIABLE oObject AS Progress.Lang.Object NO-UNDO .
oObject = oOrder:OrderLines[0] .
IF TYPE-OF (oObject, OrderLine) THEN
MESSAGE CAST (oObject, OrderLine):LineNum .
Looks like a lot of unnecessary, ugly code and potential traps to me.
I may be old school, but this dog can learn new tricks.
I fail to see the problem with "reuse" - surely if you have a generic
collection, which has to force all classes that use it to perform
CASTing (and therefore have to check that the returned object is of
the type requested) it could be considered as much "trouble" as a
single ":Generate" method call to create all the type-safe / strong
type collection classes automatically ?
To me, a large set of machine-generated code outweighs the amount of
extra code needed to verify and validate a generic object from a
generic collection
thump. Mike falls to the floor
See my previous response. For me, putting all verification and
validation code in all parents that use a generic collection in order
to have the same safety as a type-specific collection is urrgghh.
For example, if I have 20 classes using the customer collection, I
would have to repeat the following lines of code 20 times ... so much
for code reuse
def var foo as Progress.Lang.Object no-undo.
def var foo1 as Customer no-undo.
foo = MyGenericCollection:GetNext() no-error.
if type-of(foo, Customer) then
do:
    foo1 = cast(foo, Customer).
    message foo1:Name.
end.
instead of
message MyCustomerCollection:Item(1):Name.
On 3 January 2011 00:25, Thomas Mercer-Hursh
One thing that's been bugging me about the lazy part of it - who does the checking for a valid collection? The object using the model? If so, does that not mean that your code for checking (and also for creating the collection) then appears in several places (each object that requests the collection), or do you put the code into the model itself, or indeed the collection?
The latter two options present their own problems in that how do they know *how* to populate the collection? You may have a collection of Contacts beginning with A, or a collection of all Contacts of client B.
The .NET framework is full of them
I will do some exploration and consultation and get back to you. But my criticisms still apply. To be sure, just because something is done in .NET doesn't by itself make it good OO. Moreover, doing it in a framework is not the same thing as doing it in one's own code.
That's exactly what I meant earlier. The Order object might know that IT put just OrderLine objects into ITS OrderLines collection. But nobody else should rely on that.
Who else is accessing the collection? The collection is a property of the Order. Someone might be handed an OrderLine, but when are you going to hand over the whole collection? Yes, there are times I can imagine a collection being accessed by more than one object, but why in the world would you hand a reference to an OrderLine object over to an object which was going to put a form object in there? Yes, I suppose type checking would warn you of this, but yipes, your problems are a lot bigger than mixed types if you allow that to happen.
oOrder:OrderLines[0]:LineNum
Where does this idea of treating the collection like an array come from? What are you doing in the collection to allow this unless you are implementing all collections as single arrays, with the obvious problems that derive from ABL arrays not being dynamic except at instantiation? Not to mention the issues that arise if one adds and deletes to the collection, so that either there are holes or the identity of a particular object changes over time and one has to pay the overhead for packing.
DEFINE VARIABLE oObject AS Progress.Lang.Object NO-UNDO .
oObject = oOrder:OrderLines[0] .
IF TYPE-OF (oObject, OrderLine) THEN
MESSAGE CAST (oObject, OrderLine):LineNum .
Actually, I would have
define variable CurrentObject as Something no-undo.
CurrentObject = cast(OrderLineCollection:GetNext(), Something).
Which takes no more lines. Of course, the actual fetch needs to get wrapped with error handling code in both cases. In yours, it needs to handle the index not being valid, and it may need to handle a null return unless you pack the array ... and if you pack the array, what is "next"? In mine, it needs to handle no next being available and possibly the cast error. So?
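Sketched out with structured error handling, the wrapped fetch might look roughly like this; the variable and collection names are assumptions:

```abl
/* Hypothetical: fetch the next member, treating "no next available"
   (or a failed cast) as the end of the iteration rather than a failure. */
DEFINE VARIABLE CurrentLine AS OrderLine NO-UNDO.

DO ON ERROR UNDO, THROW:
    CurrentLine = CAST (OrderLineCollection:GetNext (), OrderLine).

    CATCH e AS Progress.Lang.Error:
        CurrentLine = ?.   /* signal end of iteration to the caller */
    END CATCH.
END.
```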
I don't think there is anything new school about reverting to includes.
Let's compare, based on 1000 types in the application and only one type of collection:
Generic approach:
* one source code module for collections.
* one object code module for collections.
* need to cast to desired object type (can be same line as get) for each use
* need to handle cast errors if one is going to be paranoid
Mike's approach:
* one set of the operational source code for all collections.
* 1000 source code shells to provide the include
* 1000 object code modules for the collections.
Seems to me that the only need for any extra code in the generic approach is if one is going to be paranoid and Mike's approach results in lots more source and object code modules. Moreover, all those nearly duplicate collections result in lots of code duplication and require using an include.
Apples and oranges. You are using that array notation again.
Where is the advantage to Mike's approach?
We seem to have different approaches. Mine is to provide developers with a framework that prevents them from making mistakes. Strong typed collections are one tool for that. My aim is not to convince everybody of a minimalistic approach for the sake of minimalism.
If you are afraid of using Include files, forget about them. I guess you have the tendency to generate code from EA. So nothing to worry there for you. I don't have a problem with being called old-school because I use include files and EA code generation in combination.
So get me out of another thread that will lead nowhere.
Implementing collections and allowing direct indexed access like this has lots of issues.
First, since arrays are not dynamic after instantiation, one has to either have logic for copying back and forth between two arrays to allow expansion or use a multiple array scheme such as I have described elsewhere. The multiple array approach I think is superior because I think there is going to be substantial overhead in copying when the collection gets large. But the multiple array approach is not going to allow the direct addressing.
Second, allowing direct access to the array like this is exposing implementation details.
Third, there are lots of questions about what happens if one adds and deletes to the array. One either leaves holes, in which case O:A[20] can return a null ... and one has to test for that ... or one has to pack the array after every deletion. Try an array of 10,000 and delete the first item and see how long the copy-down takes. You might think this is an odd case, but what about the very common case where one fills a collection and then sequentially processes the contents, deleting each as the processing is complete? Yes, you can avoid the performance penalty by processing from the end forward ... but then you are relying on the implementation again. Moreover, it leads to the potentially confusing possibility that O:A[20] at one time is not the same as O:A[20] at another time, since it might have been packed in the meantime. Moreover, if one tries to put back an object previously taken ... perhaps morphed to a different subtype, for example ... then how does one know that O:A[20] is the right place to put it back?
Fourth, if you do pack, what is next after O:A[20]? Maybe 21, maybe not. Maybe even something less than 20.
None of these issues exist with generic collections.
There is a certain logical difficulty for an object checking to see if it exists ...
I'm not sure what your difficulty is here. If object A has a possible relationship to a set of object B, but one has decided that this relationship is not necessarily instantiated at the outset, then it seems perfectly logical to test whether the collection exists whenever one needs to access something from it. Yeah, it is a tiny bit of overhead, one line, but one has to expect to pay some price for making the instantiation optional.
If you use your direct addressing of the array approach, that might be a lot of places in the code, but if you get the next object in the collection and then do all your processing on the current object, then you only need to put that code at the place where you get the next one. One place.
jmls wrote:
One thing that's been bugging me about the lazy part of it - who does the checking for a valid collection? The object using the model? If so, does that not mean that your code for checking (and also for creating the collection) then appears in several places (each object that requests the collection), or do you put the code into the model itself, or indeed the collection?
The latter two options present their own problems in that how do they know *how* to populate the collection? You may have a collection of Contacts beginning with A, or a collection of all Contacts of client B.
I tend to always make sure that the collection property is created (can be done by the ctor or passed in), and then check :Size (or :Count or what have you) to see whether there's anything in it. Basically, initialisation for the whole object happens at once. This way, once NEW() has run, the object is in a usable state.
This ties in with the fact that I am beginning to tend towards the school of thought that says that a null/invalid object is always a bad thing - and if you want to indicate the fact that an object is null use a NullObject. I think that this makes code a little cleaner (since now "all" we have to deal with are exceptional conditions). But I only say that to justify the above to a degree, and not to split this thread (again?)
-- peter
One problem with initializing the collection, but leaving it empty is that one then needs a separate flag to indicate whether the collection has been filled since Size = 0 is a perfectly valid state.
Why do you care when it has been filled (or even if)? If a collection has zero items it should be processed the same way, regardless of whether it has 0 items because it is new or whether it has 0 items because it has been dealt with completely. If you really, really care whether an object has been init'ed or not, add an (explicit) flag to that effect; but deducing that an object has been init'ed from the state of another seems hinky/shonky to me.
-- peter
Given an order with 100 lines and a desire to lazy instantiate because the lines are not needed for all processing.
Given a second order which has no lines, e.g., it has just been created and the lines have not been processed yet.
Omit creating the collection and one can determine by a test of Collection = ? whether it has been instantiated or not. The result is that on first access one notices that it is not initialized and goes to initialize it producing collections with 100 and 0 members respectively.
Create the collection and not fill it and the two collections appear identical, i.e., Size = 0 so one needs a separate flag to determine that one must go fill the collection in the first case and probably try to fill it in the second case.
I agree that one should know that an object is valid before creating it and this implies creating only logically complete objects, but this does not mean that one has to initialize all possible relations. In particular, an object may have multiple relations, only one or some of which are even meaningful in a particular context.
I asked my OO mentor about type-safe collections... he is admittedly a purist who comes down hard on the way that a lot of OO code is done in practice. But given that this is a person whose background is creating highly performance-critical applications by translation, i.e., by getting the UML right and then generating the code from it, I tend to think that he has demonstrated and experienced that one can do things right and that it works out best in the long run. His full response is a little long, so e-mail me if you want the details; I will only quote limited parts here.
...
By definition, if one navigates R1 from ObjectA one should only get ObjectB objects and that is what the OOPL type systems support. When one implements OO relationships correctly -- which I contend PSC has not done yet -- and navigates them properly one always gets type safety for the OOPL R1 collection because there must be a type declaration of the correct type in the method that navigates to the collection.
...
Note that the cast is fine because one is implementing standardized collections and looking at the OOA/D model; the cast must be to ObjectB* because that is what R1 says.
The OO paradigm also demands that myObjectBCollection must be instantiated only for the ObjectA instance in hand. When anyone is adding objects to it, they necessarily must have the ObjectA in hand and they use the myObjectBCollection reference to access it.
...
So when someone adds an object to the collection, they would have to be remarkably dense to look at the Class Model and add the wrong object (whose type they must also have in hand). Note that the casts are symmetric around instantiation and navigation of R1. Even if they were that careless, they would have to explicitly define the wrong types statically in one or both of the methods, so the problem would become clear very quickly when all the objects in the collection were always of the wrong type.
Thus, "type safe" collections are only useful if you have people coding in OOPLs who don't understand the OO paradigm, don't think in terms of relationship navigation for collaboration, don't isolate instantiation, and don't employ generic implementation and navigation techniques for relationship infrastructure. IOW, teach the developers how to use OOA/D properly and you won't need "type safe" collections.
...
Not surprising. .NET is, at best, object-based and that is charitable. MS has been implementing "OO" development environments solely based on their ease of development for decades. Looking at their stuff it is hard to imagine anyone at MS has ever read a book on OOA/D.
...
Historically MS has joined standards groups as a hardball marketing tool. Their goal is to get the standards group to adopt whatever MS does and that puts their competitors at a disadvantage. When a group like OMG refuses to go along with the plan, MS leaves the group. They essentially trashed OMG's Motif UI doing that because OMG wouldn't modify it to emulate Windows. Alas, Windows was so popular that Motif died, which is sad because it was a pretty good standard. Now we have to live with idiocy like moving the cursor in an 'L' path when trying to select submenus.
They did the same thing with the OO paradigm. They were originally active in the OO committees, including UML. However, they left in a huff when they discovered nobody liked the way they did MFC, COM, DCOM, and ActiveX. They are back again trying the same crap with the MOF initiatives.
...
BTW, there are other ways to do such "type safe" collections if one insists on doing them. You can override Object and add an attribute for a type code that is set by the ctor for that class. Then the collection can have a similar attribute that is set when the collection itself is instantiated. The add(...) then checks for a match in the codes. [Note that this is effectively doing what a dynamically bound OOPL does to check types. The housekeeping just isn't visible.]
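A rough ABL rendering of that idea, with assumed names throughout (ItemTypeName, AddInternal, and the surrounding generic class are hypothetical):

```abl
/* Hypothetical run-time type check in a generic collection's Add.
   ItemTypeName would be set when the collection is instantiated,
   playing the role of the mentor's "type code" attribute. */
METHOD PUBLIC VOID Add (poItem AS Progress.Lang.Object):
    IF NOT poItem:GetClass ():IsA (ItemTypeName) THEN
        UNDO, THROW NEW Progress.Lang.AppError
            (SUBSTITUTE ("This collection only accepts &1 objects",
                         ItemTypeName), 1).
    AddInternal (poItem).   /* the generic base class does the storage */
END METHOD.
```

The check is dynamic rather than compile-time, but it catches a wrong add at the point of insertion rather than at some later cast.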
After a couple more outside discussions and a bit of thinking about this, I am going to shift my position a bit.
First, I am going to continue to say that I don't really think type safe collections are necessary because of the way that collections should be used. I.e., if a collection is supposed to contain type X then a component using that collection should know it has an object of type X before adding it. Any unexpected type mixing in a collection is indicative that one has a mess.
But, if one had generics, then I can see that one could have a single generic source module which performed the cast prior to the return and N object modules, one for each type actually used (presumably created as needed by the compiler), and that this would be acceptable from a code management point of view and advantageous in limiting the need for casts.
If one was sure of getting generics in ABL, then I can see that the include approach could be a bandaid to produce a similar effect at the expense of source code mess, because when we got generics, one could simply modify the base and delete all the definitions containing the includes.
But, at this point I am not so sure about getting generics ... despite multiple use cases ... and so I'm not sure whether the value offsets the mess.