As discussed in another thread, I am splitting my classes, with generated code in a base class inherited by custom classes.
My library is split this way, and the value objects as well.
so, I have four classes
LibParent --Inherits--> LibAuto
VOParent --Inherits--> VOAuto
(xxxAuto is automatically generated code)
I have the Get() method as a static, and the Save() and Remove() methods on the VOAuto as normal methods, so I can say in code
VOParent:Get("foo") etc
The Get() method is a placeholder for the Get() method in the LibAuto code, i.e
METHOD STATIC PUBLIC VOParent Get(p_Value AS CHAR):
    RETURN LibParent:Get(p_Value).
END METHOD.
so, this goes off and returns an object representing the record / structure / whatever the library produces
However, I want the same for Save().
METHOD PUBLIC VOID Save():
    LibParent:Save(THIS-OBJECT). /* no RETURN value: the method is VOID */
END METHOD.
*however*, in the abstract class, THIS-OBJECT is obviously of type VOAuto, not VOParent, and I get a mismatched parameter error
I presume that the only realistic way past this is to define all the parameters as Progress.lang.Object and cast them to the correct object in the library methods ?
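If you did go the Progress.lang.Object route, it might look something like this minimal sketch (class names follow the thread; the body is illustrative, not the actual library code):

```abl
CLASS LibParent:

    /* Parameter typed as the root class; the concrete type is
       recovered inside the method with TYPE-OF and CAST. */
    METHOD STATIC PUBLIC VOID Save (p_Object AS Progress.Lang.Object):
        IF TYPE-OF(p_Object, VOParent) THEN
            MESSAGE "Saving:" CAST(p_Object, VOParent):ToString().
        /* ... actual persistence would go here ... */
    END METHOD.

END CLASS.
```

As the rest of the thread argues, though, typing the parameter against the abstract super class (or an interface) is usually the cleaner option.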
Any reason why you're not defining all parameters based on the abstract super classes?
yes - the :Save() and :Remove() methods are meant to act on the
current object, not a parameter
so, without the super class I have
def var a as class ValueObject.foo.
a = ValueObject.foo:Get("bar"). /* get is a static method that returns
a ValueObject.foo instance */
a:Someproperty = "SomeValue".
a:Save().
or
def var a as class ValueObject.foo.
a = ValueObject.foo:Get("bar"). /* get is a static method that returns
a ValueObject.foo instance */
a:Remove().
Based on this brief description, there are a bunch of questions which come to mind which have little to do with your immediate question ... but they might influence the answer.
Is one or the other of these a factory which produces objects, or is it the self-filling object itself?
Why exactly is any of this static?
Does "value object" mean simply an object with all the data members associated with some entity, but not the behavior so that this information is being turned into a full blown object elsewhere?
Are you really sure that you don't want to enhance your generator to produce code which includes anything custom, instead of introducing inheritance? Is xxxAuto specific to an entity type, i.e., is there more than one child of either of these classes?
Wow. A bunch of questions. Most of which I have no idea what you are
talking about
1) A Factory ? If you mean
http://en.wikipedia.org/wiki/Factory_method_pattern , then no. All
objects are created by the generator as a specific class
2) Static - because I don't want to have to create an object to get it
def var a as someclass.
a = new someclass().
a:get("foo").
vs
def var a as someclass.
a = someclass:get("foo").
right or wrong, I prefer the second mechanism, which requires a static method
3) "valueObject" - yes, exactly. This object only contains properties
or the bare minimum methods required to crud (which are actually
passed along to the "library" which just works on objects of that
class passed as parameters
4) Custom code - one of my desires and requirements is that custom
code is written, and compiled, in the OEA to take advantage of all the
IDE goodies (code completion, syntax checking etc.)
On 2 March 2011 21:57, Thomas Mercer-Hursh
Most of which I have no idea what you are talking about
Isn't that one of my rôles in life?
There are some people who only use the word factory to mean something that produces many types of objects, but I am using it in the broader sense of an object whose job it is to create one or more object types. But, one would expect to use a single factory to create all objects of a particular type. Thus, the same factory might use either a value object from a DA layer source or a value object from the UI or a message to create an instance of the object type for which it was designed.
Err, but the reason for static is NOT convenience in coding. Making something static has consequences, not the least of which is that it is not going away. I would not use it casually ... especially not to save typing.
So, does your "library" create objects? I.e., is the value object being produced by the DA layer and then passed to the BL layer where you build an object and work with it? If so, good for you!
Where you write custom code does not determine how you use it. E.g., one can use model-to-code translation to produce a standard component object, add custom code in OEA, and then reverse engineer to bring the custom code back into the model where it will be included in any subsequent translations. This has some problems too, but I would strongly prefer it to sticking in an inheritance hierarchy in which there was only one parent and one child. That makes nonsense of the whole idea of generalization and would get your fingers slapped in an OO code review.
Where you write custom code does not determine how you use it. E.g., one can use model-to-code
translation to produce a standard component object, add custom code in OEA, and then reverse
engineer to bring the custom code back into the model where it will be included in any subsequent
translations.
Not everybody has the time and resources to develop this. Is there anything useful as a starting point for the full round trip engineering?
+reverse engineer to bring the custom code back into the model where
it will be included in any subsequent translations+
I was originally doing something along the lines of that by checking
for the custom code within @ tags. I struggle to see how you could
possibly add code in any place at any time and have the round-trip
identify generated vs custom code (or worse, modified generated code),
save it, and regenerate the model with all of the custom code in the
right place.
+but I would strongly prefer it to sticking in an inheritance
hierarchy in which there was only one parent and one child. That
makes nonsense of the whole idea of generalization and would get your
fingers slapped in an OO code review+
Why? There may be extra properties that the developer may wish to add
to the parent that are not automatically generated by the code.
On 2 March 2011 23:15, Thomas Mercer-Hursh
See my original post. I had a form of round-trip working, but it was
within custom tags, something that Peter didn't like
See my original post. I had a form of round-trip working, but it was
within custom tags, something that Peter didn't like
What Dr. Thomas promotes - to my understanding - is to import the custom code into the model, making it part of the model, including custom methods, properties and possibly modifications to generated code.
That's different from merging changes to the model with existing code.
And requires the parser (code importer) to understand more about the language than annotations.
Yeah, I think that you are right. However, I am trying to produce a standard
model for all table objects, so the custom code is unique for each table,
not the model. Before anyone jumps up, when I say table, I mean business
component
On 3 Mar 2011 07:39, "Mike Fechner"
Phil's round trip code is on PSDN, but yes, there is no immediately available MDA code to go with it. Stay tuned .... at this point I am mostly trying to keep a placeholder for people to be thinking "That's what I would like to do if I didn't have to do it all myself". Which said, developing transforms for one object type shouldn't be a big deal. It might be a lot less of a deal than reworking the whole system later because one had put in a non-functional generalization hierarchy in every class.
Yes, what happens with this kind of reverse engineering cycle is that the initial generation is of a fairly basic object ... all the stuff you know you want based on the model, but no custom code. Custom code is captured as text associated with the method and re-inserted on the next generation. In the simple form, no method starts out with any code, just a signature and an empty block. Thus, the custom code is everything in between. If you try to have the translation fill in some standard code and also allow custom code, then you certainly could have a problem separating the two on the read back in. Since Phil hasn't supplied source, as I recall, that code would not work for you. But, if you put the standard code into the model as text, then it could work, although then you aren't going to have the easy evolution of the standard bits.
Absent a working UML/MDA solution, you can use something more like the SDD approach where there is a template and specification. The specification chooses among standard options which come from the template and includes custom code which is injected at specified places in the generated code.
Because the whole idea of a generalization hierarchy is that it generalizes something ... or better, a set of somethings. Think in terms of a Venn diagram. On the diagram are real world entities in the problem space. Draw a circle around all of the entities which have the same knowledge properties and behavior. That is a class. Notice that some group of classes has some common knowledge or behavior. Draw a circle around those and associate the common knowledge and behavior with the bigger circle and the unique knowledge and behavior with the individual classes. That is a generalization hierarchy where the big circle is the super and the individual classes are the subs.
What you are doing is drawing one circle inside another. Not only is there just a single class inside, so nothing is being generalized, but the smaller class can even be empty because there is no custom code! Eeek!
I've not looked at Phils code yet, but have this idea in my head that
I want to share so we can all build on it if it's a workable idea ...
Using code generators, rather than catch all dynamic code
Statement: auto-generated code is not perfect. There's always a case
where someone needs to modify the generated code (extra properties,
methods etc). This code is not generic, and thus applies only to the
object in question, so it's not easy to build into the code generators.
Problem: if you re-run the code generators, your custom code is overwritten
Solution #1: put custom code between tags. When rebuilding the file,
first grab the code between these tags, and insert it at the
appropriate point when rebuilding. However, this all falls over if
some "code dinosaur" (thanks Peter!) removes a tag.
Solution #2: put the generated code into a separate abstract class
which is inherited by the custom class. Not liked by oo purists
("rapped across the fingers" - thanks Thomas!)
Solution #3: use some form of versioning and diff / merge to save the
custom changes. I haven't yet got this straight in my head, but it
goes something along these lines:
a) Each version of the code generator is given a specific vxx number (v1,v2 etc)
b) code is generated for the first time, copied somewhere and given a v1 tag
c) code is modified by the user
d) code is generated for the second time.
e) If the user has not modified the code then the CRC checks will
pass, and the file can be ignored
f) if the user has modified the code, do a three-way diff with the
v1, the modified code and the new generated code
g) this should generate a new file, with the new changes from the code
generator and the custom changes all included
I know that this is a ramble, but I think that there is something
here. Just need to make it work
On 3 March 2011 17:46, Thomas Mercer-Hursh
Solution 4: Explore MDA ... the modern solution.
Solution 5: Explore the SDD approach. This uses templates plus a specification file for each program to be generated. The specification file fulfills three purposes. One, it provides the data needed to create a particular result, e.g., table and field names or whatever. Two, it provides switches for alternate behavior selected from options provided in the template. This means that one template can be much, much richer, since there are a lot of variations in this kind of code which are used over and over again and don't need custom code to be written each time. Three, it provides custom code associated with predefined "hooks" in the template. If a given hook is empty, nothing is generated for that. If a hook is defined, then the code associated with the hook is inserted at the defined location in the generated source.
Solution 4 is, I think, the right way to go long term, but you probably won't have a working version next week. Solution 5 is enormously flexible, and one can modify the template and regenerate quickly to produce a fresh copy of all code with the modifications included wherever they are supposed to be. Drives version control nuts, of course, but is very powerful and fast.
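To make the hook idea concrete, a hypothetical template fragment might look like this (the &HOOK-* names and the MyLib call are invented for illustration; they are not from SDD itself):

```abl
/* Template fragment: each {&HOOK-*} argument is filled in from the
   per-program specification file, or expands to nothing if unused. */
METHOD PUBLIC VOID Save():
    {&HOOK-BeforeSave}
    MyLib:Save(THIS-OBJECT).
    {&HOOK-AfterSave}
END METHOD.
```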
I don't think #4 is for me. Seems far too complex for my minimalistic
requirements
I did, however, have a laugh reading on
http://en.wikipedia.org/wiki/Model-driven_architecture
"Forrester Research declaring MDA to be "D.O.A." in 2006"
#5 is more intriguing, as my current template/tag system is similar in
concept. So, you write the custom code in the specification file, and
your generator reads the template, finds hooks, calls hooks and
inserts any code. I presume that the specification file dictates which
template(s) to use.
The downside to this is that I like writing code in the OEA, with
all the flashy stuff like code completion, colour syntax, syntax
checking etc etc. I wonder if I could write the specification as an
include file ? Hmm, looks like it's possible ... syntax check is
screwed of course.
On 3 March 2011 18:54, Thomas Mercer-Hursh
I would say that the Wiki article needs updating! MDA is alive and well and the standard of practice in certain industries. It is, for example, SoP in areas like telephony and R-T/E. It just hasn't caught on so much in enterprise class business applications ... which, of course, is a huge opportunity for productivity gains, improved quality, and nimble responsiveness. But, I understand that it can seem like a big jump to get started. Stay tuned ... BTW, it is something which Progress Professional Services uses routinely in their transformation projects.
As for NuevoSDD, it certainly is an approach that has potential. I would guess I have written 1-1.2 million lines of ABL that way. But, it can certainly use an update.
As for editing in an intelligent editor, there are several ways to use the tool. For things like a table name, one really just wants that to be in the specification. Likewise, option selections were just Yes or a number or a word. In the beginning, we put all the code right in the specification file, but as time went on we tended to take larger chunks of code, put them in include files in the same directory, and then put an include file reference as the argument to the hook. With a smarter macro processor one could probably switch input temporarily to the include file in order to produce a single stream of code as the output. Back in the early 90s I didn't mind include files the way I do now.
Even with the primitive tools I used back then, the generation was nearly instantaneous. We could regenerate the whole system of something well in excess of a million lines of target code in maybe 20 minutes, even with the slow disks and processors of the time. Regenerating one program or a directory full of programs was essentially instantaneous. One can then, of course, load and look at the result of the expansion, although I suppose one might annoy OEA by continually changing the file from outside ... lots of refreshes, I suppose.
do you have any examples of the specifications or templates hanging around ?
On 3 March 2011 20:18, Thomas Mercer-Hursh
Drop me a line so I am sure I have your current e-mail and I will send you the marketing ppt and see if I can't dig up a few samples.
Phil's round trip code is on PSDN,
Did you use it? In a real world environment? I tried it.
That code is:
- undocumented
- unsupported
- not updated in a while
- closed source (for the plugin)
- based on a meta model that is not documented too well
Doesn't sound too charming for the real world.
but yes, there is no immediately available MDA code to go with it. Stay tuned .... at this point I am mostly trying to keep a placeholder for people to be thinking "That's what I would like to do if I didn't have to do it all myself".
Does this become your new standard disclaimer? That's new to me
Which said, developing transforms for one object type shouldn't be a big deal. It might be a lot less of a deal than reworking the whole system later because one had put in a non-functional generalization hierarchy in every class.
Ok, but to me that sounds more like SDD than pure ABL round-trip engineering.
> It just hasn't caught on so much in enterprise class business applications ... which, of course, is a huge opportunity for productivity gains, improved quality,
I'm not against MDA, don't get me wrong! But I'm tempted to remind you that those industries are using different tools and different languages. I am convinced (without having numbers - do you?) that ABL developers without MDA are more productive than developers in other languages. I am also convinced that the overall productivity a team may optimally achieve with MDA in ABL, Java, .NET etc. is similar. So in the end, I see less potential productivity win for the ABL with MDA.
Did I suggest that Phil's tool was production ready? .... believe me, I have been arguing for a long time with many folks that the source for it and for the .df reader should be released so that someone could evolve and extend it into a real usable tool, but thus far I have failed every time. I am pretty sure that PPS has a more evolved version which they are using but not giving us in the belief that no one will buy PPS services if they gave away such an important tool.
So, I am currently looking at ideas, tools, and strategies to make this real. Interest from the market would help.
But, no, the complexity of delivering a complete model-to-code solution is substantially greater, dramatically greater, even, than developing a solution which only covers one type of object
Did I suggest that Phil's tool was production ready?
No, but for me it was important to mention it's weakness and incompleteness. Mentioning the availability of such tools alone in the context of such threads might leave the impression that such a production tool is available or at least close.
.... believe me, I have been arguing for a long time with many folks that the source for it and for the .df reader should be released so that someone could evolve and extend it into a real usable tool, but thus far I have failed every time.
I tried that same, unfortunately with no success.
But, no, the complexity of delivering a complete model-to-code solution is substantially greater, dramatically greater, even, than developing a solution which only covers one type of object
100% agreement.
There are several issues in your comments, so let me see if I can sort out some possible answers.
There have been some dubious productivity studies over the years. Back in the 80s, it was typical for people to talk about 10X gains of ABL over something like C. But, since then, significant parts of ABL have gotten more 3GLish and 3GLs have developed some 4GLish features and, perhaps more importantly, there are now lots of frameworks and libraries such that there is code one just doesn't have to write. Lately, I have heard numbers more like 3X gain. But, this assumes that neither is using MDA.
A 3GL shop using MDA is likely to be significantly *more* productive than an ABL shop that hand writes code. Fortunately, we don't see that a lot except in specialized applications, so no one notices. My belief is that a strong MDA push with ABL would put ABL again way ahead in productivity and, perhaps more importantly, in nimble response to changed requirements. This is exactly what Forrester et al. are telling us is critical and where the "Java is a dead end for enterprise business apps" argument comes from.
Yes, if one had two shops, both using top of the line MDA approaches, one generating ABL and the other generating some 3GL, the productivity gap might not be large ... but at least the ABL shop wouldn't be coming up short and, I think, the ABL shop would still have an advantage in debugging, analysis, and understanding.
I am pretty sure that PPS has a more evolved version which they are using but not giving us in the belief that no one will buy PPS services if they gave away such an important tool.
By the way, did you invite them to demo that during PUG Challenge?
Yes, if one had two shops, both using top of the line MDA approaches, one generating ABL and the other generating some 3GL, the productivity gap might not be large ...
Sounds much like my previous post, if you ask me.
I tried both to get a session on CloudPoint and a workshop on CloudPoint. But, Phil and his team are too busy with billable work and no one outside of PPS knows enough.
For a complete view, one has to consider the alternatives and also other trends.
Among the trends there is the use of tools which take over some part of the application and allow a flavor of model-to-code within the domain which they take over. One of the good topical examples is BPM and probably CEP. In both cases, there is probably some added value as well, i.e., things one likely couldn't or wouldn't do without the tool, but there is also high productivity and nimble responsiveness because one is working largely at a model level. Both ABL and the 3GLs can use such tools and to the extent they do, that part of the application is going to be equivalent.
For the part which is actually code, it seems to me that there are four possibilities.
1. Neither uses MDA. This is mostly where we are now and ABL has something like a 3X productivity advantage, maybe less if the 3GL is used with strong tool support.
2. Both use MDA. We may get here eventually, but it seems like a rare event for some time to come. ABL might have a slight productivity advantage because the generated code will be easier to read and understand.
3. The 3GL uses MDA, but ABL doesn't. Here I think the 3GL is at a productivity advantage, possibly substantial. Fortunately, it is rare so far in enterprise business applications.
4. ABL uses MDA, but the 3GL doesn't. This requires development of ABL MDA, but if that were to occur, then it could become a common comparison. ABL would have a substantial productivity advantage, possibly 10-30X.
Where would you rather be? 1 is status quo and I think dangerous long term. 3 is clearly ugly. 2 isn't very comfortable, but certainly better than 3. 4 seems to me to be very attractive.
But, Phil and his team are too busy with billable work and no one outside of PPS knows enough.
That's a pity for the community - and Progress Software (not PPS) should have a higher interest here than protecting a couple of billable days.
For PPS it reminds me a lot of: "The man who stops advertising to save money, is like the man who stops the clock to save time", Henry Ford.
Where would you rather be? 1 is status quo and I think dangerous long term. 3 is clearly ugly. 2 isn't very comfortable, but certainly better than 3. 4 seems to me to be very attractive.
I am much in favor of productivity as well. Productivity rules - that's why we offer tools to increase productivity.
But number four is all wishful thinking - and we must face the reality that it's not there, no strong enough vendor seems to be interested in developing such a tool.
But the whole debate about that thinking doesn't help people like Julian, and customers that I work with, in the actual question that raised this thread. I believe that UML (or any other kind of visualization) might have supported him here in understanding the best model, and would have helped to solve the issue (remember, that was a simple invalid-type issue when passing THIS-OBJECT, maybe a deeper design flaw).
UML is useful to recognize or at least visualize issues like this and I like to use it for that. But that's different from the actual code generation as like round trip engineering is different from UML design: You can use UML without round trip engineering and RTE without UML.
I must say, I've lost track in this thread, if the actual question was already answered and if Julian got further or not.
+I must say, I've lost track in this thread, if the actual question was already answered and if Julian got further or not.+
no. yes. maybe ;)
I've reverted back to my original model (@ placeholders being inserted for generated code, and preserved for custom code)
I was trying to work around so many problems it just didn't make any sense to continue.
Synopsis of the problem
Originally:
A : C
A calling a method in C.
A passes itself to C: C:Save(THIS-OBJECT), where C declares METHOD PUBLIC VOID Save(p_Object AS A).
However, I wanted to separate out user code from generated code, so I introduced B, which inherits A:
A -> B : C
B calling a method in C.
B inherits A, which has the call to C: C:Save(THIS-OBJECT), where C now declares METHOD PUBLIC VOID Save(p_Object AS B).
This now causes a problem, because *during compilation* THIS-OBJECT in A refers to the A class, not the B class.
If I left the code as it originally stood in C, and kept the parameter as A, I would then not get any properties defined in B being passed along.
So, after many convoluted attempts, I finally ended up with code looking like this:
B -> A : C
Override Save in B: B:Save() calls A:Save(THIS-OBJECT), and A:Save(p_Object AS B) calls C:Save(p_Object), where C declares METHOD PUBLIC VOID Save(p_Object AS B).
It didn't look right, it didn't feel right, and it introduced generated code back into B, which was what I was trying to avoid in the first place ;)
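A compressed sketch of the shape that fails to compile (following the A/B/C naming in the synopsis; the bodies are illustrative):

```abl
CLASS C:
    METHOD STATIC PUBLIC VOID Save (p_Object AS B):
        /* actual table reading/writing happens here */
    END METHOD.
END CLASS.

CLASS A:   /* generated code */
    METHOD PUBLIC VOID Save():
        C:Save(THIS-OBJECT).  /* compile error: THIS-OBJECT is typed
                                 as A here, but C:Save() expects a B */
    END METHOD.
END CLASS.

CLASS B INHERITS A:   /* custom code only */
END CLASS.
```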
Can you share some code skeletons? Maybe it will be easier to follow then?
I still think that by defining the parameters as the abstract classes it should work.
this now causes a problem, because *during compilation* THIS-OBJECT in A refers to the A class, not the B class
If I left the code as it originally stood in C, and kept the parameter as A, I would then not get any properties defined in B being passed along.
Why do you say this? You can't see 'em in (since you're looking at the object reference through the lens of type A not B), but they're still there on the object reference.
B->A : C
Override Save in B: B:Save() calls A:Save(THIS-OBJECT), and A:Save(p_Object AS B) calls C:Save(p_Object), where C declares METHOD PUBLIC VOID Save(p_Object AS B)
If B inherits from A, then you can pass it to a method that expects A. Or am I missing something?
So Save() in B can look like this, and the compiler is happy and so hopefully are you
class A:
    method public void Save(po as A):
        /* saved by the bell, mate! */
    end method.
end class.

class B inherits A:
    method public void Save(po as B):
        super:Save(po).
    end method.
end class.
I am also a little confused why you can't simply use inheritance (with abstract classes or not) for this. Surely, if you want C's Save() method to know about something specific in B, you'll define an overload for Save(B); if you only care that the object reference passed is an A, then Save(A) is the right mechanism. "A" can be an abstract type or an interface (I like interfaces myself), but that depends on the implementation (but since you're passing it around, I'd vote interface).
-- peter
I was trying to be smart, and have the crud code in a separate library.
So my ValueObject was just a bunch of properties and a couple of methods to get, save and remove.
def var a as vo1.
a = vo1:get("guid").
/*manipulate a: properties */
a:Save().
note that I didn't have any parameters to a:Save() because I *already* have the object to save in a :)
Now, when passing control onto the library to do the actual table reading/writing, I have to say (in A)
mylib1:Save(THIS-OBJECT)
so now, if I were to create a class B that inherits A, within A I have THIS-OBJECT pointing to the A class, not the B class.
So, you are right, I then have to create a method in B that states THIS-OBJECT:Save(THIS-OBJECT) and modify A to accept a parameter.
Which defeated the original purpose of not having any auto-generated code in class B
jmls wrote:
I was trying to be smart, and have the crud code in a separate library.
So my ValueObject was just a bunch of properties and a couple of methods to get , save and remove
def var a as vo1.
a = vo1:get("guid").
/*manipulate a: properties */
a:Save().
note that I didn't have any parameters to a:Save() because I *already* have the object to save in a
Now, when passing control onto the library to do the actual table reading/writing, I have to say (in A)
mylib1:Save(THIS-OBJECT)
so now, if I were to create a class B that inherits A , within A I have the THIS-OBJECT pointing to the A class, not the B class.
So, you are right, I then have to create a method in B that states THIS-OBJECT:Save(THIS-OBJECT) and modify A to accept a parameter.
Which defeated the original purpose of not having any auto-generated code in class B
Is the code below somewhat like what you're thinking?
One question is whether the Save() method on the library requires a parameter of type B. But then you'd have a B-Lib with the appropriate parameter. Or am I completely misunderstanding you? (which is eminently possible)
runner.p
def var o as B.
o = new B().
o:Save().
/* shows that A-Lib has object of type B */
class A-Lib:
    method static public void Save(input po as A):
        message 'saving (defined as A)' po po:getclass():typename.
        if type-of(po, B) then
            message cast(po, B):PropInBee.
    end method.
end class.

class A:
    method public void Save():
        A-Lib:Save(this-object).
    end method.
end class.

class B inherits A:
    def public property PropInBee as char init 'PropInBee' no-undo get. set.
end class.
-- peter
yes, but with large reservations
1) it actually brings up a question I've been meaning to ask for a
while: if A is inheriting B, and I use a B object as a parameter, are
all of A's extra properties available if I recast B into A?
2) I would hate to have to code a library which converts one parameter
into another to cope with the fact that the parameter is actually
meant to be a super class of B
1) it actually brings up a question I've been meaning to ask for a
while: if A is inheriting B, and I use a B object as a parameter, are
all of A's extra properties available if I recast B into A?
Yes.
The object won't get converted. You're just looking at the same object using a different pair of glasses.
2) I would hate to have to code a library which converts one parameter
into another to cope with the fact that the parameter is actually
meant to be a super class of B
CAST and TYPE-OF should be sufficient.
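In other words, something along these lines (a sketch, assuming B inherits A as elsewhere in the thread, and a hypothetical B-only property):

```abl
METHOD STATIC PUBLIC VOID Save (p_Object AS A):
    /* Work with the A-typed reference; check and cast only when
       B-specific members are actually needed. */
    IF TYPE-OF(p_Object, B) THEN
        MESSAGE CAST(p_Object, B):PropInBee.
END METHOD.
```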
mikefe wrote:
1) it actually brings up a question I've been meaning to ask for a
while: if A is inheriting B, and I use a B object as a parameter, are
all of A's extra properties available if I recast B into A?
Yes.
The object won't get converted. You're just looking at the same object using a different pair of glasses.
Great analogy, Mike!
-- peter
Which defeated the original purpose of not having any auto-generated code in class B
And, which illustrates why you don't want to be using inheritance here. A shouldn't know about anything other than A. Giving it a B and expecting it to act upon it as a B is counter to the whole principle of segregating responsibilities in an inheritance tree. There is only one responsibility here. Fix it.
Julian, please, A does not know that B exists. It should not cast anything to B. It should deal only with its own properties. B knows that A exists because it inherits from it. It can therefore access any knowledge and behavior of A BUT THIS IS NOT RECIPROCAL!
Don't Do That!™
I agree. that's why I didn't like it. It just doesn't feel right.
I feel like Niles in Frasier, about to have a nose-bleed because it is
so wrong
On 4 March 2011 17:37, Thomas Mercer-Hursh
It certainly isn't a problem unique to PSC. It is very typical for professional services organizations to be very protective of their intellectual property, apparently based on the idea that if they let *any* of it out of their secret stash then no one would hire them .... as opposed, for example, to recognizing that letting people know something about what one is doing and how it is being accomplished conveys understanding and credibility. PPS is perfectly happy to do marketing, but it is marketing about the service products they provide, not about how they will accomplish that service if you hire them.
The part that is more curious about this to me is that there doesn't seem to be a regular communication channel with the R&D group. One would think that the challenges, problems, successes, ideas, etc. generated by PPS doing real work for real clients would be a very valuable source of information for R&D. So important, in fact, that I could see having an R&D person whose full time job was monitoring what was going on with PPS as an input for R&D directions.
jmls wrote:
Yeah, I think that you are right. However, I am trying to produce a standard
model for all table objects, so the custom code is unique for each table,
not the model. Before anyone jumps up, when I say table, I mean business
component
I think that the question here is who will perform the work on the value objects/components. The generated library? Or does it pass the task off to a more-specialised class?
I think that by their nature - each table/component is unique and without any common ancestors - you will have to have either a specialised per-table "save" routine, or you will have to do some form of reflection.
So in the code that does the actual save to the DB ("DataAccess layer") you could loop through the DB Buffer's NUM-FIELDS/BUFFER-FIELD() and do a DYNAMIC-INVOKE() on a method called "Get" or "Set" (since there's no reflection on properties in the current release). These Getters and Setters would be auto-generated in the table value objects.
You can have your custom, generated objects implement an IAmTable interface that the Save() method in the library takes, so that you can have some type-safety. IAmTable can be an empty interface if needed - just to provide the strong typing.
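That reflection loop might look something like this sketch (the Customer buffer, the IAmTable interface and the generated Set* method names are all illustrative assumptions, not existing code):

```abl
/* DataAccess layer: copy each buffer field into the value object by
   dynamically invoking its generated Set<FieldName>() method. */
METHOD PUBLIC VOID Load (p_VO AS IAmTable):
    DEFINE VARIABLE hBuffer AS HANDLE  NO-UNDO.
    DEFINE VARIABLE hField  AS HANDLE  NO-UNDO.
    DEFINE VARIABLE i       AS INTEGER NO-UNDO.

    hBuffer = BUFFER Customer:HANDLE.
    DO i = 1 TO hBuffer:NUM-FIELDS:
        hField = hBuffer:BUFFER-FIELD(i).
        /* e.g. calls p_VO:SetName(...) for the "Name" field */
        DYNAMIC-INVOKE(p_VO, "Set" + hField:NAME, hField:BUFFER-VALUE).
    END.
END METHOD.
```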
-- peter
The four options are not all our choice. We only get to choose about the ABL side of this. The difference between #2 and #4 is what the competition is doing. We can hope they are slow, but we can't control it. The difference between #1 and #3 is that ABL looks bad by comparison if the competition is taking advantage of something we are not.
This is not theoretical rocket science. To some extent, iMo and PPS are already doing it. To the extent I know what they are doing, they aren't choosing to make the best of it, but that is because of the vision and orientation of the organizations, not because of what is possible. The tools are there. One simply has to decide to use them. It is far from trivial to decide what and how to do, but, hey, it isn't trivial to figure out the best approach for TDE or MT databases either ... both of which I think are significantly more difficult than this problem.
So, as far as the initial question in this thread, the answer is Don't Do That!™
The part that is more curious about this to me is that there doesn't seem to be a regular communication channel with the R&D group.
Sad but true. Having been part of that team for a couple of years (left PSC 11 years ago) this does not surprise me at all.