While I can't say that I have completely sorted out all of the highly layered data-access code in AutoEdge, I have a couple of tentative observations that I would like to ask about.
One is that there seems to be an assumption that the properties of an "object", i.e., a business entity in this case, correspond directly to the fields of a single database table, i.e., that there is no complex OO<->RDBMS mapping. This is what allows a shared superprocedure to contain the update and fill code, because there is no need to rename fields, compose more complex fields, or combine data from multiple files.
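Just to make the contrast concrete, here is a minimal sketch of what even a simple field-name mapping would look like when one is needed; the names (ttCar, Car, CarName) are made up for illustration, not the actual AutoEdge schema, and I'm assuming the standard pairs-list argument of ATTACH-DATA-SOURCE:

    /* Hypothetical example: temp-table field CarName is filled from the
       differently named database field Car.Description */
    DEFINE TEMP-TABLE ttCar NO-UNDO
        FIELD CarId   AS INTEGER
        FIELD CarName AS CHARACTER.

    DEFINE DATASET dsCar FOR ttCar.
    DEFINE DATA-SOURCE srcCar FOR Car.

    /* Pairs-list of "temp-table-field,data-source-field" handles the rename */
    BUFFER ttCar:ATTACH-DATA-SOURCE(DATA-SOURCE srcCar:HANDLE,
                                    "CarName,Car.Description").

    DATASET dsCar:FILL().

As soon as the mapping stops being one-to-one like this, the shared fill/update code would have to know about it, which is presumably why the reference implementation avoids it.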
One result of this is that de-normalization, such as translating codes into descriptions, is deferred to a separate step after the dataset is completely filled. In fact, in the case of becar.p there appear to be two de-normalization layers: one in dacar.p, and another that is called from becar.p to get the description, although that code also lives in dacar.p.
While I am all for reusing code, I find myself wondering whether this is really a better design than including the de-normalization in the actual fill process. To be sure, that would mean no shared code for this aspect (although an OO version could use an interface to standardize the entry points), but it would also mean using the capabilities of the ProDataSet more fully, and the code might be less confusing since it would have fewer layers.
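For what it's worth, what I have in mind is something like the following sketch, where the code-to-description lookup happens during the FILL itself via an AFTER-ROW-FILL callback; the Car/Color tables and field names are hypothetical, not taken from the AutoEdge schema:

    DEFINE TEMP-TABLE ttCar NO-UNDO
        FIELD CarId            AS INTEGER
        FIELD ColorCode        AS CHARACTER
        FIELD ColorDescription AS CHARACTER.  /* de-normalized field */

    DEFINE DATASET dsCar FOR ttCar.
    DEFINE DATA-SOURCE srcCar FOR Car.

    BUFFER ttCar:ATTACH-DATA-SOURCE(DATA-SOURCE srcCar:HANDLE).
    BUFFER ttCar:SET-CALLBACK-PROCEDURE("AFTER-ROW-FILL", "carAfterRowFill",
                                        THIS-PROCEDURE).

    DATASET dsCar:FILL().

    PROCEDURE carAfterRowFill:
        DEFINE INPUT PARAMETER DATASET FOR dsCar.

        /* Translate the code into its description as each row is filled */
        FIND Color WHERE Color.ColorCode = ttCar.ColorCode NO-LOCK NO-ERROR.
        IF AVAILABLE Color THEN
            ttCar.ColorDescription = Color.Description.
    END PROCEDURE.

Obviously the callback itself is entity-specific, which is exactly the shared-code trade-off I am asking about.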
Similarly, there seems to be an unfortunate inherent limitation in that centralized FindWhere logic can only make selections based on fields in the main table and can't, for example, select records from one table based on a join with another table.
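By contrast, if the fill query were owned by the entity rather than built from a generic FindWhere string, the selection could be expressed as a join on the data-source itself; again a hypothetical sketch with made-up Car/Color tables rather than the real schema:

    DEFINE TEMP-TABLE ttCar NO-UNDO
        FIELD CarId     AS INTEGER
        FIELD ColorCode AS CHARACTER.

    DEFINE DATASET dsCar FOR ttCar.

    /* Attach a two-table query to the data-source so the FILL can select
       Car rows based on a join with Color */
    DEFINE QUERY qCar FOR Car, Color.
    DEFINE DATA-SOURCE srcCar FOR QUERY qCar Car, Color.

    BUFFER ttCar:ATTACH-DATA-SOURCE(DATA-SOURCE srcCar:HANDLE).

    QUERY qCar:QUERY-PREPARE(
        "FOR EACH Car NO-LOCK, "
      + "EACH Color WHERE Color.ColorCode = Car.ColorCode "
      + "AND Color.Description = 'Red' NO-LOCK").

    DATASET dsCar:FILL().

A FindWhere string that only gets substituted into "FOR EACH Car WHERE ..." can't express that kind of selection, as far as I can see.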
Am I seeing this correctly?