When does Data Access become Business Logic?

Posted by ojfoggin on 06-Aug-2009 14:45

We had the Progress guys at our work today presenting us with Sonic et al and the first step we have as a result of the meeting is to move towards an OERA architecture.

At the moment we are at a place that works, but we would like to move to a better-practice architecture.

ATM we have the GUI, an AppServer (this contains procedures with both Business Logic (BL) and Data Access (DA)) and the Database (Progress database).  I am trying to get a better understanding of the separation of a Business Logic layer and a Data Access layer, and the line between the two seems very blurry.  Also, we are going to be moving towards using ProDataSets.

My confusion comes when I try to implement the setup.

Say (for convenience) we had already set up the system and had Sonic ESB set up etc...

We create a Crystal report to show all of the customers for region 2 and the total value of the orders they had made over the past month.

This then runs through Sonic and eventually gets to our AppServer Business Logic layer and expects an XML file to be output back into Sonic and then out to the Crystal Report.

The output needs to be customer acc numbers, customer name and total order value for the past month.

What then is the interaction between the BL layer and the DA layer and where does the ProDataSet come in?

Initially I had the theory that the DA layer should pass back temp tables, thus...

BL Layer: Go to DA and get all customers for region 2.

DA layer: Query database, create temp table, pass temp table to BL layer.

BL Layer: For each tt-customer, go to DA layer and get all the sales for the past month.

DA Layer: Query database, create temp table, pass temp table to BL Layer.

BL Layer: For each tt-order accumulate the orders to get a total value.

end for each tt-order.

end for each tt-customer.

Define ProDataSet.

Fill ProDataSet.

Output proDataSet.

My manager suggested that this could be done differently however, thus...

BL Layer: Go to DA and get all customers for region 2.

DA Layer: Query DB, create temp-table, pass temp table to BL layer.

BL Layer: For each tt-customer, go to DA and get the total value of sales for that month for this customer.

DA Layer: Query DB, accumulate total orders, pass back total.

BL Layer: end for each tt-customer.

output ProDataSet.

I debated that if you were going to do that then you may as well do this...

BL Layer: Go to DA and get all customers for region 2 and their total order values for the past month in the form of a ProDataSet.

DA Layer: Query DB, Get customers, calculate totals, create ProDataSet, output to BL Layer.

BL Layer: output PDS.

This however makes the BL Layer redundant and we're back where we started but with a void layer in between.

My question is what traffic should be passed between the BL and the DA?  How much processing can the DA do?  (In my final example the DA layer is only accessing data and presenting it in a different way).

In my head I believe that the DA should perform DB queries and copy the results into a temp table and pass the temp table out to the BL.  Nothing more.  Is this correct?
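To make that concrete, this is the sort of shape I have in my head for the DA side (table and field names are made up for the example):

/* daCustomer.p - query the database, copy the results into a
   temp-table, pass the temp-table out.  Nothing more. */
DEFINE TEMP-TABLE ttCustomer NO-UNDO
    FIELD AccNum   AS CHARACTER
    FIELD CustName AS CHARACTER.

PROCEDURE getCustomersForRegion:
    DEFINE INPUT  PARAMETER pcRegion AS CHARACTER NO-UNDO.
    DEFINE OUTPUT PARAMETER TABLE FOR ttCustomer.

    EMPTY TEMP-TABLE ttCustomer.

    FOR EACH Customer NO-LOCK
        WHERE Customer.Region = pcRegion:
        CREATE ttCustomer.
        ASSIGN ttCustomer.AccNum   = Customer.AccNum
               ttCustomer.CustName = Customer.Name.
    END.
END PROCEDURE.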

Any help or advice is appreciated.

Thanks

Oliver

All Replies

Posted by Mike Ormerod on 06-Aug-2009 15:54

Hi Oliver

Firstly, whether you use a temp-table or a ProDataSet is in many ways not the issue, as either one is simply a method of transporting the data around.  Having said that, I think ProDataSets tend to be the preferred method as they offer extra functionality over just a temp-table.  If you've not already done so, I would suggest looking at a couple of documents.  Especially look for the 'Architecture Made Simple' series of docs, as they discuss many of the issues you raise, such as the role of the BE, the DA, etc.  Initially you will see in Part 1 that temp-tables are used, then in Part 2 these are replaced by ProDataSets.

At a high level, the DA components are those that understand the physical database and as such can perform the queries to fill the temp-table or ProDataSet.  It is the role of the Business Entity to then perform any business logic upon that data.
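To make that concrete, a rough sketch of how the roles might split out (all names here are purely illustrative, and the fill copies columns whose names match):

DEFINE TEMP-TABLE ttCustomer NO-UNDO
    FIELD AccNum AS CHARACTER
    FIELD Name   AS CHARACTER
    FIELD Region AS CHARACTER.

DEFINE DATASET dsCustomer FOR ttCustomer.

/* Data Access: the only place that knows the physical Customer table */
DEFINE DATA-SOURCE srcCustomer FOR Customer.

PROCEDURE fetchCustomers:
    DEFINE INPUT PARAMETER pcRegion AS CHARACTER NO-UNDO.

    BUFFER ttCustomer:ATTACH-DATA-SOURCE(DATA-SOURCE srcCustomer:HANDLE).
    DATA-SOURCE srcCustomer:FILL-WHERE-STRING =
        "WHERE Customer.Region = " + QUOTER(pcRegion).
    DATASET dsCustomer:FILL().
    BUFFER ttCustomer:DETACH-DATA-SOURCE().
END PROCEDURE.

/* Business Entity: never touches the database, only the filled data */
PROCEDURE applyBusinessRules:
    FOR EACH ttCustomer:
        /* totals, validations, derivations, ... */
    END.
END PROCEDURE.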

If you're considering Sonic, also check out the documentation around the native invocation method between Sonic and OpenEdge.  If I remember correctly, it shows how to use a ProDataSet in this context.

HTH

Mike

Posted by Thomas Mercer-Hursh on 06-Aug-2009 16:25

I originally responded to this under the post on the ABL forum, but I am moving the response here:

First, good for you for asking questions.  Defining your architecture is not only something that you want to get clear up front, but that you want to make sure is something that you understand and agree with ... deeply.  Don't take any whitepapers, published examples, or consultant's proclamations as gospel.  Question everything until you have a right answer that works for you.  If this is an area which is new to the people you have on staff, no matter how long they have been writing in Progress, seriously consider getting a mentor to help guide you through the process.  But, don't just take the first mentor that comes to light; interview all you can find and think critically about what they tell you.  There are a number of us who think the best guidance on architecture comes from someone other than the primary vendor of the tools, if you know what I mean.

Second, you might want to think from very early on about whether your target is going to be an OO version of OERA or a non-OO version.  The principles don't change, but there is a shift in vocabulary and it might be better to sort out and start using a consistent vocabulary from an early stage.  I would encourage thinking about OO because I think there is a notable synergy among OO, OERA, and SOA.  You don't have to go that way, but there are some good reasons to consider it.

Next, you need to realize that not every layer participates in every activity.  Layers are roles, and if a role doesn't apply in a particular activity, then it either doesn't appear or is reduced to a facade.  If you are merely fetching data for a report, there is no business logic involved unless there is some kind of processing involved in transforming the data as returned from the DA layer.  If not, then there isn't really any BL involved here.

And, you need to recognize that there are some very different styles and you are going to have to make choices about which styles you use.

For example, there are some people who would only ever design a DA layer with very "atomic" components, while there are others who would say that it was perfectly appropriate to create compound DA components when those components correspond to meaningful business objects in the application.  Thus, one person would say that one should only have Customer and Order DA objects and would require a composition object to create a combination of the two.  But another designer would say that various summary data about customers was either something which the Customer DA object should know about, or that there could be something like a CustomerSummaryData DA object.  There is a lot of art in design here, since one wants to optimize non-redundant code but at the same time make things efficient.  But, in either case, the dataset should be constructed in the DA layer.

As for the role of the PDS, that is another area of debate and style.  There is one school of thought, typified by PSC whitepapers, which uses the PDS as a "data object" created by the DA layer and passed to the BL layer.  But there is another school of thought (http://www.oehive.org/OERAStrategies being an early example) which feels that complex data sets like this should not be passed around, since that means that both sides need the definition and thus encapsulation is broken.  Instead, the PDS should be wrapped in an entity object or collection object and that should be what is passed.

Finally, on the interface to Sonic, PSC has been doing a lot of cool stuff to make it easy to push a PDS at an interface and have it be automatically converted.  If that does the job for you, then easy peasy, but remember that serializing a dataset to XML is also very easy, so if you need something you aren't getting from the automatic routines, just do it yourself.
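By "do it yourself" I mean no more than something like this (hDataset standing in for whatever dataset handle you have in hand):

/* Serialize a filled dataset to XML and rebuild it on the other side.
   hDataset is assumed to be the handle of an already-filled dataset. */
DEFINE VARIABLE hDataset AS HANDLE   NO-UNDO.
DEFINE VARIABLE lcXml    AS LONGCHAR NO-UNDO.

hDataset:WRITE-XML("LONGCHAR", lcXml, TRUE /* formatted */).
/* ...hand lcXml to Sonic, a file, an HTTP response, whatever... */

/* and on the receiving side: */
hDataset:READ-XML("LONGCHAR", lcXml, "EMPTY", ?, ?).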

Oh, and while this isn't central to your question, there is a hint in the way you asked it that you may want to do some fresh thinking about the role of AppServer in all of this.  AppServer is really a very specific way to connect clients and server-side logic.  It has had some very nice things done with it over the years to allow for non-ABL clients and the like, but it also has some very specific ways of working which aren't necessarily what one wants to do in all places in a modern N-tier architecture.  In particular, a Sonic service isn't necessarily a client which is going to be accessing a remote database via AppServer.

Posted by ojfoggin on 06-Aug-2009 17:37

Thanks both!

I've watched the webinars part 1 and 2 that you mentioned (thank you) and I now have a much much much better understanding of how to use PDSs.  Also the whole concept of separating the Data Access with the Business Logic is clearer although I will have to try some stuff tomorrow.  (I'm at home at the moment).

Thomas, I completely understand when you say that for a report there is no business logic.  The data is only being spat out in some form or other, so that makes sense.  As for writing (i.e. changing a customer order), I presume you would make all your changes to the temp-tables in the GUI (i.e. what items you want etc...), then the BL side would update some log fields or something (struggling to find things here) before sending the PDS back to the DA layer to run the update.  At which point the DA will either run the updates or shout back because of locking issues etc...

ProDataSets should be page 1 of the 4GL essentials.  I can see so many uses for them already in the project I am currently working on.  Even without going down the SOA / OERA route.  Especially when I have sometimes 10+ temp tables being passed from GUI to AppServer.

I presume though that there is some overhead related to using a PDS as opposed to a temp table?

For example if a particular temp-table used up 5kb of memory space and I put it into a PDS with a before table and tracked changes of some fields.  I am guessing the PDS would be considerably larger?

Thanks for your help.

Posted by Thomas Mercer-Hursh on 06-Aug-2009 18:21

As for writing (i.e. changing a customer order) I presume you would make all your changes to the temp-tables in the gui

First question, of course, is: what is the UI?  If you are doing some architecture redesign, one of the things I would certainly strongly recommend is that you look at getting a really clean separation between UI and the rest, so that you can use different UIs in the future without having to rewrite the server-side logic.  This is particularly apt if you are looking at Sonic, since Sonic acts like a UI layer with no UI, if you get what I mean.  You hand things off to it and it gives you things, and from the BL perspective that is all the same stuff.

So, think in terms of the UI putting together an order or modifying an existing order and doing so largely disconnected from the server side.  Then, it gets all done and looks to commit and something goes off to the server.  I say something, because I would design in such a way that it could be XML in a Sonic message, a PDS, JSON to PDS, whatever.  Then the BL gets this proposed order and it has to do a lot more than just shove it to the DA for persistence.  Usually, there are all kinds of internal checks and checks with other subsystems like stock availability, credit, customer status, etc.  Then, when whatever has been done and everything is OK, it can be persisted, but if there is a problem, it may need to go back to the originator ... "your last check bounced", "we are out of item X and don't expect more for 3 weeks" etc.  And, of course, one probably wants positive confirmation to go back as well, at least most of the time.
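In very rough terms, the shape of that server-side piece might be something like this (every name and check here is invented, just to show the flow):

/* Illustrative Business Entity logic: receive a proposed order, run
   the checks, and only then ask the DA layer to persist it. */
DEFINE TEMP-TABLE ttOrder NO-UNDO BEFORE-TABLE ttOrderBefore
    FIELD OrderNum   AS INTEGER
    FIELD CustNum    AS INTEGER
    FIELD OrderValue AS DECIMAL
    FIELD ErrorText  AS CHARACTER.

DEFINE DATASET dsOrder FOR ttOrder.

DEFINE VARIABLE hOrderDA AS HANDLE NO-UNDO.   /* handle to the DA procedure, set elsewhere */

PROCEDURE saveOrder:
    DEFINE INPUT-OUTPUT PARAMETER DATASET FOR dsOrder.
    DEFINE OUTPUT       PARAMETER plOk AS LOGICAL NO-UNDO INITIAL TRUE.

    FOR EACH ttOrder:
        /* internal checks plus checks against other subsystems:
           stock availability, credit, customer status, ... */
        IF ttOrder.OrderValue <= 0 THEN
            ASSIGN plOk              = FALSE
                   ttOrder.ErrorText = "Order has no value.".
    END.

    IF plOk THEN
        /* everything checked out - ask the DA layer to persist it */
        RUN persistOrder IN hOrderDA (INPUT-OUTPUT DATASET dsOrder BY-REFERENCE).
    /* otherwise the dataset goes back to the originator carrying the error texts */
END PROCEDURE.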

PDS are cool, no question about it, but don't start seeing a hammer as the solution to a screwdriver problem.  There are still a number of places where a plain old TT is just the ticket and any more is excess.

I haven't seen any data on relative efficiency, but PSC has always been very good about compacting things tightly so that no excess is being wasted.  The expensive thing, as Tim discovered, is defining a lot of TTs that you don't actually use while simultaneously forgetting that the default block size on disk is now four times bigger.

Posted by Admin on 06-Aug-2009 23:30

tamhas wrote:

PDS are cool, no question about it, but don't start seeing a hammer as the solution to a screwdriver problem.  There are still a number of places where a plain old TT is just the ticket and any more is excess.

I'd use a different analogy, as a ProDataSet-contained temp-table still has all the features of a stand-alone temp-table (so it's not like hammer or screwdriver).  The ProDataSet does not limit the temp-table in any way.  It's a container for temp-tables and adds a lot of (potentially useful) features.  The ProDataSet is like a large toolbox, containing many screwdrivers, hammers and many other tools.  You might just need the tiny screwdriver now, but bringing the whole toolbox won't hurt.  A professional mechanic will always carry the toolbox to be prepared for the unexpected.

For the sake of generalization I'd always use ProDataSets (or a DATASET-HANDLE) as the interface.  You might have to add additional data (joined or unrelated temp-tables, etc.).

I doubt there is a lot of overhead with the dataset either.  The before-table will only get populated when you start updating data - and read-only tables (well, tables that are read-only today) won't have a before-table at all.  It's a very efficient and decoupled way of tracking those changes.  Some (external) interfaces might not support that (tracking changes) properly, but then it's a task for the service interface procedures (usually procedures, as classes can't be called externally) to transform an appropriate data structure to a dataset and back.
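To illustrate the tracking-changes point (all names purely illustrative):

DEFINE TEMP-TABLE ttCustomer NO-UNDO BEFORE-TABLE ttCustomerBefore
    FIELD AccNum   AS CHARACTER
    FIELD CustName AS CHARACTER.

DEFINE DATASET dsCustomer FOR ttCustomer.

DEFINE VARIABLE hChanges AS HANDLE NO-UNDO.

/* read-only so far: the before-table holds no rows at all */

TEMP-TABLE ttCustomer:TRACKING-CHANGES = TRUE.   /* switch on before editing starts */

FIND FIRST ttCustomer NO-ERROR.
IF AVAILABLE ttCustomer THEN
    ttCustomer.CustName = "New name".            /* now a before-image row exists */

TEMP-TABLE ttCustomer:TRACKING-CHANGES = FALSE.

/* extract only the changed rows to send across the wire */
CREATE DATASET hChanges.
hChanges:CREATE-LIKE(DATASET dsCustomer:HANDLE).
hChanges:GET-CHANGES(DATASET dsCustomer:HANDLE).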

Posted by ojfoggin on 07-Aug-2009 02:00

I was thinking of something else RE PDS etc...

Say for example we have a very common Data Access of "Get Order" and this returns a PDS with the order header, order lines and order sublines (serial numbers etc...).

Say I now want to display an order but also some unrelated data, such as the stock levels of a particular item.

Now I need to get the order (I can reuse my "Get Order" DA code) but also the details about the items.  Do I write whole new code to return this in one PDS?  Do I return the order PDS and a temp-table with the item details, or do I get the order PDS and an item stock levels PDS from some other DA code and merge the two together into one PDS?  (Is that possible?)

The whole concept is brand new to me (and my colleagues) so they may seem like very basic questions but I think we need to sort these out before trying to use them.

Thanks again!

I'm off to work to try some of these out now.

Posted by davidkerkhofs3 on 07-Aug-2009 03:22

That's the beauty of ProDataSets: you can detach/attach context-specific information without changing the in and out parameters.

Of course, from a design perspective I would say everything comes down to how rigid you are in following standards you choose/specify.

At a certain point in time some feature within your app or even within the ABL will force you to either create something less useful but compliant with your design standards or to go with the flow...

Design is hard and redesign even harder. I prefer keeping strictly to some strong but easy principles but don't tie yourself to a very nice theory.

Also, use diagrams to visualize your ideas on architecture and discuss them with your colleagues, who know the apps you need to create.

Posted by Admin on 07-Aug-2009 07:46

ojfoggin wrote:

Now I need to get the order (I can reuse my "Get Order" DA code) but also the details about the items.  Do I write whole new code to return this in one PDS?  Do I return the order PDS and a temp-table with the item details, or do I get the order PDS and an item stock levels PDS from some other DA code and merge the two together into one PDS?  (Is that possible?)

This could probably be done using a dynamic ProDataSet (DATASET-HANDLE).  The difficulty might be that a single temp-table can only be a member of a single dataset at a time.  So at some point you'd end up creating a copy of the temp-tables (including data), and that may be inefficient.

You might use an array of variable size to return datasets in the current session. Then you wouldn't have to merge two different datasets into a single one which might also be problematic when working with changed datasets. The service interface needs to translate that into a list of DEFINE OUTPUT PARAMETER DATASET-HANDLE hDataset1 .... DEFINE OUTPUT PARAMETER DATASET-HANDLE hDatasetN for transport from AppServer to client (well, when you are using the AppServer in the traditional way).
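A sketch of such a service-interface procedure (handles and names invented; hOrderBE and hStockBE would be handles to the relevant business entities, set elsewhere):

PROCEDURE getOrderAndStock:
    DEFINE INPUT  PARAMETER piOrderNum AS INTEGER NO-UNDO.
    DEFINE OUTPUT PARAMETER DATASET-HANDLE phOrder.
    DEFINE OUTPUT PARAMETER DATASET-HANDLE phStock.

    /* each handle comes back from its own business entity / DA call */
    RUN fetchOrder IN hOrderBE (piOrderNum, OUTPUT DATASET-HANDLE phOrder).
    RUN fetchStock IN hStockBE (piOrderNum, OUTPUT DATASET-HANDLE phStock).
END PROCEDURE.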

The ADM2 service adapter for the DataView (the ADM2 client representation of a business entity) is written in such a way.  And for those that prefer non-ABL clients, the array members could be merged into a single XML structure using appropriate root nodes.

Posted by Peter Judge on 07-Aug-2009 08:15

This could probably be done using a Dynamic prodataset (DATASET-HANDLE). The difficulty might be that a single temp-table can only be member of a single dataset at a time. So at one point in time you'd end up with creating a copy of the temp-tables (including data) and that may be inefficient.

A single temp-table can be a member of many PDS' if you create separate, named buffers (and set the buffer's AUTO-DELETE property to false). This is really nice for binding a PDS to more than one BindingSource (if you're using Gui for .Net), or for re-using cached data in multiple business entities (say).

/** Since a ProDataset can only be navigated by a single
    ProBindingSource at a time, we need a facility to clone
    the dataset for the UI.
**/
define private temp-table ttDatasetClone no-undo
  field DatasetHandle as handle
  index idx1 is primary unique DatasetHandle.

method public handle CloneDataset(phDataset as handle):

  define variable iLoop as integer no-undo.
  define variable hBuffer as handle no-undo.
  define variable hRelation as handle no-undo.
               
  define buffer lbDataset for ttDatasetClone.
               
  create lbDataset.
  create dataset lbDataset.DatasetHandle.
       
  do iLoop = 1 to phDataset:num-buffers:
    create buffer hBuffer for
      table phDataset:get-buffer-handle(iLoop)
      buffer-name phDataset:get-buffer-handle(iLoop):name.
           
    /* We don't want the buffers deleted when the cloned dataset is cleaned up. */
    lbDataset.DatasetHandle:add-buffer(hBuffer).
           
    /* Set the Auto-Delete to False since we're re-using the buffers from the
       master dataset, and we don't want those to get blown away willy-nilly
       when the cloned dataset is cleaned up.
                                                                          
       We MUST do this /after/ the buffer's been added to the ProDataSet else
       it has no effect. */
    hBuffer:auto-delete = false.
  end.
               
  do iLoop = 1 to phDataset:num-relations:
    hRelation = phDataset:get-relation(iLoop).
           
    lbDataset.DatasetHandle:add-relation(
      lbDataset.DatasetHandle:get-buffer-handle(hRelation:parent-buffer:name),
      lbDataset.DatasetHandle:get-buffer-handle(hRelation:child-buffer:name),
      hRelation:relation-fields,
      hRelation:reposition,
      hRelation:nested,
      hRelation:active,
      hRelation:recursive,
      hRelation:foreign-key-hidden ).
  end.

  return lbDataset.DatasetHandle.
end method.
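For context, a hypothetical call site might look roughly like this (dsOrder here is just an example dataset):

/* clone the master dataset so a second ProBindingSource can navigate it */
define variable hClonedDataset as handle no-undo.

hClonedDataset = CloneDataset(dataset dsOrder:handle).
/* ...bind hClonedDataset to the second ProBindingSource... */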

-- peter

Posted by ojfoggin on 07-Aug-2009 08:20

Sorry to go off topic but is there any way to get rid of the banner down the right hand side of the site?

It turns the posts into unreadable columns of text.

T

h

e

y

a

l

l

s

t

a

r

t

t

o

l

o

o

k

l

i

k

e

t

h

i

s

.

Posted by Admin on 07-Aug-2009 08:21

pjudge wrote:

A single temp-table can be a member of many PDS' if you create separate, named buffers (and set the buffer's AUTO-DELETE property to false). This is really nice for binding a PDS to more than one BindingSource (if you're using Gui for .Net), or for re-using cached data in multiple business entities (say).

Cool. Will that result in different buffers for the BEFORE-TABLEs as well?

Posted by ojfoggin on 07-Aug-2009 08:45

Thanks for all the help with this!

I have another question about PDSs.

We have a table that stores the header information of different documents (i.e. orders, despatches, invoices, etc...).  For invoices there is a table related to the header that contains value, VAT, tax code, etc... information.

The 2 tables are linked by document type (order, despatch, invoice, etc...) and document number.

The current procedure for getting all of this information goes thus...

get the headers for a particular order.

put them into a temp-table.

For each header that is an invoice type find the invoice relating to it.

Put them into another temp-table.

Return 2 temp tables.

If I set up a dataset to contain these 2 temp-tables, what is the best way to populate it?

I can't do it in one query because if there is no invoice the header will be ignored.  I can query the header table on its own and that works, but if I try to define the invoice table as part of the PDS and then populate the header, it shouts at me for not doing anything with the invoice table.
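From what I've read so far, I'm guessing the shape should be roughly this (table and field names simplified; docHeader and docInvoice stand in for our real tables), though maybe the invoice attach is where I'm going wrong:

DEFINE TEMP-TABLE ttHeader NO-UNDO
    FIELD DocType AS CHARACTER
    FIELD DocNum  AS INTEGER.

DEFINE TEMP-TABLE ttInvoice NO-UNDO
    FIELD DocType  AS CHARACTER
    FIELD DocNum   AS INTEGER
    FIELD NetValue AS DECIMAL
    FIELD VatValue AS DECIMAL.

DEFINE DATASET dsDocument FOR ttHeader, ttInvoice
    DATA-RELATION drInvoice FOR ttHeader, ttInvoice
        RELATION-FIELDS (DocType, DocType, DocNum, DocNum).

DEFINE DATA-SOURCE srcHeader  FOR docHeader.
DEFINE DATA-SOURCE srcInvoice FOR docInvoice.

BUFFER ttHeader:ATTACH-DATA-SOURCE(DATA-SOURCE srcHeader:HANDLE).
BUFFER ttInvoice:ATTACH-DATA-SOURCE(DATA-SOURCE srcInvoice:HANDLE).

/* headers for one order; headers with no invoice should still come
   through because the invoice table fills per header via the relation */
DATA-SOURCE srcHeader:FILL-WHERE-STRING = "WHERE docHeader.OrderNum = 1234".
DATASET dsDocument:FILL().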

I think I'll go peruse the PDS documentation now that I know a bit more about them.

Thanks for any tips though!

Posted by Admin on 07-Aug-2009 08:50

(The whole concept is brand new to me (and my colleagues) so they may seem like very basic questions but I think we need to sort these out before trying to use them.)

Consider a scenario where a PDS contains sale, saleDetail, and saleTender temp-tables. SaleDetail at the UI can be a serialized component or not. You probably want to check availability / status as the order is entered. In my application I have a PDS (dsSerial) that returns all the information I need to construct a proper saleDetail temp-table record. The user enters a serial number and I go to the appserver and fill the dataset with one record. Some important information might be the currentSerialStatus (sold, InStock, Reserved, etc), Manuf, Model, cost, price, IsNew, Trade Status, etc. I also use dsSerial in other areas of the application (acquisitions of serialized inventory, reports, etc.) where the fill will, in some cases, return several thousand records.

The user entering the sale can clearly see the status of the serialized item. If it is "sold" and the user creates the sale anyway (save and print) the business layer will catch this and return a message (such and such is already sold). If all is good then the BL allows the creation of sale, saleDetail, saleTender, and serialTrx rows through the DA layer.

The DA layer (AppServer side) for dsSerial is extensive.  Several possible constructions of the query-prepare statement exist, with top-level tables being customer, manuf, model, itemInventory, etc.

If the item being sold is not serialized I use a different PDS (dsItemInventory). This PDS is also used extensively throughout the application.

Posted by ojfoggin on 07-Aug-2009 08:54

Thanks Daniel,

That's helped.  I think the next step is for the whole dev team to sit down and discuss how to take this forward.

Look at the data needed and how to go about getting it etc...

Posted by ojfoggin on 07-Aug-2009 09:38

I've just had a revelation.

I've been thinking in terms of how to pass data that maps 1:1 to the database back to the Business Logic layer in a PDS.

I need to be thinking in terms of getting data that is in a sensible format that the BL layer can interpret.

This makes it a lot easier to think about now.

Posted by Admin on 07-Aug-2009 10:03

Notice the eSerial temp-table has many derived columns from several tables. A fill will just buffer-copy these columns from the joined tables if the column names match.

DEFINE TEMP-TABLE eSerial {&referenceOnly} BEFORE-TABLE eSerialBefore
/* serial */
   FIELD serialID AS CHARACTER LABEL "Serial ID" FORMAT "X(64)"
   FIELD serialNumber AS CHARACTER LABEL "Serial Number" FORMAT "x(12)"
   FIELD fireArmID AS CHARACTER LABEL "FireArmID" FORMAT "X(64)"
/* itemInventory */
   FIELD itemInventoryID AS CHARACTER LABEL "itemInventoryID" FORMAT "X(64)"
   FIELD itemInventoryNumber AS CHARACTER LABEL "Item Number" FORMAT "x(30)"
   FIELD itemCategoryID AS CHARACTER LABEL "Item CategoryID" FORMAT "X(64)"
   FIELD itemBrandID AS CHARACTER LABEL "Brand ID" FORMAT "X(64)"
   FIELD itemInventoryRetailPrice AS DECIMAL LABEL "Retail Price" FORMAT "->>,>>9.99"
   FIELD itemInventorySellPrice AS DECIMAL LABEL "Sell Price" FORMAT "->>,>>9.99"
/* derived from serialTrx, acquisition and saleDetail */
/* "In Stock" when (serialTrx --> acquisitionDetail)
   (C)omplete, (H)istory, (Q)uoted, (S)topped, (A)llocated, (P)lanned (if serialTrx --> saleDetail) */
   FIELD currentSerialStatus AS CHARACTER LABEL "Status" FORMAT "x(10)"
   FIELD saleF4473 AS INTEGER LABEL "f4473" FORMAT "99999"
   FIELD saleDetailDate AS DATETIME LABEL "Detail Date" FORMAT "99/99/99"
   FIELD saleDetailSalesPerson AS CHAR LABEL "Sales Person" FORMAT "x(20)"
   FIELD saleDetailContact AS CHARACTER LABEL "Contact" FORMAT "x(30)"
   FIELD saleDetailApproval AS CHARACTER LABEL "Approval Number" FORMAT "x(20)" INITIAL "?"
   FIELD saleDetailPrice AS DECIMAL LABEL "Price" FORMAT "->>,>>9.99"
   FIELD saleDetailIsTrade AS LOGICAL LABEL "Is Trade" FORMAT "yes/no" INITIAL "no"
   FIELD saleDetailTradeAmt AS DECIMAL LABEL "Trade" FORMAT "->>,>>9.99"
   FIELD acquisitionDetailDate AS DATETIME LABEL "acquisition Date" FORMAT "99/99/99" INITIAL "?"
   FIELD acquisitionDetailProductIsNew AS LOGICAL LABEL "Is New" FORMAT "yes/no" INITIAL "yes"
   FIELD acquisitionDetailContact AS CHARACTER LABEL "Contact" FORMAT "x(30)"
   FIELD acquisitionDetailApproval AS CHARACTER LABEL "Approval Number" FORMAT "x(20)" INITIAL "?"
   FIELD acquisitionDetailCost AS DECIMAL LABEL "Cost" FORMAT "->>,>>9.99"
/* derived from manuf, manufModel, fireArm, fireArmBarrel, etc. */
   FIELD manufName AS CHARACTER LABEL "Manuf. Name" FORMAT "x(45)"
   FIELD manufModelName AS CHARACTER LABEL "Model Name" FORMAT "x(12)"
   FIELD fireArmBarrelDesc AS CHARACTER LABEL "Barrel Desc" FORMAT "x(8)"
   FIELD fireArmCaliberDesc AS CHARACTER LABEL "Caliber Desc" FORMAT "x(8)"
   FIELD fireArmTypeDesc AS CHARACTER LABEL "Name" FORMAT "x(8)"
   FIELD fireArmFinishDesc AS CHARACTER LABEL "Finish Desc." FORMAT "x(30)"
   FIELD fireArmProductCode AS CHARACTER LABEL "Product Code" FORMAT "x(30)"
   INDEX idxAcquisitionDetailContact IS WORD-INDEX
       acquisitionDetailContact
   INDEX idxSaleDetailContact IS WORD-INDEX
       saleDetailContact
   .
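Much simplified, the attach-and-fill side looks roughly like this (the join fields are approximated here, and the real query-prepare has several constructions depending on the caller; dsSerial is the dataset containing eSerial):

DEFINE QUERY qSerial FOR serial, itemInventory, manuf.

DEFINE DATA-SOURCE srcSerial FOR QUERY qSerial
    serial KEYS (serialID), itemInventory, manuf.

BUFFER eSerial:ATTACH-DATA-SOURCE(DATA-SOURCE srcSerial:HANDLE).

/* one of several possible query constructions */
QUERY qSerial:QUERY-PREPARE(
    "FOR EACH serial NO-LOCK WHERE serial.serialNumber = '12345',"
    + " FIRST itemInventory NO-LOCK WHERE itemInventory.itemInventoryID = serial.itemInventoryID,"
    + " FIRST manuf NO-LOCK WHERE manuf.manufID = itemInventory.manufID").

/* matching column names are buffer-copied into eSerial during the fill */
DATASET dsSerial:FILL().

BUFFER eSerial:DETACH-DATA-SOURCE().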

Posted by Mike Ormerod on 07-Aug-2009 10:32

Absolutely.  One major benefit of a PDS is that you can create a 'de-normalized' form of your data that is easier for your Business Logic to work with.  For example, it's very common for Customer info to be spread across multiple physical DB tables (Contact, Address, etc.), whereas using a PDS you can have a single view of Customer that combines the data from the multiple physical tables.  All your business logic then deals with this 'virtual' view of Customer, and it's up to the DA layer components to then handle the mapping back of the virtual customer to the real tables for updates etc.
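In rough terms, the 'map it back' step in a DA component then looks something like this (simplified to a single underlying table, and all names here are illustrative):

DEFINE TEMP-TABLE eCustomer NO-UNDO BEFORE-TABLE eCustomerBefore
    FIELD CustNum  AS INTEGER
    FIELD Name     AS CHARACTER
    FIELD Address1 AS CHARACTER.   /* flattened from the Address table */

DEFINE DATASET dsCustomer FOR eCustomer.

DEFINE DATA-SOURCE srcCustomer FOR Customer.   /* simplified to one underlying table */

BUFFER eCustomer:ATTACH-DATA-SOURCE(DATA-SOURCE srcCustomer:HANDLE).

/* walk the changed rows the business logic handed back and save each one
   through the data-source; with a joined data-source you would call
   SAVE-ROW-CHANGES once per underlying buffer (see the ProDataSet docs) */
FOR EACH eCustomerBefore:
    BUFFER eCustomerBefore:SAVE-ROW-CHANGES().
    /* on a conflict the row's ERROR / ERROR-STRING attributes are set
       and travel back to the caller inside the dataset */
END.

BUFFER eCustomer:DETACH-DATA-SOURCE().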

Posted by ojfoggin on 07-Aug-2009 10:50

Woot!

Thanks for all the help!

I have now created my first Data Access program that could potentially be used to pass a useful PDS back to the BL layer.

I love the way fill works!  It took me a while to get to grips with attaching one temp-table to a query and the other to the DB table, but once I worked that out ... whoosh! ... perfectly populated ProDataSet!  (there's an alliteration that I bet has never been used before )

The whole thing of querying one table and getting back related data from n number of tables is amazing!

Now, let's see if I can get the next bit working...

Posted by Thomas Mercer-Hursh on 07-Aug-2009 12:28

You might just need the tiny screwdriver now, but bringing the whole toolbox won't hurt.  A professional mechanic will always carry the toolbox to be prepared for the unexpected.

Analogies can get tortuous.  Clearly, the PDS should be in one's toolset, but I see no reason to make everything a PDS when the TT covers the requirement.  Changing it to a PDS later isn't a big deal if requirements change, but there are many cases where it is pretty clear that the usage is not likely to change.  E.g., a table of country codes or some such which is a local cache to avoid excessive DB lookups is read-only and unlikely to benefit from any PDS feature.

Posted by Thomas Mercer-Hursh on 07-Aug-2009 12:36

You could solve this problem a number of ways and different people would solve it differently.  My sense, though, is that the stock level of an item isn't really a property of the order ... the only issue related to the order is whether there is sufficient stock available to ship the order.  And, that can be a complex issue since the desired ship date might be three weeks out and the stock which one expects to use to ship it might be arriving in two weeks.  So, I wouldn't put stock levels into the order at all, but rather would have a separate object which was a cache window onto the database to contain the information about the items.

One way to look at this is to think in terms of your future SOA on ESB evolved system where there might be an Order service on one machine and database and an Inventory service on another machine and database.  When you allocate stock to the order, you are going to send a message to the inventory service requesting that allocation ... you aren't going to tell the inventory service about all the details of the order.  And, you are going to tell the order that it is or is not allocated, but not whether there was 1 or 10,000 additional items available.

Posted by Thomas Mercer-Hursh on 07-Aug-2009 12:40

This could probably be done using a Dynamic prodataset (DATASET-HANDLE).

"Could" and "should" aren't necessarily the same thing.  Whether or not one is actually using OO constructs, think in terms of an Order object since that kind of encapsulation is a good thing even with procedural code.  Do you really want an Order object which may or may not have stock level data attached?

Posted by Thomas Mercer-Hursh on 07-Aug-2009 12:42

A single temp-table can be a member of many PDS' if you create separate, named buffers (and set the buffer's AUTO-DELETE property to false). This is really nice for binding a PDS to more than one BindingSource (if you're using Gui for .Net), or for re-using cached data in multiple business entities

Horrors!  Certainly not something one would ever do in any good OOA/D process.  Create an association between one object and multiple others, but make it actually a part of multiple others????

Posted by Thomas Mercer-Hursh on 07-Aug-2009 12:47

Join in the suggestion box on the PSDN Feedback forum ... but don't expect quick action ...

Posted by Thomas Mercer-Hursh on 07-Aug-2009 12:53

Whether or not you are going to use OO constructs, I think it is useful to think of this kind of issue in OO terms, regardless of what data structure you might have to persist the data.  One of the whole points of a DA layer is to make the actual persistence form invisible relative to the way that it is used.

So, my first question here is what is the business purpose of a collection of headers of mixed type?  Is that something you are actually going to use in the application?  If not, then assembling a collection of purchase orders or assembling a collection of invoices could easily have different components, regardless of the fact that the headers are stored in the same table.  If yes, then I would need to understand what meaning such a collection had.

Posted by Thomas Mercer-Hursh on 07-Aug-2009 12:56

That description sounds a lot like a traditional ABL client in a client/server context, i.e., line by line and field by field interaction with the database.  This isn't the way one tends to design applications in an OERA context.

Posted by Thomas Mercer-Hursh on 07-Aug-2009 12:58

I've just had a revelation.

BASIC REVELATION!  Stored form and use form may or may not have anything to do with one another, especially if the stored form is something evolved over years and years of prior modifications.

Posted by Admin on 07-Aug-2009 13:45

tamhas wrote:

You might just need the tiny screwdriver now, but bringing the whole toolbox won't hurt.  A professional mechanic will always carry the toolbox to be prepared for the unexpected.

Analogies can get tortuous.

IMHO it's closer to the ABL world to compare a dataset and temp-table with a screwdriver and something that contains a screwdriver but does not reduce the functionality of the screwdriver - like a toolbox.

Changing it to a PDS later isn't a big deal if requirements change

That depends on whether you are using the datasets/temp-tables in the interface or completely encapsulate them. There are many people who do pass them around (like myself) and feel pretty good, and yet future-proof, with it.

Message edited by Mike Fechner

Posted by Thomas Mercer-Hursh on 07-Aug-2009 14:04

IMHO it's closer to the ABL world to compare a dataset and temp-table with a screwdriver and something that contains a screwdriver but does not reduce the functionality of the screwdriver - like a toolbox.

Like I said, analogies can be as confusing as they are useful.

Point being, for me, that if it is more complicated than is needed for the purpose, then it isn't the right tool.

Posted by Admin on 07-Aug-2009 14:23

The application is Point of Sale. A customer arrives at one of the many registers with misc. inventory goods off the floor or requests the purchase of an item that is serialized (pistol, shotgun, rifle). The sales person scans the item or enters the serial number. At this point the sales person needs feedback to compare the item in hand with the description recorded in the db. A quick request to the appserver for details is the only way to do this. This is the only activity during the sale process that needs to access real time information. The BE handles any possible problems (permits expired, approval numbers not entered, etc.) with messages, and the UI allows modification and retry.

This procedure has to be quick.  Every keystroke is expensive and any delays are a disservice to everyone involved.  If one were to batch the sales requests without immediate feedback there would be many expensive errors, with selling the wrong firearm (on paper) being the worst.

This thread is closed