Morning all.
I have a requirement to display a date as "01 April 2015". Now I know I could do this long-hand in ABL, but I also gather I can use the .NET classes to do this, particularly ToString("dd MMMM yyyy") or some such. I'm just wondering how to leverage this functionality within ABL? Here's the article I'm looking at on M$:
http://msdn.microsoft.com/en-us/library/8kb3ddd4(v=vs.110).aspx
Progress 11.2.1 on Windows.
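For reference, the approach that emerges in the replies below boils down to boxing the ABL DATE and casting it to System.DateTime. A minimal sketch, assuming a Windows GUI client with the .NET bridge available:

DEFINE VARIABLE dtDatum AS DATE NO-UNDO INIT TODAY.
DEFINE VARIABLE cResult AS CHARACTER NO-UNDO.

/* BOX wraps the ABL DATE as a System.Object holding a System.DateTime;
   CAST re-types the reference so ToString can be called on it */
cResult = CAST(BOX(dtDatum), System.DateTime):ToString("dd MMMM yyyy":U).

MESSAGE cResult VIEW-AS ALERT-BOX. /* e.g. "01 April 2015" */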
Thanks Mike. Much appreciated. :)
A colleague has been playing with this and has noticed that the memory usage really increases if you do this multiple times. Will Garbage Collection clean up the memory eventually, or do I need to rework this to clear up the objects myself?
Why would you not clean up the objects yourself? :)
The code Mike posted doesn't clean up after itself.
Sorry - hit reply too soon.
Mike's code doesn't clean up after itself, and making sure you clean up makes the code a lot longer to implement, in that you have to define variables to hold the objects for cleanup later on.
My point being, why depend on some magic happening in the background ... magic which the next guy might not understand ... when it is possible to make the lifecycle clear and explicit?
Hmm.
Tested this variant of the code:
DEFINE VARIABLE dtDatum AS DATE INIT TODAY.

ETIME(TRUE).
REPEAT:
    CAST(BOX(dtDatum), System.DateTime):ToString("dd MMMM yyyy":U).
    IF ETIME > 60000 THEN LEAVE.
END.
Initially, memory grows, and then it starts fluctuating between 20500K and 20800K.
If anyone's taking bets, my money (or beer) is on the fact that there are .NET objects being instantiated, so memory use is governed by the somewhat unpredictable .NET garbage collector.
Calling the Garbage Collector in a managed-memory environment "magic that the next guy might not understand" is a little strong.
If the next guy doesn't understand the basics of GC then they probably need to read up on it before writing/maintaining OO code.
Writing manual object deletion code is almost always a **bad** idea.
The GC will only fail to delete an object if something still holds a reference to it.
If something holds a reference to an object, then it is able to try & use that reference.
If the object behind the reference has been manually deleted then you will get invalid-handle errors.
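To illustrate the point, a contrived sketch (Foo and DoSomething are hypothetical names):

DEFINE VARIABLE oShared AS Foo NO-UNDO.
DEFINE VARIABLE oOther  AS Foo NO-UNDO.

oShared = NEW Foo().
oOther  = oShared.       /* a second reference, e.g. held somewhere else */

DELETE OBJECT oShared.   /* manual delete while another reference still exists */

oOther:DoSomething().    /* fails at runtime: invalid object reference */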
I have to disagree, Andrew. The developer knows the lifecycle. In most cases, that means the developer knows when the object needs to be created and when it is no longer needed. Whenever that is true, best practice is for the developer to express that understanding explicitly by managing the deletion. This avoids, among other things, future developers deciding that they can make use of something which may disappear on them. It also handles those situations which current garbage collection doesn't handle, like circular references or event subscriptions.

The old general rule of "if you create it, you delete it" is still a good one. There are cases where this is not possible, e.g., objects which serve as messages, but then it is clear that the receiver reaches a point where the message has been received and the message object can be deleted. Much cleaner, clearer, and safer.
BTW, one of the points here is that people often *don't* understand the limits of current garbage collection and the result is memory leaks.
I will admit that the above remarks are primarily about ABL objects since I have limited experience with .NET objects and don't know what the limitation might be there, but if I am using the .NET object from ABL, then I presume that the ABL understands the lifecycle.
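To make the circular-reference case concrete, a sketch (Node and its Partner property are hypothetical names; the leak behaviour is as described in this thread):

DEFINE VARIABLE oA AS Node NO-UNDO.
DEFINE VARIABLE oB AS Node NO-UNDO.

oA = NEW Node().
oB = NEW Node().

ASSIGN oA:Partner = oB   /* each instance now references the other */
       oB:Partner = oA.

ASSIGN oA = ?            /* the local references are gone, but the cycle */
       oB = ?.           /* keeps both objects alive: per this discussion,
                            the GC does not detect it, so they leak */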
So here's a question. I can assign the result of the BOX to a variable and clean that up after I've used it. I can't find a way, though, to capture the result of the CAST so I can clean that up too; and if I comment out the CAST (so I just use the BOX), the memory increase is minimal anyway.
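For what it's worth, a sketch of one way to capture both references so they can be cleaned up explicitly (variable names are mine):

DEFINE VARIABLE oBoxed AS Progress.Lang.Object NO-UNDO.
DEFINE VARIABLE oDate  AS System.DateTime      NO-UNDO.
DEFINE VARIABLE cText  AS CHARACTER            NO-UNDO.

oBoxed = BOX(TODAY).
oDate  = CAST(oBoxed, System.DateTime). /* same object, just a typed reference */
cText  = oDate:ToString("dd MMMM yyyy":U).

/* CAST re-types the existing reference rather than creating a new ABL
   object, so one delete should cover both variables */
IF VALID-OBJECT(oDate) THEN DELETE OBJECT oDate.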
Well, I have to disagree, Thomas. From an encapsulation point of view, it's a no-no to make any assumptions about what a callee does with a passed object (reference). So if A passes an object to B, A cannot (and should not) make assumptions about what B does with the passed object, and therefore it's not safe to delete the object manually in A. Let the GC do its work.
Makes sense, thanks Mike, but why does the CAST increase memory usage if I do it in a loop, but the BOX doesn't so much?
Ok thanks Mike - will have a play :)
Consider the following code:
DEFINE VARIABLE dtDatum  AS DATE INIT TODAY.
DEFINE VARIABLE oBoxDate AS CLASS Progress.Lang.Object.
DEFINE VARIABLE lv-i     AS INTEGER NO-UNDO.
DEFINE VARIABLE lv-j     AS INTEGER NO-UNDO.

ASSIGN lv-j = 10000.

REPEAT lv-i = 1 TO lv-j:
    ASSIGN oBoxDate = BOX(dtDatum).
    CAST(oBoxDate, System.DateTime):ToString("dd MMMM yyyy":U).
    IF VALID-OBJECT(oBoxDate) THEN
        DELETE OBJECT oBoxDate.
END.
If I run this in the procedure editor, interestingly, the memory usage is no different with or without the DELETE OBJECT. In fact, the memory increases (and increases more on the first run), but basically stays the same after that for each subsequent run. It's as if the memory stays allocated to the process but is marked as unused so the process can reuse it. So does that mean the DELETE OBJECT is a waste of effort?
With a GC, the developer manages the lifecycle of an object by controlling the object reference's scope.
If they don't control the reference scope, then deleting the object manually will just lead to invalid handles in hard to track down locations.
This "I don't trust the GC " worry was one that a lot of C programmers had when they started Java/C++ ... that debate is long since settled in favour of trusting the GC - a google session should find a lot of articles.
If I came across a load of OO code that didn't trust the GC & deleted objects manually, I would run & hide (or just double the estimates of time required to support it).
(It would have the same code-smell to me as seeing masses of RELEASE statements in ABL).
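To illustrate controlling the lifecycle through reference scope, a hypothetical sketch (DateFormatter and FormatDate are made-up names):

CLASS DateFormatter:

    METHOD PUBLIC CHARACTER FormatDate(INPUT dtValue AS DATE):
        DEFINE VARIABLE oDate AS System.DateTime NO-UNDO.

        oDate = CAST(BOX(dtValue), System.DateTime).

        /* oDate goes out of scope when the method returns; nothing else
           holds a reference, so the GC is free to reclaim the object */
        RETURN oDate:ToString("dd MMMM yyyy":U).
    END METHOD.

END CLASS.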
@bronco, When I say "message", I mean a context in which the purpose of the object, probably one that is all properties and no behavior, is to transmit information from A to B. In that context, A knows that its interest in the object is done when it is transmitted and B knows that its interest is done when it has read the message. If B does not delete it, the message object is likely to live until B dies or a new message is received.
@Thomas ... For A to tell the GC "I don't care about object xyz any more", all it has to do is one of the following:
- let its reference to xyz go out of scope, or
- assign something else (e.g. the Unknown value, ?) to the reference.
If A gets rid of its reference when it doesn't care about xyz any more & B does the same then the GC will work as desired.
@Andrew, for starters, we know that the existing GC does have flaws, notably circular references and subscriptions. While these exceptions are now known, they were a surprise to those who discovered them and are likely to continue to surprise people until such time as PSC figures out more sophisticated GC. But, more to the point, this issue seems to me to directly parallel that of buffer or transaction scope. Yes, it is necessary and appropriate to control reference scope and, if one does, then it is a simple matter to delete the object when that scope ends. Anyone caught with an invalid handle is not paying attention to the scope and, moreover, is just as likely to be caught with an invalid reference under GC, perhaps more mysteriously, since GC may or may not have deleted the object by the time one tries to inappropriately reuse it.
To me, there is a direct parallel here to transaction scope, for much the same reason. Yes, it is nice that ABL does the right thing in many cases without one having to make explicit statements, but it is easy to slip up and have it not do the right thing. By being explicit, one makes intent clear.
I understand that it is possible to get GC to work ... what I question is always relying on it. At least for me, if A is a message object, then it is created in the transmit method and will naturally go out of scope at the end of that method when the transmit is done. In B, if the object is read and one is done with it, what is the difference between assigning null and deleting?
Mike, I was referring to
ASSIGN myObjectXyzReference = ?.
Peter, your first point suggests that developers should not care about lifecycle. Are you really saying that?
The time to type the delete statement is trivial. What takes time is thinking about appropriate scope. You have to think about appropriate scope for GC to work or one is just going to have dangling references accumulating.
The issue is not a bug in the GC, per se, but design flaws. There is nothing in the design of the current GC to notice that the only remaining reference is circular.
Although this discussion is going a bit off topic from the original post, I too have to disagree with Thomas. Knowing the limitations of the GC (circular references and event subscriptions), you should favor setting one of the references to ? (so that the GC sees that the only remaining reference can go out of scope), or unsubscribing from events (which is equally necessary in C#, as an event subscription is a strong reference there too).
In fact, we've even seen bugs when explicitly deleting objects (DELETE OBJECT), where the Progress side of things was gone but the .NET side leaked... (.NET doesn't have an explicit delete; you can only ask the GC to run, but then again, when references remain they will prevent deletion).
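For the event-subscription case, the usual pattern is to unsubscribe before letting the reference go. A sketch (Publisher, ValueChanged and ValueChangedHandler are hypothetical names; ValueChangedHandler is assumed to be a method of the subscribing class):

DEFINE VARIABLE oPublisher AS Publisher NO-UNDO.

oPublisher = NEW Publisher().

/* Subscribe creates a strong reference from the publisher's event
   to this subscriber, which keeps the subscriber alive */
oPublisher:ValueChanged:Subscribe(ValueChangedHandler).

/* ... use the publisher ... */

/* break the strong reference before letting go; otherwise
   neither side can be collected */
oPublisher:ValueChanged:Unsubscribe(ValueChangedHandler).
oPublisher = ?.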
Yes, I said that. Generally, all I - as a developer - care about is using the object now (in the current context) for as long as I know I need it. Once I'm done with it - calling a method or updating data or whatever - I let it go (just like Kahlil Gibran says). It's not my responsibility to deal with, or even know, the object's lifecycle. I get an object from somewhere, I do something to it, and pass it on. What happens before or after is beyond my ken.
Of course, if I as developer am writing the code that is responsible for managing lifecycles, then clearly I need to think about this more. But that's an exceptional event, to me.
-- peter
The default GC state is "things will be removed unless you do something".
The default non-GC state is "things will live forever unless you do something".
I prefer dealing with (writing code for) exceptional states, not standard states.
-- peter
My view: just because I have a cleaning lady (might well be a guy, btw) coming over once a week doesn't mean I will trash everything on the floor all the time waiting for the GC. So yeah, I don't want to argue with you guys, but no, I do not feel bad at all about cleaning up myself.

The GC was not available in the first versions where we had OO, it's not complete even now, and some things will never be fixed (Java still has issues with it, and guess what, the ones to blame are the developers that keep references... mostly in collections).

If you (PSC) think the GC should rule the world then it's an easy thing to do... just pipe the DELETE OBJECT to /dev/null and we're all going to be happy; no one will change existing code to remove the clean-up code just because GC was introduced.
Peter, I see your point, but I question the point of view. Some objects, yes, just want to use another object and have no knowledge of where it came from or where it is going. This is very appropriate. But such an object is not managing the lifecycle of the object it uses. Somewhere, though, someone is. And it is that object which knows. If we have F, the object which creates the object, O, the object in question, and C, the consumer of that object, it is part of the design whether O should live after C is done or not. If it should live, then one needs to provide a reference so that it will, whatever form that takes. If it shouldn't, then one doesn't. Since this is part of the basic design, it seems better to be explicit about it than to let it mysteriously happen in the background.