I've had some discussions recently about some places where work-tables might actually be useful, because they are lighter weight than temp-tables with their minimum 9KB footprint for an empty TT. WTs could definitely have a use as a lightweight collection implementation if they got support for Progress.Lang.Object fields (see enhancement request 4049).
But, in talking about it, one of us remembers some ugly issues with the WT implementation ... other than obvious things like the lack of indices and such ... having to do with a WT entry being 64 bits, with 48 of those taken up by an address or something. I'm not sure how that works, since obviously WT records can be more than 16 bits. Does anyone remember these details and what implications they have?
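For context, here is a rough sketch of the kind of lightweight collection use I have in mind (table and field names are just made up, and today the field has to be a built-in type such as CHARACTER -- a Progress.Lang.Object field is exactly what the enhancement request is asking for):

/* Hypothetical sketch only: a work-table used as a simple in-memory list.
   Work-tables have no indexes, so access is purely sequential. */
DEFINE WORK-TABLE wtItem
    FIELD itemValue AS CHARACTER.

DEFINE VARIABLE i AS INTEGER NO-UNDO.

DO i = 1 TO 3:
    CREATE wtItem.
    wtItem.itemValue = "entry " + STRING(i).
END.

FOR EACH wtItem:
    DISPLAY wtItem.itemValue.
END.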
FYI: The keyword help states:
"Work table records are built in 64-byte sections. Approximately the first 60 bytes of each record are taken up by record specification information (or a record header). That is, if a record is 14 bytes long, it will be stored in two 64-byte sections, using the first 60 bytes as a record header. If the record is 80 bytes long, it will fit into three 64-byte sections. The first part contains 60 bytes of header information plus the first 4 bytes of the record. The second section contains 64 bytes of the record. And the last section contains the remaining record bytes."
Sounds like they were built for the 1990s :-)
I recall doing some observation of work-table memory usage many moons ago. According to the dicumentation, every time -l (lower case ell) clicked upwards it was supposed to add 1k of RAM (-l is a "soft" limit that can increase beyond the default if need be; these increases show up in the .lg file, so it is fairly easy to see and test this). But in fact 4k was allocated and only 1k actually used. At least that was what it looked like from the "glance" POV. I reported it. SFAIK nothing ever got changed. I stopped using workfiles.
"dicumentation"
Freudian slip?
I don't suppose you ever got a bug number?
How did you know that only 1K was being used? From the description, with small records a lot more than the space for the data itself would be required, but by the same token, for small amounts of data it is still a lot less than a TT.
It was a very long time ago... I probably had one once but it has long since been lost.
I was not comparing the size of the data added to the RAM usage, but rather the size of -l, which is supposedly in 1k units, to the incremental memory usage, which was clearly incrementing in 4k chunks rather than 1k. Combine the two problems and you get a really big problem. But I no longer ever use work files, so the current state of affairs is unknown to me (although I would guess that nothing has been touched in that area for a very long time).
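Just to put rough numbers on "combine the two problems" (purely back of the envelope, taking the 60-byte header from the keyword help and the 4k-allocated-per-1k-reported behaviour I thought I saw at the time; none of this is verified against a current release):

/* Illustrative only: 1000 records of 14 bytes of payload each.
   Section layout per the keyword help; 4x allocation per the old observation. */
DEFINE VARIABLE numRecs   AS INTEGER NO-UNDO INITIAL 1000.
DEFINE VARIABLE dataBytes AS INTEGER NO-UNDO INITIAL 14.
DEFINE VARIABLE sections  AS INTEGER NO-UNDO.
DEFINE VARIABLE stored    AS INTEGER NO-UNDO.

sections = TRUNCATE((59 + dataBytes) / 64, 0) + 1.   /* 2 sections per record */
stored   = numRecs * sections * 64.                  /* 128,000 bytes stored  */

MESSAGE "payload" numRecs * dataBytes "bytes,"
        "stored" stored "bytes,"
        "allocated (if 4k per 1k)" stored * 4 "bytes".

So roughly 14k of actual data could end up costing on the order of half a megabyte of RAM, if both observations were right.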
What were you using to look at incremental memory usage?
"glance"
It was an HPUX system.