-t parameter

Posted by Rob Fitzpatrick on 28-Feb-2012 12:16

By default, client temp files in *nix are unlinked.  I am curious about the reason(s) for this, and whether the conditions that justified this design decision still exist today.

In other words, there is an obvious downside to not using -t: I can't see my temp files.  Is there an upside, e.g. temp file I/O performance, and does it still outweigh the downside on today's hardware and file systems?

All Replies

Posted by Tim Kuehn on 28-Feb-2012 12:32

Unlinked files don't need to be cleaned up at the end of a session, regardless of whether it completes successfully or crashes.

Posted by Rob Fitzpatrick on 28-Feb-2012 13:09

Understood.  Personally, I like to see evidence of prior crashes that have happened, rather than having the evidence neatly swept under the rug.

Posted by Tim Kuehn on 28-Feb-2012 13:51

Then put -t in all your .pf's and you're fine.

I would note that there are counter-examples where you don't want temp files hanging around. One is when TDE (Transparent Data Encryption) is in use, because temp-table data is stored in the temp files in unencrypted form.

Posted by egarcia on 28-Feb-2012 15:19

You can add the -t parameter to the $DLC/startup.pf file if you want all the Progress executables to use it.

You need to make sure that the permissions on the temporary files are correct. (You need to do this whether or not you use -t.)

If you are using -t, you may also want to specify the temporary directory (-T) so that the files are created in a location other than the working directory and are kept organized.

This is especially true if you are running a large number of OpenEdge sessions.
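For example, a startup.pf along these lines would do it (the -T path here is just illustrative; pick a directory that suits your site):

```
# $DLC/startup.pf -- applies to all Progress executables
# keep client temp files visible instead of unlinking them
-t
# write temp files to a dedicated directory, not the working directory
-T /usr/tmp/oe
```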

I hope this helps.

Posted by Rob Fitzpatrick on 28-Feb-2012 17:15

Thanks Edsel.  Aside from the functional differences mentioned above, can you advise whether there are also performance differences between using or not using -t?

Posted by egarcia on 29-Feb-2012 16:15

You are welcome.

The temporary files are unlinked using the unlink() function on UNIX/Linux (as seen in strace), and access to the files is done with the file handle already obtained, the same as with linked files.

I do not think that operating systems treat unlinked files differently either.
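As a rough sketch of that pattern (this is just the generic POSIX open-then-unlink idiom, not actual Progress source): the name disappears from the directory immediately, but I/O through the already-open descriptor keeps working, and the kernel reclaims the space when the last descriptor closes, even if the process crashes.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Open-then-unlink: create a temp file, drop its directory entry,
 * then keep using it through the descriptor.  Returns the number of
 * bytes read back through the handle, or -1 on failure. */
ssize_t unlink_demo(void)
{
    char path[] = "/tmp/demoXXXXXX";
    int fd = mkstemp(path);          /* create the file, keep a handle */
    if (fd == -1)
        return -1;

    if (unlink(path) == -1) {        /* remove the name right away */
        close(fd);
        return -1;
    }

    /* The name is gone: a fresh open() by path now fails... */
    if (open(path, O_RDONLY) != -1)
        return -1;

    /* ...but reads and writes on the existing descriptor still work. */
    const char msg[] = "still usable after unlink";
    if (write(fd, msg, strlen(msg)) == -1) {
        close(fd);
        return -1;
    }

    char buf[64] = {0};
    lseek(fd, 0, SEEK_SET);
    ssize_t n = read(fd, buf, sizeof buf - 1);

    close(fd);                       /* last reference: space reclaimed */
    return n;
}
```

This is also why the downside Rob mentions exists: with no directory entry, there is nothing for ls to show and nothing left behind after a crash.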

I did notice that the man page for unlink(2) (man -s 2 unlink) mentions a bug with NFS (I do not know whether it is current or in which implementations it exists).

I do not think that people would put temporary files on a network file system, because there is a performance hit (network is slower than local disk and RAM). However, if that is your scenario, it might be better to use the -t option to avoid running into the NFS bug with unlink() on Linux.

Temporary files should be written to a local file system (device) for best performance.

I hope this helps.

Posted by Thomas Mercer-Hursh on 29-Feb-2012 18:08

You do understand, though, that since the filenames contain the process ID, each new run creates a new set of files.  Not cleaning them up can leave you with a *lot* of files from long-dead sessions, especially if you use -T to put them into a shared area for a large user count.
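A periodic sweep of the -T directory mitigates this. Here is a minimal C sketch, assuming the usual client temp-file prefixes (srt, lbi, dbi) and an arbitrary one-week age threshold; verify the prefixes against your own -T directory and make sure no live session is still using a file before deleting it:

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

#define MAX_AGE (7 * 24 * 3600)   /* one week, in seconds -- adjust to taste */

/* Common OpenEdge client temp-file prefixes (assumed; check your site). */
static const char *prefixes[] = { "srt", "lbi", "dbi", NULL };

/* Remove stale temp files from dir; returns the count removed, -1 on error. */
int sweep(const char *dir)
{
    DIR *d = opendir(dir);
    if (!d) {
        perror(dir);
        return -1;
    }

    time_t now = time(NULL);
    int removed = 0;
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        int match = 0;
        for (const char **p = prefixes; *p; p++)
            if (strncmp(e->d_name, *p, strlen(*p)) == 0) { match = 1; break; }
        if (!match)
            continue;                         /* not a temp file we know */

        char path[4096];
        snprintf(path, sizeof path, "%s/%s", dir, e->d_name);

        struct stat st;
        if (stat(path, &st) == 0 && now - st.st_mtime > MAX_AGE) {
            if (unlink(path) == 0) {          /* old enough: delete it */
                printf("removed %s\n", path);
                removed++;
            }
        }
    }
    closedir(d);
    return removed;
}
```

Call sweep() with your -T directory from a small driver or a cron-launched wrapper.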

Posted by Rob Fitzpatrick on 01-Mar-2012 08:41

Actually, I think it is a thread ID that makes up part of the file name, but your point is well taken.

My intent is to use -t with -T pointed to a shared location, on a temporary basis, to gauge the amount of temp space required day-to-day in a particular environment.  Then I will set up a tmpfs file system with an appropriate amount of space and point -T at that (without -t).

This thread is closed