It seems that when I transfer a file from the AppServer to a remote client, it works fairly well with standard Progress ABL code and the RAW datatype. However, for some reason, when the file on the AppServer exceeds 3.5 GB, the client-side file just keeps growing uncontrollably. So if the file on the AppServer was 4 GB, the client-side file will grow until it consumes all the disk space on the client. (Maybe this is a known bug.)
I am using the RAW datatype to store 30 K chunks of data, sending them back to a client machine, and reconstructing the file there. I need to know if this is the best way, or if there is a faster or better way, to transfer large files from an AppServer to a remote client.
Here is the code that pulls the data from the file:
import stream ghStream unformatted grChunk no-error.

/* If there is an error, report that there are no more chunks. */
if error-status:error = yes then
    assign
        lMoreChunks        = no
        error-status:error = no.
else
    lMoreChunks = yes.

/* Create a temp-table record and store the raw chunk in it. */
create gttRaw.
assign gttRaw.RawData = grChunk.
Then a snippet of client-side code does this:
/* Get the first temp-table record. */
find first gttRaw.

/* Write the raw data out to the local file via the stream. */
put stream strAsFile control gttRaw.RawData.
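Neither snippet shows where the streams are opened. For context, here is a minimal sketch of the surrounding setup, assuming binary, no-conversion streams and a preset 30 K chunk length (the file names and the LENGTH call are my assumptions, not part of the original code):

define stream ghStream.
define stream strAsFile.
define variable grChunk as raw no-undo.

/* Server side: open the source file as a binary input stream and preset
   the RAW variable's length so each IMPORT reads one 30 K block. */
input stream ghStream from value("/data/source.bin") binary no-convert.
length(grChunk) = 30000.

/* Client side: open the target file as a binary output stream; each
   received chunk is then written with PUT ... CONTROL as shown above. */
output stream strAsFile to value("c:\temp\target.bin") binary no-convert.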
Can anyone suggest other ways to transfer a file? Passing a 4 GB file as a MEMPTR might cause problems on the AppServer side, depending on whether there is enough RAM to hold the data.
Actually, I was hoping for more ABL-related ideas or solutions. The client and AppServer are on a LAN, and I would just like to copy the files through the AppServer. I don't want to do OS copies, as I don't want to open file shares. I am wondering if there are some MEMPTR tricks or LOB-copying techniques that work across the AppServer boundary.
You don't check lMoreChunks, and IMPORT failures don't always throw errors the way you'd expect. Check ERROR-STATUS:NUM-MESSAGES to make sure there's no error / warning condition to handle. LENGTH(grChunk) < 30K may also be a better check for being at the last record.
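A sketch of how those checks might look in the read loop (variable names follow the original snippet; the 30000-byte chunk size is assumed):

import stream ghStream unformatted grChunk no-error.

/* IMPORT does not always raise ERROR-STATUS:ERROR on failure,
   so inspect the message count as well. */
if error-status:error or error-status:num-messages > 0 then
    lMoreChunks = no.
/* A short read also signals the last chunk. */
else if length(grChunk) < 30000 then
    lMoreChunks = no.
else
    lMoreChunks = yes.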
COPY-LOB is another option, although you would need more overhead to keep track of where you are in the file.
This is how I would implement it.
Server side (copyfile.p):

define output parameter mFile as memptr no-undo.

copy-lob from file "<file>" to mFile.

finally:
    /* Release the memptr storage once the procedure exits. */
    set-size(mFile) = 0.
end.

Client side:

define variable mFileCopy as memptr no-undo.

/* hServer is the connected AppServer handle. */
run copyfile.p on hServer (output mFileCopy).
copy-lob from mFileCopy to file "<filepath>".
set-size(mFileCopy) = 0.
hth
Thank you all for your help. This gives me something else to work with.
The COPY-LOB approach sounds really simple, but it would seem that if the file was 10 GB, I would need to make sure there was enough RAM on the system to handle that efficiently.
Tim, thanks for the insight into IMPORT statement failures; I was not aware that reaching the end of the file could make it fail this way. The error-status check seems to work really well for files under 3.5 GB, but past that, there must be something in the way Progress handles the file that causes it to raise only a warning or some other message.
[quote user="dana"]The COPY-LOB approach sounds really simple, but it would seem that if the file was 10 GB, I would need to make sure there was enough RAM on the system to handle that efficiently.[/quote]
Worth looking into some of the less commonly used options of COPY-LOB in that case:
1. COPY-LOB FROM ... STARTING AT <offset> FOR <length> can be used to chunk the data on the server side (where the chunk size can be much larger than the 30 K that the RAW type allows).
2. COPY-LOB TO FILE ... APPEND can be used on the client side to glue things back together (see the sketch below).
Just beware that STARTING AT is broken in 10.2B.
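A bare-bones sketch of that pairing (the file names, the 10 MB chunk size, and the loop bookkeeping are assumptions; last-chunk and error handling are omitted):

define variable mChunk  as memptr no-undo.
define variable iOffset as int64  no-undo initial 1.
define variable iChunk  as int64  no-undo initial 10485760. /* 10 MB */

/* Server side: pull one chunk out of the file at the current offset. */
copy-lob from file "/data/source.bin" starting at iOffset for iChunk to mChunk.

/* Client side: append each received chunk to the local copy. */
copy-lob from mChunk to file "c:\temp\target.bin" append.

set-size(mChunk) = 0.
iOffset = iOffset + iChunk.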
We've had Java heap-space issues with large temp-tables over the AppServer boundary. Just mentioning it as a potential failure point for that solution. It's easy enough to increase the heap space on the AppServer, but if the heap is blown it brings down the AppServer, so test first.
My code does that currently, but only one record at a time. The problem I have is mostly when it gets to the end of the source file. When the file is larger than 3.5 GB, something fails and it just keeps sending data to the client, and the client-side file grows until it consumes all disk space. I think that, as Tim Kuehn mentioned, the error-status check may not always work, since the end of the file does not trigger the status to be changed for some reason. Maybe there is a Progress bug that causes this to fail with large files.
32-bit Windows.
And are you using an OpenEdge version that has 4GL large file support? I don't recall when we added that.
-gus
> On Sep 28, 2015, at 12:04 PM, Brian K. Maher wrote:
>
> Dana,
>
> Is your client process 32 or 64 bit?
>
> Brian
Another test would be to have the client read a big file and write what it read to another file.
If it doesn't stop, then there's your smoking gun.
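A minimal sketch of such a loopback test, assuming the same IMPORT/PUT pattern as the original code (file names are placeholders). If big-output.bin grows past the size of big-input.bin, the read loop never sees end-of-file:

define stream sIn.
define stream sOut.
define variable grChunk as raw     no-undo.
define variable lMore   as logical no-undo initial yes.

input  stream sIn  from value("c:\temp\big-input.bin")  binary no-convert.
output stream sOut to   value("c:\temp\big-output.bin") binary no-convert.

/* Preset the chunk size, as in the original 30 K approach. */
length(grChunk) = 30000.

do while lMore:
    import stream sIn unformatted grChunk no-error.
    if error-status:error or error-status:num-messages > 0 then
        lMore = no.
    else
        put stream sOut control grChunk.
end.

input  stream sIn  close.
output stream sOut close.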
This is just an OS file being copied from the AppServer to a remote client; it has nothing to do with the database. And we do have large files enabled.
I don't think it's a client-side issue. I am pretty sure it's the AppServer process that has the problem, and it is difficult for us to just change it to 64-bit.
In that case, spin up an AppServer on your test machine and run the test there. If you get the same results in the AppServer session, then there's your smoking gun.
Thanks Tim, I think that there are some really good leads on here that I can run with.
I could, but if the end result is that 64-bit does fix it, it still will not help me: we cannot move all of our customers to 64-bit right away, and I need to be able to work with a 32-bit AppServer for now, unfortunately. We are currently on 11.3.3 and planning to go to 11.6 (32/64-bit) at the beginning of next year.
Knowing that a 64-bit platform would fix the problem does help, because then you know where the issue is and can proceed accordingly.
In such a situation, a potential solution would be to have an external program break the file up into 3 GB chunks, ship the chunks over the wire to the client, then have the client use an external program to re-assemble the chunks into one file again.
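A rough sketch of that approach, assuming a split utility is available on the server (e.g. GNU coreutils split) and plain cmd.exe on the Windows client; all paths and file names are illustrative:

/* Server side: break the big file into 3 GB pieces
   (big.bin.aa, big.bin.ab, ...) before shipping them. */
os-command silent value('split -b 3000m "big.bin" "big.bin."').

/* Client side, once the pieces have arrived: glue them back together. */
os-command silent value('copy /b "big.bin.aa" + "big.bin.ab" "big.bin"').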
It looks like you put everything in one temp-table with a BLOB field and pass that through the wire. This 'streams' to the client, so it might work, although you do load the complete file on the AppServer side (it will probably end up landing on disk again if there is not enough memory).
Instead, you can try to change that one-shot call into a protocol-like sequence: have multiple calls from the client passing the file path, start offset, and chunk size, and have the server return the data chunk and the remaining byte count (it only needs to send that when the first chunk is requested). The client should keep asking for new chunks until completed: more calls for larger files. You might try playing with the buffer size to find a good balance.
I am not going to be using COPY-LOB after reading this post: community.progress.com/.../55684
So it seems this is really an issue with any file over 2 GB, regardless of whether I use RAW or COPY-LOB. I was trying COPY-LOB, but I am trying to figure out how to determine when I am getting to the end of a file.
define input parameter pcSourceFile as character no-undo.
define input-output parameter FileOffsetValue as int64 no-undo initial 1.
define output parameter pcfilechunk as longchar.
define output parameter lMoreChunks as logical no-undo initial yes.

define variable iCurrentChunksize as int64 no-undo.

&GLOBAL-DEFINE FILECHUNK-SIZE 120000

error-status:error = false.
iCurrentChunksize = {&FILECHUNK-SIZE}.

/* Copy directly from the file to a longchar to be sent back to the client. */
copy-lob from file pcSourceFile starting at FileOffsetValue for iCurrentChunksize to pcfilechunk no-error.

/* Increment the offset by the amount actually read. */
FileOffsetValue = FileOffsetValue + length(pcfilechunk).

/* Check to see if we are at the end of the file. */
if length(pcfilechunk) < {&FILECHUNK-SIZE} or error-status:error = true then
    lMoreChunks = no.
I am so close with this one. The file copies really fast with this method. I just need to figure out why it stops 28 K before the end of the file.
Ugh, I hit the 4 GB limit on COPY-LOB. Is there a T-fix for this for 11.3.3?
So I guess I may have to submit a support ticket and get the obvious answer back: that COPY-LOB is limited to 4 GB in 32-bit Windows on OpenEdge 11.3.3.
So the code below works up to 4 GB. This is the server code:
define input parameter pcSourceFile as character no-undo.
define input parameter piMaxfileoffset as int64 no-undo.
define input-output parameter FileOffsetValue as int64.
define output parameter pcfilechunk as longchar.
define output parameter lMoreChunks as logical no-undo initial yes.

define variable iCurrentChunksize as int64 no-undo.

&GLOBAL-DEFINE FILECHUNK-SIZE 120000

error-status:error = false.
iCurrentChunksize = {&FILECHUNK-SIZE}.

/* Shrink the last chunk so we never read past the end of the file. */
if piMaxfileoffset - FileOffsetValue < {&FILECHUNK-SIZE} then
    iCurrentChunksize = (piMaxfileoffset - FileOffsetValue) + 1.

/* Copy directly from the file to a longchar to be sent back to the client. */
copy-lob from file pcSourceFile starting at FileOffsetValue for iCurrentChunksize to pcfilechunk no-error.

/* Increment the offset. */
FileOffsetValue = FileOffsetValue + iCurrentChunksize.
log-manager:write-message(string(FileOffsetValue)).

/* Check to see if we are at the end of the file. */
if length(pcfilechunk) < {&FILECHUNK-SIZE} or error-status:error = true then
    lMoreChunks = no.
The client code:
run get-end-of-file-offset in hGetUpdate (input pcASFileName,
                                          output iendoffileoffset).

/* While there are more chunks of data available: */
do while lMoreChunks:

    /* Get the next chunk from the persistent procedure.
       (An earlier version used a chunk temp-table record instead of a
       plain output parameter, because the system was crashing when
       sending back a raw field.) */
    run get-lob-from-largefile in hGetUpdate (input pcASFileName,
                                              input iendoffileoffset,
                                              input-output iFileOffset,
                                              output lcfileChunk,
                                              output lMoreChunks).

    if lMoreChunks then
        copy-lob from lcfileChunk to file pcClientFileName append no-convert.
end.

/* Write the final partial chunk, if any. */
if length(lcfileChunk) > 0 and length(lcfileChunk) < {&FILECHUNK-SIZE} then
    copy-lob from lcfileChunk to file pcClientFileName append no-convert.
FYI: the documented maximum LOB size is 1 GB.
> On Nov 11, 2015, at 4:04 PM, dana wrote:
>
> So it seems this is really an issue with any file over 2 GB, regardless of whether I use RAW or COPY-LOB. [...]
The 4 GB size limit is from the Windows FAT file system... you should use NTFS on Windows or ext3 on Linux...